Being a short history of religion in the United States, wherein an argument is advanced as to its indispensable role…
1. The City Upon a Hill
No other country in history enables us to examine more closely the interaction among religious belief, culture, and public life than the United States of America. To begin with, this is the first and only instance in which we can watch a major Christian community coming into being by the light of documentary sources. America did not mysteriously emerge in prehistory. Its early evolution was not prescriptive. It was born in the clear light of recorded history and its first Christian inhabitants were only too anxious to explain what they were doing, and why.
In a way, the first American settlers were like the ancient Israelites. They saw themselves as active agents of divine providence. They were a chosen people. It was commonly believed among 16th-century English Protestants, and especially by those connected to the sea—ship captains, explorers, navigators, ocean traders, and adventurers—that the English were not only chosen by God, an elect people, but were given by Him a special mission to spread the Gospel overseas. As one of these nautical ideologues, John Davys, put it:
There is no doubt but that we of England are this saved people, by the eternal and infallible presence of the Lord predestined to be sent into these Gentiles in the sea, to those isles and famous kingdoms, there to preach the peace of the Lord: for are not we only set upon Mount Zion to give light to all the rest of the world?. . . It is only we, therefore, that must be these shining messengers of the Lord, and none but we.
This same view was put, in more practical terms, by the poet John Donne, Dean of St. Paul’s, in his sermon to the shareholders of the Virginia Company, who were proposing to settle America in 1622. He told them that, by their energy and investment, “You shall have made this island, which is but as the suburbs of the old world, [into] a bridge, a gallery to the new; to join all to that world that shall never grow old, the kingdom of heaven.” In true biblical style, he assured them: “You shall add persons to this kingdom, and to the kingdom of heaven, and add names to the books of our chronicles, and to the book of life.”
In the event, the settlement of America by English Protestants was to be more than this. It was to be the greatest, indeed the only, realized experiment in post-European Christianity. The birth of Protestant America was a deliberate and self-conscious act of church-state perfectionism. For most of the early settlers, especially in New England, were dissenting groups fleeing the Anglican England of James I where, in their view, the Protestant “reformation” had failed. John Winthrop, the first governor of the Massachusetts Bay Company (formed in 1628), made it clear why they were leaving England and going to America:
All other churches of Europe are brought to desolation, and it cannot be but the like judgment is coming upon us. . . . This land [England] grows weary of its inhabitants, so as man which is the most precious of all creatures, is here more vile and base than the earth they tread upon. . . . We are grown to the height of intemperance in all excess of riot. . . . The fountains of learning and religion are so corrupted . . . that most children, even the best wits and of fairest hopes, are perverted, corrupted, and utterly overthrown by the multitude of evil examples and the licentious government of those seminaries.
Winthrop and his colleagues believed that previous colonies had failed because they were “carnal and not religious.” They dismissed with scorn the Pilgrims who had settled at Plymouth ten years earlier—they were a mere separatist group, looking for a hole to hide in. Winthrop’s band of Puritans stayed in the church to reform it from within. They were aggressive and ambitious, a direct challenge to the existing secular and ecclesiastical government.
Winthrop started a diary on Easter Monday 1630 as his ship, the Arbella, was off the Isle of Wight in England, and like the ancient Israelites, he noted in it all the signs that God supported the enterprise. In a shipboard sermon to the settlers, he explained that they had entered into a collective covenant with God. They were chosen and exemplary: “For we must consider that we shall be as a City upon a Hill, the eyes of all people are upon us.”
As the ship neared America, Winthrop excitedly recorded the signs that offered parallels with the Hebrew Bible: “There came a smell off the shore like the smell of a garden. . . . There came a wild pigeon into our ship, and another small land bird.” Winthrop was delighted to discover and record that, for 300 miles around the point of settlement, the Indians “are swept away by the smallpox . . . so God hath hereby cleared our title to this place.”
It is very significant that the earliest European inhabitants of what was to become the United States should regard themselves as under the special watch and care of God and, as such, designated to be seen by the rest of the world as performing an apostolic role. This image of the City upon a Hill was in time to become secular—the United States was to become the first model republic, the first mass democracy, the first benign world power, and so on. But in its origins the idea was religious. Here was the opportunity to build the first true Christian commonwealth, in the light of the Hebrew Bible and the New Testament, in accordance with apostolic example, and in congruence with the signs given by the Lord Himself.
In 1645, Winthrop delivered a speech setting out the ideology of this holy commonwealth and defining the limitation imposed by religion on the liberties of the people. Man, he insisted, had
a liberty to that only which is good, just, and honest. . . . This liberty is maintained and exercised in a way of subjection to authority, it is the same kind of liberty whereof Christ hath made us free. . . . If you stand for your natural corrupt liberties, and will do what is good in your own eyes, you will not endure the least weight of authority . . . but if you will be satisfied to enjoy such civil and lawful liberties, such as Christ allows you, then you will quietly and cheerfully submit unto that authority which is set over you . . . for your good.
In Winthrop’s opinion, therefore, there could be no question of religion being a “private” matter. It was a public matter, because religious belief, society, and state were inseparable. Government was not just a secular institution but a religious one, too. Another architect of America, William Penn, wrote in 1682 in his Preface to the Frame of Government of Pennsylvania:
Government seems to me a part of religion itself, a thing sacred in its institution and end. . . . It crushes the effects of evil and is as such (though lower yet) an emanation of the same divine power that is both author and object of pure religion, government itself being otherwise as capable of kindness, goodness, and charity as a more private society.
It is important, I think, to grasp that these early settlements of English-speaking America were both individual and collective contracts with God to set up a church-state, not just a religious settlement. The earliest, the Mayflower Compact of 1620, reads:
We, whose names are underwritten . . . having undertaken, for the glory of God, and advancement of the Christian faith, and honor of our king and country, a voyage to plant the first colony in the northern parts of Virginia, do by these presents solemnly and mutually in the presence of God, and of one another, covenant and combine ourselves together in a civil body politic, for our better ordering and preservation, and furtherance of the ends aforesaid.
The church was formally constituted in exactly the same manner, as at Salem in 1629:
We covenant with the Lord and with one another; and do bind ourselves in the presence of God, to walk together in all His ways, according as He is pleased to reveal Himself unto us in His blessed word of truth.
Hence the men who founded the English colonies in America drew no distinction between church and state: both were at one in enabling the City upon a Hill to be built.
But not all men were equal. The system of church government was congregationalist. The settlers were divided into church members and nonmembers. Only those of righteous life and true belief were admitted as members, and their righteousness was determined by the minister of religion. And, initially at least, only church members had legislative powers. The system was approvingly described in 1673 by Urian Oakes, later president of Harvard:
According to the design of our fathers and the frame of things laid by them, the interests of righteousness in the commonwealth and holiness in the churches are inseparable. . . . To divide what God hath joined . . . is folly in its exaltation. I look upon this as a little model of the glorious kingdom of Christ on earth. Christ reigns among us in the commonwealth as well as in the church and hath his glorious interest involved and wrapped up in the good of both societies respectively.
It followed that the civil authority had the right to punish religious offenses as well as what we would call secular ones. In Winthrop’s Massachusetts, all, whether freemen or not, had to swear an oath of loyalty to the government and undertake to submit to its authority, whether wielded in religious or in secular matters. The magistrates were officially described as “nursing fathers” (in a reference to Moses’ self-description in Numbers); they were to tackle heresy, schism, and disobedience among the adult “children” of the colony, who were to be “restrained and punished by civil authority.” And this was exerted in ruthless manner to uphold religious truth and public decorum.
In August 1630, for instance, Governor Winthrop had Thomas Morton accused and convicted of “erecting a maypole and reveling.” Morton’s house was burned down and he was put in the stocks while he awaited execution of his sentence: deportation to England. The following June, Winthrop recorded in his journal that Philip Ratcliffe was whipped and had both his ears cut off for “most foul, scandalous invectives against our churches and government.” Sir Christopher Gardiner was banished for what was described as “bigamy and papism.”
In theory, then, New England might have evolved like Calvin’s Geneva into a fierce fundamentalist theocracy, only on a much bigger scale. But there were many reasons why this did not and could not happen.
In the first place, the colonists were too independent-minded, unruly, and divided among themselves to be docile citizens of a theocracy. Governor Winthrop discovered this for himself, being deposed from office, reelected, and deposed again as public opinion shifted.
Indeed, religious disputes, about the precise mechanism whereby men and women were saved, were the first expression of politics in America. Anne Hutchinson held that man could not prepare himself for election as one of God’s chosen by good works—rather, God bestows grace on the elect through direct revelation. Winthrop thought this a leveling, anti-intellectual creed which, by denying the effectiveness of practical good works, challenged the discipline of the colony. But Governor Vane, who had succeeded Winthrop in office, agreed with Mrs. Hutchinson’s antinomian views. The church in Boston itself was antinomian, the rest orthodox. The first historian of the colony wrote: “It began to be as common here to distinguish between men, by being under a covenant of grace or a covenant of works, as in other countries between Protestants and papists.”
The result was the first real contested election on American soil, which took place on May 17, 1637, at a crowded outdoor meeting in Cambridge. “There was great danger of a tumult that day,” it was recorded; the antinomians “grew into fierce speeches, and some laid hands on others. But seeing themselves too weak [in numbers] they grew quiet.”
In the event, they were defeated by the orthodox party. Governor Winthrop was restored. He promptly expelled Mrs. Hutchinson and others, and had 75 of their supporters disenfranchised and disarmed. He also sent details of his action back to England so that “all our godly friends might not be discouraged from coming to us.”
But not even Governor Winthrop thought it possible to prevent colonists who disagreed with his form of orthodoxy from creating their own religious enclaves. The year before Anne Hutchinson was expelled, another dissident, Roger Williams, who took the view that church and state ought to be separated entirely, was secretly warned by Winthrop (then out of office) that a plan was afoot to deport him to England. Winthrop “privately wrote to me,” said Williams, to slip off from Salem and “to steer my course to Narragansett Bay and the Indians, for many high and heavenly public ends, encouraging me, from the freeness of the place from any English claims and patents.”
Thus it was that Williams founded Providence, Rhode Island, termed by the orthodox “the sewer of New England.” Writing of his new colony, Williams laid down: “I desired it might be a shelter for persons distressed for conscience.” In 1644 he published his defense of religious freedom, The Bloody Tenet of Persecution for the Cause of Conscience Discussed, and his new instrument of government declared that “the form of government established in Providence Plantations is Democratical, that is to say, a government held by the free and voluntary consent of all, or the greater part, of the free inhabitants.” Williams listed the various laws and penalties for specific transgressions but added:
And otherwise than thus, what is herein forbidden, all men may walk as their consciences persuade them, every one in the name of his God. And let the saints of the Most High walk in this colony without molestation, in the name of Jehovah their God, for ever and ever.
This was confirmed by royal charter in 1663:
No person within the said colony, at any time hereafter, shall be in any wise molested, punished, disquieted, or called in question, for any differences in opinion in matters of religion, and who do not actually disturb the civil peace of our said colony; but that all . . . may from time to time, and at all times hereafter, freely and fully have and enjoy his and their own judgments and consciences, in matters of religious concernments.
This was the first commonwealth to make religious freedom, as opposed to a mere degree of toleration, the principle of its existence, and to make this a reason for separating church and state. Its existence of course opened the doors to the Quakers and the Baptists, and indeed to missionaries from the Congregationalists of the North and the Anglicans of the South.
Hence, in addition to the principles of religious freedom and separation of church and state, Rhode Island introduced the practice of religious competition. In a sense, it became the prototype for the future United States. But it is important to note that, as such—and particularly in its secularism—it was a minority breakaway from the prevailing orthodoxy. And even Williams’s secularist doctrine was couched in the phraseology of Christian fundamentalism.
The ease with which Roger Williams broke from orthodoxy to found his own free colony illustrates the central geographical fact of American religious history: the country was too big to enable any form of orthodoxy to triumph—its very vastness made heterodoxy possible.
Strict Calvinists were not the only believers who felt persecuted in 17th-century England. The Roman Catholics had many more legal disabilities to bear, and one of them, George Calvert, Lord Baltimore, secured from Charles I a charter—granted in 1632, just after his death, to his son Cecilius—to found a colony on their behalf, the future Maryland, named after Charles I’s papist wife, Henrietta Maria.
Father Andrew White, who kept the first diary of the settlement, adopted the same providential approach as Governor Winthrop. He too saw his fellow settlers as chosen people under divine dispensation, and Chesapeake Bay as the promised land. “This bay is the most delightful water I ever saw,” he enthused. The land was “sweet, firm, and fertile.” There were plenty of fish, fine woods of walnuts, oaks, and cedars, “salad-herbs and suchlike,” strawberries, raspberries, mulberry vines, rich soil, “delicate springs of water,” partridge, deer, turkeys, geese, ducks—and delightful squirrels, eagles, and herons—“the place abounds not only with profit but pleasure.” Moreover, Maryland, being halfway between the extremes of Virginia and New England, “has a middle temperature between the two and enjoys the advantages, and escapes the evils, of each.” Thus God, he concluded, had been generous to his Catholic Englishmen, and had indeed set them up in a land of milk and honey.
However, the Maryland Catholics, though they might feel they were chosen people too, had no intention of repeating the competitive persecutions of the Old World. When in the 1640’s the outbreak of civil war in England threatened to involve their colony in religious strife, they deliberately joined forces with persecuted Puritans to reaffirm a policy of toleration. In 1649 they passed a solemn Act Concerning Religion, later known as the Toleration Act, which not only affirmed the right of all Christians to practice their faith in peace but—in a distant adumbration of political correctness—imposed heavy fines on those who jeered at the religion of others by using offensive terms, “such as heretic, schismatic, idolator, puritan, independent, Presbyterian, popish priest, Jesuit, Jesuited papist, roundhead, and separatist.”
There were, it is true, limits even to the toleration of the Marylanders: you could also be fined for denying the existence of God, repudiating the Trinity, or blaspheming against Christ, and a Jew might get into trouble (though few did). But the Toleration Act was an amazing document for its time, since it laid down that henceforth no Christian “would be any ways troubled, molested, or discountenanced for or in respect of his or her religion nor in the free exercise thereof”; and no man or woman could be “any way compelled to the belief or exercise of any other religion against his or her consent.”
Of course, all was not clear sailing. The principles of freedom and toleration have always had to be fought for not once but again and again, even in America. Maryland fell first into the hands of Puritan extremists, then came under the sway of Anglican bigots, who made acceptance of Anglican oaths a prerequisite for office, so that Catholic descendants of the original settlers had to convert in order to sit in the assembly. But Anglican rule was superficial, especially in Baltimore which for much of the 18th century was the fastest-growing city in America, with Christians of all denominations—and many sects way beyond any customary definition of Christianity, as well as Jews—practicing their religion freely.
Again, the principle of vastness enabled the Quakers, who were fiercely persecuted in parts of New England—it was customary to strip their women to find marks of witchcraft—and were sometimes bullied even in Maryland, to found their own colony in Pennsylvania.
Pennsylvania has sometimes been called the key colony in American history. It was the last great flowering of Puritan political innovation. At its heart was the City of Brotherly Love. From Philadelphia’s harbor, routes led west to Pittsburgh at the gateway to the Ohio Valley, and along the valleys to the back-country of the South, so that it was the national crossroads. Hence it became, simultaneously, the center of Quaker influence throughout the world, a stronghold of Presbyterianism, the headquarters in America of the Baptists, an Anglican center, a place where many important German religious sects—Moravians, Mennonites, Lutherans, German Reformed, etc.—established their headquarters, and yet a place where large numbers of Catholics and Jews were tolerated.
Pennsylvania was also the home of the first African Methodist Episcopal Church, the earliest independent black body in America, as well as an area where deists, the earliest Unitarians, and even humanists could feel safe and at home. Philadelphia was the seat of the American Philosophical Society and, granted its religious and nonreligious composition, it is no accident that it was the city which gave birth to the libertarian and separatist principles embodied in the Declaration of Independence and the United States Constitution.
But contentiousness and vastness aside, there was a third reason why America evolved into a multireligious society: it was, in the deepest sense, non-clerical. This was the real reason why it could never have become a theocracy—the clergy never had the power to impose one.
Right from the start, and even in New England, America gave the clergy themselves less actual authority than they enjoyed under any other government in the Western world at the time. The minister’s power lay in determining church membership—and stopped there. The churches were always managed by laymen. Hence the religious establishment, such as it was, was popular, not hieratic. This was the foundation of the distinctive American religious tradition. There was never any sense of division in law between laymen and clerics, between those with spiritual privileges and those without—no jealous juxtaposition, and therefore no confrontation, of a secular with an ecclesiastical world.
In other words, America was born Protestant and did not have to become so through revolt and struggle. It was not built on the remains of an all-embracing Catholic church, or a Protestant establishment. It had no clericalism or anticlericalism.
In all these respects America differed profoundly from the European world which had been shaped by the principles of St. Augustine, who had set down “Compel them to come in” as the totalitarian principle of a compulsory and inclusive Christian society. America, by contrast, had a traditionless tradition, starting afresh with a set of biblical principles, taken for granted, regarded as self-evident, as the basis for a common national creed.
These assumptions inevitably became more moral than doctrinal as the American colonies expanded and enriched themselves with more human raw material from Europe. Economic factors reinforced the drift toward diversity, religious liberty, and the noninterference of the state in religious matters. The later waves of immigrants had not, for the most part, experienced “conversion” and “saving grace.” They tended, increasingly, to be a mere cross section of ordinary Englishmen (and later of Ulstermen and Scottish Presbyterians).
Even in New England this fact of life had to be accommodated. In 1662 its synod declared that baptism was sufficient for church membership (not for full communion). This was denounced as a “Half-Way Covenant” by the elect, the beginning of the end of a “pure” church—and calamitous Indian attacks were interpreted as a sign of divine displeasure.
In 1679 it was decided to make “a full inquiry . . . into the cause and state of God’s controversy with us.” A “Reforming Synod” was set up and reported: “That God hath a controversy with His New England people is undeniable, the Lord having written His displeasure in dismal characters against us.” A new covenant was produced; but historical events moved against the elect. The triumph of Anglicanism in England weakened Calvinism across the Atlantic, not least by imposing a franchise based on property rather than on church membership. And Puritan church leadership was discredited by the witchcraft mania at Salem in 1692, and weakened by the powerful backlash of public remorse that followed it.
Then, too, the merchant element of Boston, who loathed the strict interpretation of Scripture, especially the commercial restrictions derived from the Pentateuch, published in 1699 a manifesto on “broad and catholic” lines, which accorded full status to any who simply professed Christian belief.
In 1702 Cotton Mather, spokesman for the old elect, published his Magnalia Christi Americana, documenting “Christ’s great deeds in America,” and felt he was bound to conclude: “Religion brought forth prosperity, and the daughter destroyed the mother. . . . There is danger lest the enchantments of this world make them forget their errand into the wilderness.” But by this time the wilderness had become an increasingly rich country, and the original Calvinist monopoly of New England had gone for good. The liberal elements captured Harvard in 1707; the orthodox had already countered by founding Yale, which settled at New Haven nine years later.
Yet the collapse of the idea of a total Christian society in America did not lead to secularism. In America as a whole, religion continued to be the dynamic of society and history. The difference was that Christianity now became a voluntary movement, or series of movements, rather than a compulsory framework. And it was these movements which determined the shape of America’s constitutional and social development.
Here we come to the most important point of all. The multiplicity of America’s religious structure, and the continuance of the millenarian ideal, gave religious revivalism the opportunity to act as a unifying, national force. The revival known as the Great Awakening, which began in 1719 and continued for the next quarter-century and more, was the formative event in the history of the United States, preceding the movement for independence and making it possible.
The Great Awakening crossed all religious and sectarian boundaries, made light of them indeed, and turned what had been a series of European-style churches into American ones. It would almost be true to say that it created an ecumenical American type of religiosity which affected all groups: certainly it gave a distinctive American flavor to a wide range of denominations. This might be summarized under the following heads: evangelical vigor; a tendency to downgrade the clergy; little stress on liturgical correctness, or on parish boundaries; and above all an emphasis on individual spiritual experience. Its key text was Revelation 21:5, “Behold, I make all things new,” which was also the text for the American phenomenon as a whole.
The Great Awakening was a much more complicated phenomenon than similar European movements such as John Wesley’s revival in England, since it combined rumbustious and unsophisticated mass evangelism with the ideas of the 18th-century Enlightenment. Revivals on both sides of the Atlantic shared a distrust of doctrine, a stress on morality and ethics as opposed to dogma, an ecumenical spirit. The Awakeners agreed with Wesley when he declared: “I . . . refuse to be distinguished from other men by any but the common principles of Christianity. . . . Dost thou love and fear God? It is enough! I give thee the right hand of fellowship!” But Jonathan Edwards, the most influential of the Great Awakeners, was also in the mainstream of the intellectual tradition of Erasmus. He stressed reason and natural law, rather than theological distinctions, as the guide to Christian belief and conduct.
Edwards said he read John Locke’s Essay Concerning Human Understanding with more pleasure “than the most greedy miser finds when gathering up handfuls of silver and gold from some newly-discovered treasure.” But he brought to Locke’s presentation of the case for reasonable Christianity the warmth and emotionalism it lacked. That might be termed providential. Locke was writing after a successful revolution—the Glorious Revolution of 1688—and Edwards before one, at a time when unifying and energizing emotions were needed to create a popular will for change.
Much of Edwards’s writing seems to strike political as well as theological notes. He sought in his preaching to arouse what he called “affections,” which he defined as “that which moves a person from neutrality or mere assent and inclines his heart to possess or reject something.” This was the message of his widely read book, A Treatise Concerning Religious Affections (1746), where Edwards argued strongly that the deeds of men were caused by God’s will. There was thus no essential difference between a religious and a political emotion—both were God-directed. A man was born again not just as an active Christian but as an active libertarian and republican.
Rationalist Edwards might be, but he was also a millenarian. He wrote that in human history, “all the changes are brought to pass . . . to prepare the way for the glorious issue of things that shall be when truth and righteousness shall finally prevail.” Men must know the hour when God “shall take the kingdom,” and he looked toward “the dawn of that glorious day.” In his last work on original sin (1758), he prophesied that there was no reason why God “may not establish a constitution whereby the natural posterity of Adam, proceeding from him, much as the buds or the branches from the stock or root of a tree, should be treated as one with him.”
It was against this exciting eschatological background that the Great Awakening took off, being reanimated whenever it showed signs of flagging by the advent of new and spectacular orators, such as Wesley’s friend George Whitefield, known as “the Grand Itinerant.” He seems to have had the gift of tongues—German converts said they could get his message even though they understood little or no English. He preached, as he put it, “with much Flame, Clearness, and Power. . . . Dagon falls daily before the Ark.” When he left Boston, he was succeeded by a native evangelist, Gilbert Tennent, who caused a jealous Anglican to record bitterly: “People wallowed in snow, night and day, for the benefit of his beastly brayings.”
Another Awakener who helped to “blow up the divine fire lately kindled” was James Davenport, a Yale graduate, who was arrested and judged mentally disturbed at one point when he called for articles of luxury such as wigs, cloaks, and rings, as well as many books on religion, to be thrown into the fire. It was the beginning of American personal evangelism, which continues to this day. Not everyone liked it, then as now. Its roots were in the country areas, where it helped to democratize society and aroused opposition to the restrictions of royal government. But it also took fire in the towns, where hearers fainted, wept, shrieked, and generally gave vent to their “affections”—almost exactly as they do at the more extreme kind of popular revivalist services today. The noises, trances, and contortions, so far as we can see, were identical.
It was the marriage between the rationalism of the American elites touched by the 18th-century Enlightenment and the spirit of the Great Awakening among the masses which enabled the popular enthusiasm thus aroused to be channeled into the political aims of the Revolution—itself soon identified as the coming eschatological event. Neither force could have succeeded without the other.
Nor is the American Revolution conceivable without the religious background. The difference between the American Revolution and the French Revolution is that the American Revolution, in its origins, was a religious event, whereas the French Revolution was an antireligious event. As John Adams was to put it long afterward, in 1818: “The Revolution was effected before the war commenced. The Revolution was in the minds and hearts of the people; a change in their religious sentiments of their duties and obligations.”
We must remember that until the mid-18th century at least, America was a collection of disparate colonies with little contact with one another and often (as in all Latin America then and later) having more powerful links with cities and economic interests in Europe than with neighboring colonies. Religious evangelism was the first continental force, an all-American phenomenon which transcended colonial differences, introduced truly national figures, and made colonial boundaries seem unimportant. Whitefield was the first “American” public figure to be well-known from New Hampshire to Georgia. When he died in 1770, there was comment from the entire colonial press. Thus the form of ecumenicalism based on religious enthusiasm preceded, and shaped, political unity.
Of course religious unity was not complete. The British authorities were encouraged to resist demands for change, and later for independence, by the existence of a powerful loyalist sentiment which, they fondly believed, represented majority opinion. This centered on the Anglican church, whose clergy, and especially whose missionary clergy, hated the Great Awakening and tried hard, with some success, to isolate their congregations from it. They likewise hated the Revolution which it bred. A leading New York City Anglican, Charles Inglis, called the rising “the most causeless, unprovoked, and unnatural [rebellion] that ever disgraced any country.”
Numerically, Anglicanism was not at all negligible. It had 406 churches. But the Congregationalists had 749 and the Presbyterians 495. Culturally, these two groups were dominant. They were separated chiefly by their forms of church governance and their areas of settlement. But they saw eye to eye on most things, especially the question of political independence, and they worked closely together when they wanted. If you include the Congregationalists with the Presbyterians, then King George III’s remark that the American Revolution was essentially “a Presbyterian rebellion” makes a lot of sense.
To be sure, they were joined by other groups. The Dutch and the German sects felt no loyalty to the English crown and it posed no crisis of conscience to them to resist authority—so they rebelled. The Catholics, too, felt no loyalty to the House of Hanover, which maintained a penal regime against their coreligionists in England. The Baptists and the Methodists, who had expanded rapidly during the Great Awakening, joined the rebel armies in vast numbers. The Quakers would not exactly fight, but Benjamin Franklin persuaded them and other pacifists to serve as a kind of civil-defense force. The Anglicans were the hard core of loyalism—and there were not enough of them.
So American freedom and independence were brought about essentially by a religious coalition, which provided the rank and file of a movement led by a more narrowly based elite of Enlightenment men. John Adams, who had lost his original religious faith, nonetheless recognized the essential role played by religion in unifying the majority of the people behind the independence movement and giving them common beliefs and aims:
One great advantage of the Christian religion is that it brings the great principle of the law of nature and nations, love your neighbor as yourself, and do to others as you would have that others should do to you—to the knowledge, belief, and veneration of the whole people. Children, servants, women, and men are all professors in the science of public as well as private morality. . . . The duties and rights of the man and the citizen are thus taught from early infancy.
In effect John Adams, though a secularist and a nonchurchman, was implying that the form of Christianity which had developed in America was a kind of ecumenical and unofficial state religion, a religion suited by its nature, not by any legal claims, to be given recognition by the republic because it was itself the civil and moral creed of republicanism.
Hence, though the Constitution and the Bill of Rights made no provision for a state church—quite the contrary—there was an implied and unchallenged understanding that America was a religious country, that the republic was religious not necessarily in its forms but in its bones, that it was inconceivable that it could have come into existence, or could continue and flourish, without an overriding religious sentiment pervading every nook and cranny of its society. This religious sentiment was based on the Scriptures and the Decalogue, was embodied in the moral consensus of the Judeo-Christian tradition, and manifested itself in countless forms of mainly Christian worship.
Since American religion was a collection of faiths, coexisting in mutual tolerance, there was no alternative but to create a secular state entirely separated from any church. But there was an unspoken understanding that, in an emotional sense, the republic was not secular. It was still the City upon a Hill, watched over and safeguarded by divine providence, and constituting a beacon of enlightenment and an exemplar of conduct for the rest of the world.
This is what President Washington clearly intended to convey in the key passage of his farewell address of 1796. Though he was careful to observe the constitutional and secularist forms, the underlying emotion was plainly religious in inspiration. He implied, indeed, that the voice of the American people was a providential one, and that in sustaining him both as their general and their first President, and enabling the republic to be born and to survive and flourish, it had been giving expression to a providential plan:
Profoundly penetrated by this idea, I shall carry it with me to my grave, as a strong incitement to unceasing vows that heaven may continue to you the choicest token of its beneficence—that your union and brotherly affection may be perpetual—that the free Constitution, which is the work of your hands, may be sacredly maintained—that its administration in every department may be stamped with wisdom and virtue—that in fine the happiness of the people of these states, under the auspices of Liberty, [may be preserved] by so careful a preservation and so prudent a use of this blessing, as will acquire to them the glory of recommending it to the applause, the affection, and adoption of every nation, which is yet a stranger to it.
In Washington’s world view, then, the city was still upon a hill, the new nation was still elect, its creation and mission were providential, or as he put it, “sacredly maintained,” under heaven, the recipient of a unique “blessing” in the historical plan of the deity for humanity. That is not so far from Governor Winthrop’s view, though so much had happened in the meantime; and it would continue to be the view of the American majority for the next century and a half.
2. The Moral Theology of the Melting Pot
Alexis de Tocqueville in his Democracy in America, published in 1835, said that the first thing which struck him in the United States was the attitude of, and toward, the churches. At first he found it almost incredible:
In France I had almost always seen the spirit of religion and the spirit of freedom pursuing courses diametrically opposed to each other: but in America I found that they were intimately united, and that they reigned in common over the same country.
He added: “Religion . . . must be regarded as the foremost of the political institutions of [the United States]; for if it does not impart a taste for freedom, it facilitates the use of free institutions.” And Americans, he concluded, held religion “to be indispensable to the maintenance of republican institutions.”
Many Americans of religious bent saw American religion as much more than this, much more than a merely defensive force. It was progressive. In the century after the Great Awakening, the two formative sects of American Protestantism, Presbyterianism and Congregationalism, ceased to be dominant, and—in numbers, at any rate—the Wesleyans and the Baptists took over.
In New England, under the impact of the Enlightenment, many well-educated Presbyterians became Unitarians, and it was the Unitarians of New England who created the so-called American Renaissance, centered around the North American Review (1815) and the Christian Examiner (1824). These were periodicals whose editors included William Emerson (the father of the poet and essayist), Richard Henry Dana, James Russell Lowell, Henry Adams, and Edward Everett Hale. Harvard—with a staff including John Quincy Adams, Henry Wadsworth Longfellow, Lowell, and Oliver Wendell Holmes—was Unitarian in spirit.
Unitarianism was to a great extent the religion of the elite, critics joking that its preaching was limited to “the fatherhood of God, the brotherhood of Man, and the neighborhood of Boston.” Actually, it traced its pedigree not so much to the Pilgrim Fathers as to Erasmus himself, who saw true Christianity in full alliance with the Renaissance. William Ellery Channing summed up this argument for progressive religion:
Christianity . . . should come forth from the darkness and corruption of the past in its own celestial splendor and in its divine simplicity. It should be comprehended as having but one purpose, the perfection of human nature, the elevation of men into nobler beings.
In this progressive, religious process, the prime instrument was the American republic itself. That was what Jonathan Edwards had predicted in 1740:
It is not unlikely that this work of God’s spirit, that is so extraordinary and wonderful, is the dawning or at least the prelude of that glorious work of God so often foretold in Scripture, which in the progress and issue of it shall renew the world of mankind. . . . And there are many things which make it probable that this work will begin in America.
To the Unitarian elite it was obvious that the work had already begun. In fact, the old Calvinist theory of the elect nation infused American patriotism in the 19th century. As Longfellow put it:
Sail on, O Union, strong and great!
Humanity with all its fears,
With all the hopes of future years,
Is hanging breathless on thy fate.
Within the framework of this 19th-century version of the chosen people, or what was termed “the favoring providence” at work by using America as its “melting pot”—a new nation being mingled and molded from the debris of the old—American Christianity and the republic it infused acquired their modern characteristics.
America’s most typical churches tended to look back from the 19th century straight to the New Testament, dismissing the totalitarianism of the Middle Ages and the age of religious wars as nightmares which had little to do with true religion. They refused to associate Christianity with compulsion in any form. The assumption of the voluntary principle, the central tenet of American Christianity, was that the personal religious convictions of individuals, freely gathered in churches and acting in voluntary associations, would gradually and necessarily permeate society by persuasion and example. Thus the world was seen primarily in moral terms.
This became a dominant factor whether America was rejecting the Old World and seeking to quarantine itself from it—a concept epitomized by the Monroe Doctrine and invoked as recently as the Cuban missile crisis—or whether America was embracing the world and seeking to reform it. It was characteristic of the American state first to reject espionage on moral grounds, then to undertake it through the Central Intelligence Agency, a moralistic institution which perhaps had less in common with its Soviet equivalent than with St. Ignatius of Loyola’s Society of Jesus.
In American religion, the reflective aspect of Christianity was subordinated, almost eclipsed. The old medieval emphasis on the perfection of God, and of man’s mere contemplation of Him, was replaced by the idea of God as an exacting and active sovereign, and man’s energetic service in His employment. It was not the Christian’s duty to accept the world as he found it but to seek to make it better, using all the abundant means God had placed at his disposal. There was little mysticism, little sacramentalism or awe before the holy. There was no place for tragedy, dismissed as an avoidable accident, and its consequences as remediable.
American religion, in its formative period, owed nothing to writers like Pascal. For essential purposes it had no detailed theology at all. All agreed that theological matters were points on which various religions and sects happened to differ. This aspect of religion was important to individuals but not to society and the nation, since what mattered to them was the deep Christian consensus on ethics and morality. So long as Americans agreed on morals, theology could take care of itself. Morals became the heart of religion, whether for Puritans or revivalists, orthodox or liberal, fundamentalist or moralist—the eccentric hot-gospeler at the street corner shared in this consensus as wholeheartedly as the Episcopalian prelate.
Moreover, this was a consensus which even non-Christians, deists, and rationalists could share. Non-Christianity, preeminently including Judaism, could thus be accommodated within the national framework of American Christianity. It could even accommodate Roman Catholicism. Both American Catholicism and American Judaism became heavily influenced by the moral assumptions of American Protestantism, because both accepted its premise that religion (meaning morality) was essential to democratic institutions.
Now here we arrive at a crucial stage in the development of American Christianity. In most earlier Christian societies, education had been a monopoly of the clergy, and in America, too, the Pilgrim Fathers saw education and faith as inseparable. Communal schools were established in Boston as early as 1635, and in 1647 the Massachusetts General Court passed a law requiring towns within its jurisdiction to set up public schools. Harvard had been founded eleven years earlier.
These institutions were run entirely by religious bodies, were instruments of the church, and were designed to serve religion. The pattern varied but the principle was the same throughout the early states. Virginia set up in 1661 the future College of William and Mary in these terms:
Whereas the want of able and faithful ministers deprives us of those great blessings and mercies that always attend upon the service of God, be it enacted that for the advance of learning, education of youth, supply of the ministry, and promotion of piety, there be land taken up or purchased for a college and free school.
This tendency was reinforced during the 18th-century Great Awakening.
However, at about the same time, American Christian rationalists were making their own contribution. Benjamin Franklin’s Proposals Relating to the Education of Youth in Pennsylvania (1749) put forward a scheme to treat religion as one subject in the curriculum and relate it to character-training. Similar theories were advanced by Jonathan Edwards when president of Princeton. This was the solution adopted when the modern American public-school movement, directed by Horace Mann, came into existence in the 19th century. The state took over financial responsibility for the education of the new millions by absorbing all primary and secondary schools, but not (after the Dartmouth Decision of 1819) higher education, where independent colleges survived side by side with state universities.
Thus the true American public school was non-sectarian from the very beginning. But it was not nonreligious. Mann thought that religious instruction should be taken “to the extremest verge to which it can be carried without invading those rights of conscience which are established by the laws of God, and guaranteed by the constitution of the state.”
What the schools got was not so much non-denominational religion as a kind of generalized Protestantism based on the Bible. As Mann wrote in his final report:
That our public schools are not theological seminaries is admitted. . . . But our system earnestly inculcates all Christian morals; it founds its morals on the basis of religion; it welcomes the religion of the Bible; it allows it to do what it is allowed to do in no other system, to speak for itself.
Hence in the American system, the school supplied Christian “character-building” and the parent at home topped it off with sectarian trimmings.
There were disadvantages in this system. The Reverend F. A. Newton expressed one of them on behalf of some Episcopalians:
A book upon politics, morals, or religion, containing no party or sectarian views, will be apt to contain no distinctive views of any kind, and will be likely to leave the mind in a state of doubt or skepticism, much more to be deplored than any sectarian bias.
Another objection, as America increasingly took on the characteristics of a secular state—which it had been by definition from the start—and as it accepted millions of immigrant Catholics, Jews, and other non-Protestants, was the association of moral character-building in the schools with specifically Protestant labels. Therefore, gradually, and particularly in the cities, religion as such was eased out of the schools. The Presbyterian leader Samuel T. Spear put it thus in 1870:
The state, being democratic in its Constitution, and consequently having no religion to which it does or can give any legal sanction, should not and cannot, except by manifest inconsistency, introduce either religious or irreligious teaching into a system of popular education which it authorizes, enforces, and for the support of which it taxes all the people in common.
But something had to supply the cultural machinery by which the immigrant millions were turned into Americans; and, Spear added, the schools had to have some spiritual foundation. Since the state was not Christian but republican, republicanism should constitute that foundation. The solution was neat because in effect republicanism was itself based upon the old Protestant moral and ethical consensus, which was what the schools already taught—the two concepts stood or fell together. So in this manner the American way of life began to function as the operative creed of the public schools and it was gradually accepted as the official philosophy of American state education.
Horace M. Kallen, writing in July 1951 in the Saturday Review under the title “Democracy’s True Religion,” summarized the theory: “For the communicants of the democratic faith, it is the religion of and for religion. For being the religion of religions, all may freely come together in it.” When in 1952 J. Paul Williams published What Americans Believe and How They Worship, he spelled out the ideology in more detail:
Americans must come to look upon the democratic ideal . . . as the Will of God, or, if they please, of Nature. . . . Americans must be brought to the conviction that democracy is the very Law of Life . . . government agencies must teach the democratic idea as religion. . . . Primary responsibility for teaching democracy might be given to the public school. . . . The churches deal effectively with but half the population, the government deals with all the population. . . . It is a misconception to equate separation of church and state with separation of religion and state.
It was on the basis of such assumptions, imperfectly carried out though they might be, that the two great non-Protestant religions of America, the Catholic and the Jewish, became to some extent Protestantized, thereby aligning the political ideals and practices of the United States with a broad-based form of Christianity.
The system could work granted two preconditions. The first was what might be termed a high level of religiosity in the nation. Religious enthusiasm must be continually replenished to make the ethical and moral ideology seem important. This was supplied by the American system of creedal plurality. Having abandoned the advantages of unity, the Americans sensibly turned to exploiting the advantages of diversity—and these proved to be considerable. It was the very competitiveness of rival religions in the United States, acting by analogy to the free-enterprise system, which kept the demands of the spiritual life constantly before the people, producing an atmosphere of perpetual revival.
This was especially true along the expanding frontier and in the areas of 19th-century settlement. The Second Great Awakening, starting in the 1790’s, continued until the middle decades of the 19th century. The Wesleyans and Baptists spawned multitudes of cults and subcults, and the camp meeting became, for several decades, the characteristic form of American religious experiment.
There was nothing exactly new in this form of religious enthusiasm. In the time of the ancient Israelites, prophets and other God-exalted preachers had led multitudes into the wilderness for instruction and worship. The apostles themselves had first learned to “speak with tongues” at the time of Pentecost. The camp meeting reproduced the goings-on of the 2nd-century Montanists. But it was only in America that this type of religious performance involved literally millions of believers and became a permanent part of the national religious heritage.
A great and typical meeting was held at Cane Ridge in Kentucky in August 1801. The Presbyterian pastor who organized it, Barton Stone, left a description of what happened. What is interesting about these exercises, as he calls them, of the “saved,” is that they had already taken place, in an identical manner, during the first Great Awakening 70 years before, and can still be witnessed, at meetings of American charismatics today, nearly 200 years later.
For instance, there was the falling exercise: “The subject of this exercise would, generally with a piercing scream, fall like a log on the floor, earth or mud, and appear as if dead.” Then there were the jerks: “When the head alone was affected, it would be jerked backward and forward, or from side to side, so quickly that the features of the face could not be distinguished.” This led to the barking exercise: “A person affected by the jerk would often make a grunt or bark from the suddenness of the jerk.” Then there was the dancing exercise, or solo automatic dancing, while “the smile of heaven shone in the countenance of the subject.” The laughing exercise produced “loud, hearty laughter. . . . The subject appeared rapturously solemn, and his laughter excited solemnity in saints and sinners. It is truly indescribable.” Then there was a running exercise and a singing exercise, “not from the mouth or nose but entirely in the breast, the sounds issuing from thence—such music silenced everything.”
In Europe, sects which practiced such antics had always been closely watched by the authorities, ecclesiastical and secular, and sometimes harassed, dispersed, and persecuted. In America they were allowed to manifest themselves, for the first time in history, virtually without supervision by the state or by a state church.
There were hundreds of such politico-religious communities in 19th-century America. As Emerson wrote to Thomas Carlyle in 1840: “We are all a little wild here with numberless projects of social reform. Not a reading man but has a draft of a new community in his waistcoat pocket.”
One of the most rational such communities was Brook Farm in West Roxbury, Massachusetts, founded by a Boston Unitarian, George Ripley. It included the novelist Nathaniel Hawthorne on its agriculture committee, produced books, pottery, and furniture as well as its own food supplies, and ended in bankruptcy. (Hawthorne later lampooned the community in The Blithedale Romance.)
Many Central and East European sects also established themselves successfully, and some still flourish today. Others mutated. A German pietist group settled at Harmony, Pennsylvania, in 1804, practiced confession, opposed procreation and marriage, and dogmatized itself out of existence. Another, the Oneida Community of New York State, combined socialism with free love and brought up its children communally in a sort of kibbutz, but stumbled on a method of making steel traps, eventually becoming a successful Canadian corporation and losing its faith.
Other sects became gnostic—that is, they claimed to have discovered secret codes, texts, or systems of knowledge which provided keys to salvation. They tended to part company with Christianity since they replaced the Scriptures with arcane documents of their own.
In about 1827, for example, Joseph Smith, Jr. was given by the angel Moroni a new bible in the form of golden plates inscribed in “reformed Egyptian” hieroglyphics with a set of seer-stones, called by the biblical name Urim and Thummim, with which to read them. The Book of Mormon, as Smith translated it, was put on sale in 1830, after which the angel removed the plates. Smith was “providentially” murdered by a mob in Illinois in 1844, after which Brigham Young was able to take the sect on a great exodus to Salt Lake City in 1847.
Under the First Amendment, “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof . . . ,” sects did not become unlawful if they merely offended Christian dogma. But Christian morals and social customs—and their expression in common and statutory law—were a different matter, and Mormonism was in continual battle with the state until it renounced polygamy in 1890.
Gnosticism was thus perfectly acceptable within the American voluntary Christian society, but only provided that it genuflected to bedrock Christian morality, in which monogamous marriage was a central axiom.
Catholicism, too, was tolerated subject to a similar qualification. It was not so much forced to change itself as to develop a highly defensive posture, which to some extent came to the same thing.
To many Protestants, a number of Catholic institutions infringed the moral consensus in spirit, even if they did not actually defy it legally, as Mormon polygamy did. One example was convents of nuns, the object of a campaign by the Protestant Vindicator, founded in 1834.
In that same year a Boston Protestant mob burned down an Ursuline convent and those responsible were acquitted—Protestant juries seem to have swallowed the rumor that Catholic convents were subterranean dungeons for the murder and burial of illegitimate children conceived by the lascivious nuns.
The next year saw the publication in Boston of Six Months in a Convent, and in 1836 Maria Monk’s Awful Disclosures of the Hotel Dieu Nunnery in Montreal, written by a group of New York anti-Catholics. This was followed by Further Disclosures and The Escape of Sister Frances Patrick, Another Nun from the Hotel Dieu Nunnery in Montreal. Maria Monk herself was arrested for picking pockets in a brothel and died in prison in 1849. But her book had sold 300,000 copies by 1860 and is still in print in various versions today.
There were also fears of a Catholic political and military conspiracy, fears which the Pilgrims had brought with them in the Mayflower. In the 1830’s, Lyman Beecher’s Plea for the West revealed a plot to take over the Mississippi Valley, the Emperor of Austria being in league with the Pope to promote it. Samuel F.B. Morse, the inventor of the telegraph—who had harbored anti-Catholic feelings ever since he had failed during a visit to Rome to doff his hat to a papal procession and one of the Swiss guards had knocked it off—argued that the reactionary kings and emperors in Europe were deliberately contriving to swamp Protestant America by forcing Catholics to immigrate there. This conspiracy theory was made more plausible by the fact that, during the 1850’s, America’s population rose by some 35 percent, more than a third of the increase being due to immigration, and much of that immigration consisting of Catholics.
The Catholic issue came into national politics with the emergence of the secretive ultra-Protestant American party, whose members’ stock answer to questions about it, “I know nothing,” led to their popular title, the Know-Nothings. Maria Monk’s book was termed “the Uncle Tom’s Cabin of Know-Nothingness” and the party became a national force in the mid-1850’s before most of its members were absorbed into the new Republican party.
It was notable that, whereas the Republican party became identified with the antislavery issue, the Roman Catholic hierarchy tended to remain noncommittal about it and took virtually no part in the crusade. Catholics tended to vote Democratic, and still do, not because they ever owned slaves but because of a lingering association in their minds of the Republican party with Protestant extremism. And conversely, until John F. Kennedy’s election in 1960, it was argued that a Catholic could never enter the White House because of associations in the minds of too many Protestants with popish conspiracies against religious and political freedom.
The fact that Catholics mostly sat on their hands during the long controversy about slavery, which was primarily a religious one, brings us to the second precondition needed to make the American politico-religious system work.
There was no difficulty about the first precondition—the high level of religiosity. But the second precondition was a level of agreement on certain basic moral and ethical notions as interpreted in public institutions. It was here that the system broke down, for American Christianity could not agree about slavery.
The dilemma had been there right from the start, since 1619 marked the beginning both of representative government and of slavery. But it had slowly become more acute, as the identification of American moral Christianity with democracy made slavery seem an offense against God and against the nation alike.
On the other hand, were not Southern slave owners Christians, too? Indeed they were. There had been a strong antislavery movement among the churches in the South, particularly the Baptists and Quakers, in the 1770’s. It had petered out because the churches came to terms with Southern practice. But this did not, and could not, remove religion from the slavery question. The doctrinal position might be arguable, but the moral position—which was what mattered—became increasingly clear to the majority of American Christians.
The Civil War can be described as the most characteristic religious episode in the whole of American history since its roots and causes were not political and economic so much as religious and moral. It was a case of a moral principle tested to destruction: the destruction not of the principle itself but of those who opposed it. And in the process Christianity itself was placed under almost intolerable strain.
The movement which finally destroyed American slavery was religious in a number of different senses. It reflected a degree of extremism in the Northern Christian sects. William Lloyd Garrison, a Baptist converted to activism by Quakers, who founded the Boston Public Liberator, wrote in its first issue: “I will be as harsh as truth and as uncompromising as justice. On this subject I do not wish to think, to speak, or write, with moderation.” Extremists on this issue had many links with revivalism, which gave the abolitionist cause a nationwide platform and constituency.
Then, too, there was the theology of abolition which, as might be expected, was primarily a moral theology. In 1845 Edward Beecher published a series of articles on what he termed the nation’s “organic sin” of slavery, which invested the abolitionist cause with a whole series of evangelical insights. Again, Uncle Tom’s Cabin itself had a background in religion and especially moral theology: it was an improving tract as well as a piece of political propaganda.
There was little internal opposition to slavery among white Southern Christians, and a notable closing of ranks after the black preacher Nat Turner led the Virginia slave revolt of 1831, in which 57 whites were killed. Revivalism, which in the North strengthened the cause of abolition, was put to exactly the opposite use in the South, where it was, if anything, even more powerful. The South Carolina Baptist Association produced a biblical defense of slavery in 1822, and in 1840-41 John England, Bishop of Charleston, provided a similar one for Southern white Catholics. There were standard biblical texts on alleged Negro inferiority, patriarchal and Mosaic acceptance of servitude, and St. Paul on obedience to masters. Both North and South could and did hurl texts at each other.
Having split, the churches promptly went to battle on opposing sides when the war actually came. Leonidas Polk, Bishop of Louisiana, entered the Confederate army as a major-general and announced: “It is for constitutional liberty, which seems to us to have fled for refuge, for our hearthstones and our altars that we strike.” Thomas March Clark, Bishop of Rhode Island, preached to the militia on the other side: “It is a holy and righteous cause in which you enlist. . . . God is with us . . . the Lord of Hosts is on our side.”
To judge by the hundreds of sermons and specially composed church prayers which have survived on both sides, ministers were among the most fanatical of the combatants from beginning to end. The churches played a major role in dividing the nation, and it is probably true that it was the splits in the churches which made a final split in the nation inevitable.
In the North, such a charge was often willingly accepted. Granville Moody, a Northern Methodist, boasted in 1861: “We are charged with having brought about the present contest. I believe it is true we did bring it about, and I glory in it, for it is a wreath of glory round our brow.”
Southern clergymen did not make the same boast, but it is true that of all the various elements in the South they did the most to make a secessionist state of mind possible. Southern clergymen were also particularly responsible for prolonging the increasingly futile struggle. Both sides claimed vast numbers of “conversions” among their troops and a tremendous increase in churchgoing and prayerfulness as a result of the fighting.
The clerical interpretation of the war’s progress was equally dogmatic and contradictory. The Southern Presbyterian theologian Robert Lewis Dabney blamed what he termed “the calculated malice” of the Northern Presbyterians and called on God for a “retributive providence” which would demolish the North. Henry Ward Beecher predicted that the Southern leaders would be “whirled aloft and plunged downward for ever and ever in an endless retribution.” The New Haven theologian Theodore Thornton Munger declared that the Confederacy had been “in league with Hell,” and the South was now “suffering for its sins” as a matter of “divine logic.” He worked out that General McClellan’s much-criticized vacillations were an example of God’s masterful cunning since they made a quick Northern victory impossible and so ensured that the South would be much more heavily punished in the end.
By contrast, there were the doubts, the puzzlings, and the agonizing efforts of Abraham Lincoln to rationalize God’s purpose. His evident and total sincerity shines through his speeches and letters and private musings as the war took its terrible toll. Although he was theoretically a Baptist, we have his wife’s word for it that he never truly belonged to any church. It is not clear to this day whether he believed in a personal God in the traditional sense. Yet he declared himself “satisfied that when the Almighty wants me to do or not to do a particular thing, He finds a way of letting me know it.” He thus waited, as the cabinet papers show, for providential guidance at certain critical points of the war. He never claimed to be the personal agent of God’s will, as everyone else seemed to be doing. But he wrote:
If it were not for my firm belief in an overruling providence it would be difficult for me, in the midst of such complications of affairs, to keep my reason in its seat. But I am confident that the Almighty has His plans and will work them out; and . . . they will be the wisest and the best for us.
When asked if God was on the side of the North, he replied: “I am not at all concerned about that, for I know the Lord is always on the side of the right. But it is my constant anxiety and prayer that I and this nation should be on the Lord’s side.” Hence his determination throughout to do the moral thing: “I am not bound to win but I am bound to be true. I am not bound to succeed, but I am bound to live up to the light I have.”
In thus arguing within himself, Lincoln, it seems to me, incarnated and embodied the national, republican, and democratic morality which the American religious experience had brought into existence. He caught exactly the same mood as President Washington in his farewell message to Congress, which I quoted above, and that is one reason why his conduct of the events leading up to war, and the war itself, have seemed so unerringly in accord with the national spirit, so just, so American.
Unlike Governor Winthrop and the first colonists, Lincoln did not see the republic as the Elect Nation because that implied it was always right, and the fact that the Civil War had occurred at all indicated that America was fallible. But, if fallible, it was also anxious to do right. America, as he described it, was “the almost-chosen people,” and the war was part of God’s scheme, a great testing of the nation by an ordeal of blood, showing the way to charity and thus to rebirth.
The majority of Northern Christians took a more triumphalist view of events. They came to look upon the Civil War not as a Christian defeat, in which the powerlessness and contradictions of the faith had been exposed, but as an American-Christian victory, in which Christian egalitarian teaching had been triumphantly vindicated against renegades and apostates.
Such a view fit neatly into a world vision of the Anglo-Saxon races raising up the benighted and ignorant dark millions, and bringing them, thanks to a “favoring providence,” into the lighted circle of Christian truth; thus the universalist mission of Christ would be triumphantly completed. The Civil War was the prelude to an enormous American missionary effort throughout what we now call the third world, followed in due course by actual military intervention in some places.
This agenda still had a Protestant moral coloring to it. The thinking behind it was conveyed in Leonard Woolsey Bacon’s History of American Christianity, published in 1897 at the height of the era of imperialism:
By a prodigy of divine providence the secret of the ages [that a new world lay beyond the sea] had been kept from premature disclosure. . . . If the discovery of America had been achieved . . . even a single century earlier, the Christianity to be transplanted to the Western world would have been that of the Church of Europe at its lowest stage of decadence [before being purged by the Reformation].
So he saw “great providential preparations as for some ‘divine event’ still hidden behind the curtain that is about to rise on our new century.”
No wonder, then, that in the McKinley-Theodore Roosevelt era, the Protestant churches were often vociferous supporters of American expansion, especially at the expense of the crumbling remains of the Spanish empire, which they saw as a God-determined process by which “Romish superstition” was being replaced by “Christian civilization.” President McKinley justified the American occupation of the Philippines in Christian evangelical terms:
I am not ashamed to tell you, gentlemen, that I went down on my knees and prayed to Almighty God for light and guidance that one night. And one night late it came to me this way. . . . There was nothing left for us to do but to take them all and educate the Filipinos and uplift and civilize and Christianize them, and by God’s grace do the very best we could by them, as our fellow men for whom Christ also died.
However, by then this kind of Christian, more specifically Protestant, triumphalism was already beginning to ebb in Europe. There, the 1880’s saw Christianity, in terms of church attendance, at its maximum extent per head of the population. Thereafter there was slow but progressive decline for most churches.
It was a different matter in the United States. Church attendance continued to increase throughout the first half of the 20th century. But these decades also demonstrated the limits of specifically Protestant power. Protestant churches campaigned against prizefighting as a moral evil, and succeeded in banning the Jack Johnson-Jim Jeffries fight in California. But the campaign as a whole failed.
The crusade against alcohol was even more ambitious. If the churches could overcome Negro slavery, it was argued, why should not the slavery of alcohol be overcome as well? By 1900, thanks to largely Protestant pressure groups, 24 percent of the population lived in dry territory. By 1906, this had been extended to 40 percent. By 1917, there were 29 dry states and over half the American people lived on dry territory. In 1920, total prohibition became a fact.
But what looked at first like the greatest victory for American evangelicalism turned instead into its greatest defeat. The legislation was undiscriminating and too comprehensive. It bore the marks of an unreasoning religious fanaticism and it ignored much sympathetic and wise advice. It not merely excluded but alienated major religious groups such as the Catholics, many of whom, perhaps perversely, saw Prohibition as an attack on their religion. Hence the movement failed to make Prohibition stick, and it was not merely defeated but routed. This was a disaster for organized American Protestantism. It was accompanied and followed by a rapid decline in its domestic political power.
Traditional Protestant moral theology seemed to have no answer for the Depression. It regarded the New Deal and similar interventionist schemes as unscriptural and sinful. Except in the South, most Protestant ministers and periodicals favored the Republicans and opposed Franklin D. Roosevelt. In 1936, over 70 percent of 21,606 Protestant ministers polled voted for Roosevelt’s Republican opponent, Alf Landon, who also got the votes of a large majority of all Protestant church members. He lost by a landslide nonetheless. Thus the middle decades of the 20th century marked a Protestant political retreat before a Democratic coalition in which Jews and Catholics and secular progressives all had increasing roles to play.
At the same time, however, those attending church regularly continued to increase. In 1910, the proportion of the population affiliated with all churches was calculated at 43 percent. It was the same figure in 1920. By 1940 it had risen to 49 percent, and this was followed by an impressive postwar “revival” lifting the percentage to 55 in 1950 and a remarkable 69 percent in 1960.
Even so, the detachment of American popular religion from its doctrinal basis continued. Ordinary churchgoers, for instance, showed themselves less and less inclined to read the New Testament. Religion seemed to be less and less about suffering and repentance and more and more about happiness. As long ago as the 1830’s Tocqueville had complained of American preachers: “It is often difficult to ascertain from their discourse whether the principal object of religion is to procure eternal felicity in the other world or prosperity in this.” Now, more than a century later, religion and churchgoing served almost as a national talisman to ensure that economic expansion would continue into the 1950’s and 1960’s—an insurance policy against the end of affluence.
This period was also marked by the adoption of psychological concepts to induce tranquility and felicity, seen by some critics as a debased modern form of mysticism. Americans in vast numbers read Bishop Fulton J. Sheen’s Peace of Soul (1949), Norman Vincent Peale’s Guide to Confident Living (1948) and The Power of Positive Thinking (1952), and Billy Graham’s Peace with God (1953).
These were variations on harmonial and gnostic themes which had long flourished in the United States, producing such phenomena as Christian Science, Theosophy, and American Rosicrucianism. They were still, at least nominally, Christian. But many cults, like Theosophy and the system pushed by Rudolf Steiner, had little common dogmatic ground with Christianity. And the religious spectrum shaded off into domestic revivals of other imperial religions such as Indian Vedanta, Persian Bahai, Zen Buddhism, and—especially among blacks—forms of Islam. Even in President Eisenhower’s Washington, which symbolized the Christian revival of the mid-century, and where the tone was superficially ecumenical Protestant, the actual content was patriotic moralism and sentimentalized religiosity rather than specifically Christian.
In 1954 the phrase “under God,” which had been used by Lincoln in his Gettysburg address, was added to the United States pledge of allegiance. In 1956 the device from the coinage, “In God We Trust,” became the nation’s official motto. But the nature of God was left undefined. President Eisenhower, himself the archetype of the generalized homo Americanus religiosus, asked the nation only for “faith in faith.” He told the country in 1954: “Our government makes no sense unless it is founded on a deeply-felt religious faith—and I don’t care what it is.”
Well, of course he did care, really. What he understood by faith was something compatible with the Protestant morality, vaguely based on Scripture, with which he was familiar. He could not define it any more than his predecessors, Washington and Lincoln, had been able to define the heavenly providence which they relied upon to see America right. But Eisenhower’s trusting vagueness—or, as some would say, vacuity—was no longer enough, because in the second half of the 20th century America’s religious, republican, and democratic consensus began to dissolve into new forms of civil warfare.
3. Sodom, Gomorrah, and Middletown
Beginning in the 1960’s, and proceeding with increased emphasis since then, something strange has been occurring to the place of religion in American life. It is not so much that the number of Americans affiliated with particular churches has declined—though it has: the figure of 69 percent in 1960 has never been surpassed and is now appreciably lower. On the other hand, it is generally accepted that more than half the American people still attend a place of worship once a week, an index of religious practice unequaled anywhere in the world, certainly in a great and populous nation.
The difference, rather, lies in the status of religious belief in American life. Until the second half of the 20th century, religion, as we have seen, was held by virtually all Americans, irrespective of their beliefs or nonbelief, to be not only a desirable but an essential part of the national fabric. Therefore those who preached from the pulpit were acknowledged to be among the most valuable citizens of the country. As Tocqueville observed, there was no such thing as anticlericalism in America. Whereas in Europe religious practice and fervor were often, even habitually, seen as a threat to freedom, in America they were seen as its underpinning. In Europe, religion was presented, at any rate by the majority of intellectuals, as an obstacle to “progress”; in America, as one of its dynamics.
This huge and important difference between European and American attitudes is now becoming blurred, and is perhaps in the process of disappearing altogether. In Europe, the anticlericalism so marked in the first half of this century has declined sharply. In the United States it has come into existence, and is rising.
Indeed, for the first time in American history there is a widespread tendency, especially among educated people, and above all among intellectuals, to present the clergy as enemies of freedom and democratic choice. The suspicion and in some cases hatred focuses particularly on two types of clergy: the Roman Catholic hierarchy and the popular evangelicals. But it is directed against clergy from any religious group, including Orthodox Jewish and Islamic ones, who proclaim their religious beliefs, and their moral notions, in a nonapologetic and forthright manner.
Secondly, there is a further tendency among the same people to present religious belief of any kind which is held with certitude, and religious practice of any kind which is conducted with zeal, as “fundamentalist”—a term of universal abuse.
An adjectival ratchet-effect has been at work here. The usual, normal, habitual, and customary beliefs of many Christians (and Jews) have first been verbally isolated as “traditionalist,” then as “orthodox,” next as “ultra-orthodox,” and finally as “fundamentalist,” though they have remained the same beliefs all the time. This hostile adjectival inflation marks the changed perspective of many Americans that religious beliefs as such, especially insofar as they underpin moral certitudes, constitute a threat to freedom.
It is exactly the same attitude, part rational, part irrational, which in Europe underlay the old anticlericalism, and its progeny, militant secularism and atheism. Its appearance in America is new, and potentially very dangerous. For it is a divisive force, a challenge to the moral and religious consensus which was such an important part of American republican and democratic unity—and strength.
The new American anticlericalism has two notable features, both characteristically American in themselves. The first is fear, accompanied by a conspiracy theory. The tendency to see huge and malevolent enemies, threatening all that Americans hold dear, who turn out to be creatures largely or even entirely of the imagination, is a recurrent feature of American history, and of “the paranoid style in American politics,” as Richard Hofstadter has called it.
America’s new anticlericals and antifundamentalists are very much in this tradition. They present the Vatican of Pope John Paul II and the fundamentalism of the Protestant “moral majority” as modern and almost revolutionary conspiracies of religious zealots to undermine the traditional right of Americans to enjoy divorce on demand, easy abortion, free love, homosexuality, and a life of unqualified hedonism. The fact that there is not the smallest element of novelty in the Vatican’s teaching, or that the great majority of American religious believers have always been so-called fundamentalists, or that the “rights” threatened by the “conspiracy” were themselves unimaginable two generations ago—all this is brushed aside in the prevailing paranoia.
The second feature of the new hostility to religion in America is its very insistence on these human rights. This, too, is in the American tradition, though with an important qualification.
American independence was from the start based upon the assertion of rights: the Declaration of Independence, the Constitution, and the Bill of Rights speak for themselves. One of the principal objects of the Congress, and still more of the Supreme Court, throughout American history has been to defend, uphold, extend, reinterpret, and embellish the rights of all Americans. Superficially at least, the great engine of American law has been fueled and driven by a rights-based philosophy.
However, underlying this political philosophy of rights—balancing, correcting, and making it workable in practice—has been an even deeper-rooted and more pervasive religious philosophy of duties. The distinction is all-important. The notion of rights is essentially political and secular. The securing of individual rights compatible with social cohesion is the whole art of politics. And the quest for rights is a secular activity precisely because the nonreligious approach sees the individual as an autonomous being with purely social obligations.
But equally, it is impossible for any religious philosophy to be rights-based. From a religious perspective, strictly speaking, no being has rights except God. Human beings merely have duties—to God and to each other—and the function of a church is to teach and endeavor to enforce those duties.
I personally would argue that the exact performance of duties is the only way in which valid human rights can in practice be upheld. But that is another question. What I am trying to suggest here is that, in a society whose political process is designed to secure and enlarge rights, but which is also a society driven, at a popular level, by powerful religious forces, the two are sooner or later bound to come into conflict—and that is what has now happened in the United States, on a great and increasing scale.
The historic American religious consensus, which allowed an ecumenical form of Judeo-Christian morality to be identified first with republicanism and then with democracy as well, was based upon an unspoken assumption that duties were wide-ranging and imperative. Congress and the courts could properly concentrate on enforcing rights because the churches, and the ardent men and women who composed them, could safely be left to ensure that all were aware of their duties too, and would perform them.
Once the stress on duties ceases to be sufficiently powerful, or ceases to operate at all among large sections of society, then a rights-based public philosophy tends to break down. There are more human rights, real or imaginary, than there is justice available to satisfy them. When the element of duty is subtracted from the drive for rights, the result is merely a conflict of rights.
Such conflicts of rights have always been inherent in American republicanism. The Civil War was a conflict of rights: between the collective rights of citizens organized as states and their collective rights as citizens organized in a union; between the property rights of white owners and the human rights of black slaves. Equally, one might add that until 1890, the right of Mormons to practice their religion, as guaranteed by the First Amendment, was in conflict with the right of Congress, reflecting the moral majority, to legislate against polygamous unions. This conflict was resolved at the time in favor of the majority, though similar conflicts over other issues are liable to recur as Islam becomes more potent in American life.
A more characteristically contemporary conflict of rights in America took place not long ago inside St. Patrick’s Cathedral in New York, when feminists and homosexuals united to interrupt the celebration of the Catholic mass in order to assert their right to abort their unborn children and to follow their sexual orientation.
The right of New York Catholics to practice and preach their religion without interference is guaranteed by the U.S. Constitution, as amended; the right to abort has been upheld by the Supreme Court, and the right to engage in some forms of sexual deviance has been laid down by various ordinances. So here we have two sets of rights coming into conflict, and angry and even violent conflict. Moreover, the conflict of rights is reinforced by the conflict between rights and duties, since the Catholic congregation at St. Patrick’s believes its right to hear mass celebrated is accompanied by its collective duty to follow Catholic teaching, which holds abortion to be abhorrent and homosexual practices sinful.
This kind of conflict in American society is envenomed by the fact that, at bottom, it is not a conflict between the religious and the secular impulse, but a war of religion. America is a deeply religious society even in its secularism. Its atheism, its agnosticism, above all its human-rights hedonism can be seen as a form of religious sectarianism—or, more precisely, of paganism. For when the tide of conventional belief ebbs away, the incoming surge deposits strange objects, often relics of a distant past, on the shore.
Thus, Rachel Carson, whose book Silent Spring, published in 1962, inaugurated the popular ecological movement, inadvertently set in motion a modern form of crusade which was indeed a recrudescence of paganism. As the animists of prehistory attributed living souls to rivers and groves, springs and mountains, so the more zealous environmentalists hold holy the tropical rain forests, the unsullied seas and lakes, the ozone layer, and all the multiple sacral phenomena of the ecosphere. The greenhouse effect substitutes for hell, the Society of Friends becomes the Friends of the Earth, the Mother of God is reincarnated as Mother Earth.
No shortage of sacred cows, either: where the ancient Egyptians venerated the ibis-god Thoth or the cat-goddess Bast, there are now those who campaign piously to save the whales or restore the white rhinoceros to its pristine jungle or ensure that the leopard keeps its spots. Fur coats and crocodile handbags are denounced as “unclean” or, worse, diabolical artifacts.
At various intellectual and emotional levels, many extinct forms of paganism have been revived, or survivals huffed and puffed into flame again. These cults exist in great numbers, especially in California, Arizona, New Mexico, Florida, and parts of New England. Some are merely self-regarding and self-isolated curiosities. Others are hostile to established religion and aggressively anti-Christian. They even put up signs: “Christians keep away: pagans dwell here.” There is a form of pop sound, known as death-metal music, which actually advocates violence against Christians and their institutions, and in some places has led to vandalism against churches.
But these are self-conscious attempts to reanimate extinct rites whose spirit died long ago. Far less marginal and infinitely more significant is the neopaganism of the rights movement, for here we have a new and truly modern form of animism which exercises genuine cultic appeal to millions of educated people whose scientific knowledge and rationalism are matters of pride to them.
Almost every form of contemporary ideology can be assimilated into the pantheon of the new paganism. The god of gods—the Yahweh, or Allah, or Jupiter, or Zeus, or Amon-Ra—of the pantheon is the rights deity, who omnipotently presides over all efforts, from whatever angle of the progressive spectrum, to make claims upon the law, state, and society.
There is the god of health, who stands for the right to be free from sickness, or obesity, or cancer, or death itself; the sex god, whose cult is the unrestricted pursuit of one’s sexual orientation and desires; the god of multiculturalism; the god of political correctness; the god of youth; and the god of children, who presides over a panoply of rights including the right to divorce their parents. There is the god of hedonism, whose votaries put the right to experience any or all available pleasures as the prime purpose of human existence.
And of course there is also a feminist deity, a god of the homosexuals; a goddess of the lesbians; and deities for the transvestites, the transsexuals, the handicapped, the mentally, vertically, or horizontally challenged, indeed, all those who can establish claims to special places in the hierarchy of rights.
That there should be inherent or actual conflicts of rights in such a pantheon is inevitable, and is only partly avoided by uneasy or unnatural coalitions. The feminist deity, campaigning against the commercial exploitation of women, is at war with the god of pornography, campaigning for the total abolition of censorship; the deity who presides over the rights of children is at odds with the god of the pedophiles; and there is a danger that with the expansion of medical knowledge of the unborn child in the womb, the goddess who campaigns against the abuse of living infants will be locked in deadly combat with the goddess of unrestricted abortion. The ancient Greek pantheon, which reverberated with the wars of the gods, was always in danger of toppling over into divine chaos, and a similar discord grips the neopagan rights pantheon from time to time, as when the god of multiculturalism indulges in a fit of anti-Semitism or the god of the sexual deviants locks horns with the god of genetics.
Now I do not want to carry this analogy too far and risk disbelief in its validity. But it is an extraordinary fact how closely these neopagan cults, in the guise of single-issue rights groups, resemble religious bodies, with their charismatic leaders, their creeds or agendas, their slogans or incantations or litanies, their catenae of cardinal virtues and deadly sins, their demonstrations or processions of faith, their hieratic vernaculars and rituals. They often start in the catacombs and then emerge in public to profess their faith openly: they “come out,” as Christians began to do in the 3rd century; adherents stress “black pride” or “gay pride,” as Christians once made the sign of the cross publicly to confess their faith, and as Roman Catholics still do. Militant feminists refer to themselves as “wymin” or even “sisters,” as do enclosed nuns. There are countless identifying clothes, fashions, and hairstyles, performing the same functions as Quaker collars, monkish tonsures, Greek Orthodox beards, hasidic caps and garments. Chains and lockets and earrings are sported like phylacteries or rosary beads.
Not least, these rights groups or single-issue cults produce their own scriptures or literatures. I am struck, when I visit a large campus bookstore, by the way in which these special publications are now grouped together in a distinct section, which is often—usually—larger than that devoted to traditional religious literature.
Most mainstream church hierarchies, in responding to the challenge of the new pagans, have been weak and indecisive. Almost without exception these churches have been systematically penetrated by rights groups in their guise as lobbyists and activists.
An important watershed was the Second Vatican Council, 1962-65. Hitherto, the Catholic Church had been exceptionally rigorous in repelling boarders from pressure groups, especially those with aims which could be described as “modernist”; and American Catholic bishops had been unusually ultramontane and obedient in carrying out this isolationist policy. The effect of the Council was to relax Catholic guards everywhere, but the consequences in the United States were disproportionately radical.
From being one of the most conformist, the American hierarchy rapidly became one of the least, and this pattern of behavior was adopted by many religious orders, including the once ultra-orthodox Jesuits. American Catholic universities and publications, from being citadels of doctrinal zeal, propagated the new heterodoxies and welcomed infiltration by the single-issue lobbies. And while the Catholics wavered and succumbed, the Episcopalians held out their hands even more eagerly, followed in turn by Presbyterians and Congregationalists, Methodists, Baptists, and most of the rest.
On issues such as the ordination of women, artificial contraception, abortion, the remarriage of divorced persons, homosexuality, revision of the liturgy to permit new cultic practices and music, there were, during the 1960’s and still more in the 1970’s and 1980’s, some notable accommodations with the new—or, as critics would put it, some scandalous surrenders of principle. In many cases, the lines of demarcation between the mainstream churches and the neopagan cults became blurred or even invisible.
The removal of distinctive landmarks was hastened, naturally, by the ecumenical movement within Christianity. This had been gathering speed since 1900 or thereabouts, but the really striking acceleration occurred during and after the 1960’s, when the Catholic hierarchy began to cooperate in earnest. Ecumenicalism came easier to the United States than to most Christian societies, since the stress on divisive dogmas had never been great—often nonexistent—and the American Christian moral consensus, underpinning republicanism and democracy, was an ecumenical movement in itself. Hence the official ecumenical movement endorsed by the mainstream church hierarchies proceeded smoothly.
However, unlike the American consensus itself, this development has borne few of the marks of a popular movement. It has been a largely clerical affair of bishops and pastors and moderators and ministers, moving serenely through a labyrinth of committees and statements and resolutions, generating quiet satisfaction or self-satisfaction, rather than mass enthusiasm—the hieratic not the demotic.
Nevertheless, by embodying a search for the lowest common denominators of agreement, on morals not less than in dogmatic theology, the ecumenical movement has facilitated acceptance by the mainstream churches, or at any rate by their governing bodies, of the agendas of the neopagan pressure groups and single-issue lobbies.
By contrast, and in reaction to this slide into moral confusion, there has been an unofficial but infinitely more powerful and popular—and very largely spontaneous—movement within the established churches to uphold the traditional moral consensus and to emphasize its core beliefs, springing from the Judeo-Christian tradition of the Decalogue.
This coming together of what might be called Ten Commandment Christians, joined also by large numbers of traditionalist Jews—and watched with some sympathy by Muslims and members of other more remote faiths—constitutes the alternative ecumenicalism, which is largely nonclerical, nonhierarchic, unofficial, and unstructured, led by laymen and, above all, popular. In one way or another it expresses the Moral Majority—that is, those Americans who feel that the old republican-democratic moral consensus, loosely termed the American Way of Life, is still true, valid, and central to America’s survival as a united, healthy, law-abiding, and prosperous society.
This alternative ecumenicalism, unlike the official ecumenicalism of committees and resolutions, has a genuine demotic ring to it, and is characterized by the usual marks of spiritual enthusiasm—overflowing services, mass meetings, demonstrations, activism—sometimes even violence. It is in itself a form of religious revival, another Great Awakening, and it is indeed fundamentalist in that it seeks to reassert fundamental moral truths once taken for granted and virtually unchallenged.
The emergence of this popular form of ecumenicalism, largely in response to the encroachments of neopagan hedonism, has brought about a new wave of religious warfare in the United States. Some see it as a battle between megalopolis and the rest—between supposedly godless metropolitan areas like New York, Los Angeles, and San Francisco, and the countryside of farms and of small and medium-sized towns where churchgoing is the norm and traditional moral values are respected—between, as it were, the Cities of the Plain and the City upon a Hill; between Sodom and Gomorrah and Middletown.
The new religious war can be seen, too, as a conflict between the values of the Bible and the values of the mass media. It is certainly a conflict between an approach to citizenship based on rights and one based on duties.
It is a war fought over many issues: the state schools and what is permitted within them; the use of taxpayers’ money; the interpretation of the Constitution by the courts; the enactment of congressional and state legislation on private and public morals; and the selection of candidates for public office. But there is one issue which transcends in importance all the others and which, in a determining way, sums up all the arguments and attitudes on both sides—abortion.
This, as I noted above, can be described as a conflict of rights—between the right of an unborn child to life and the right of a childbearing woman to control over her body. But it is much more than this because it also and once again involves the conflict between rights and duties, between a humanist society which puts the interests and views of men and women as the paramount guide to conduct and a religious society which acknowledges the higher law of an external deity.
There are some who see the argument, and indeed the physical, legislative, and political battle over abortion, as the greatest challenge to the unity and conscience of the American people since the Civil War itself. There are certainly many uneasy parallels. The complex of issues which brought about the Civil War—slavery, states’ rights, the survival of the union—rumbled away almost from the beginning of the union itself, occasionally breaking surface and threatening open discord, then subsiding as a compromise was patched up. But in the end it had to be resolved one way or the other and the result was the Civil War, leaving scars which, in the South at least, took a century to heal.
Abortion is a similar kind of problem, involving a complex of moral, constitutional, and social issues, which arouses great passion and obstinacy on both sides and which cannot be resolved by compromise or good will. Society can be anesthetized about the facts of abortion, just as it once could be about the facts of slavery, but the more the facts are exposed the uglier they seem, especially since scientific advances allow us to perceive what is taking place in the womb and at what an early point following conception the fetus becomes a child, indeed a person. The humanization of the fetus, like the humanization of the slave, is fatal to the case for the institution of abortion.
All this indicates that the conflict will deepen as the years go by, just as the conflict over slavery did until it was resolved by the victory of one side. The abortion argument also resembles the argument over slavery in that it transcends the issue itself and involves entire attitudes and mentalities. Around it and reinforced by it are the arguments between belief and the suspension of belief, between church and state, right and duty, freedom and obligation.
These issues have tended to divide societies since the dawn of history, but America, with its open society, with its passion for free debate, and, not least, with its lingering sense of mission to teach the world what is good and noble, provides the perfect forum for the contest to be fought to a finish.
The tragedy is that this heated, divisive issue, which is both the chief battleground and the symbol of America’s civil war of religion, comes at a time when the integrity of the United States and the unity of the American people are also threatened by other powerful forces.
As we have seen, American society was from the outset, even when it was merely a collection of fragile English colonies, an attempt to build a special kind of society, a City upon a Hill, uniquely dedicated to the godly life, whose sacral character—according to Calvinist philosophy—was reflected in its worldly prosperity. For such a society to function at all, it needed a focus of unity. This was provided first by the Protestant faith and later, as the diversity of religious worship increased, by a common moral code; in the 17th and for much of the 18th century there was another focus of unity as well: the English language, English law, and English political customs and assumptions.
Gradually, however, a growing number of immigrants arrived to whom the English language was foreign and the American version of English culture alien. From this point, the importance of the religious consensus increased, as the melting pot, that great social cauldron, which transmuted a multiracial and multiethnic human material into a single people, was constructed, heated, and stirred, and did its marvelous work.
The adoption of the English language was the ostensible sign that the melting pot worked. But in some ways it was a superficial sign, and to the historian it is clear that the principal ingredient which transformed an alien immigrant into an American citizen, and someone able, eager, and proud to pursue the American way of life, was precisely the moral ecumenicalism, the religious consensus on what constituted right conduct. Indeed, it was more than an ingredient, it was the psychological and moral framework within which the melting and transmutation could take place.
Today the obstructions to the melting machinery are formidable and growing. Forces are present which seek to prevent it from working at all, or even to reverse its workings. Many wish to keep the foreign elements in the immigrants of today in their pristine and alien state, and even to realienate those already absorbed. One of America’s leading historians, Arthur Schlesinger, Jr., has written a book about this phenomenon, entitled The Disuniting of America.
Now what is significant about this process, and I would say sinister, is that it is driven by the same machinery—the single-issue lobby—which empowers and makes formidable American neopaganism. The American political system is peculiarly susceptible to single-issue politics, and single-issue politics are peculiarly adapted to work on ethnic feelings and cultural, racial, and linguistic separateness. America’s sheer ethnic diversity makes the spread of single-issue political lobbying, conducted on an ethnic basis, peculiarly destructive of unity. The seamless garment of nationhood is in danger of being unraveled, with unpredictable consequences for America itself and the world.
And it is not just the ethnic and cultural divisions among Americans that are being stressed by this process. Americans are also being presented, in the political process, in the media, in society as a whole, as divided by sex or gender, by sexual orientation and preference, by age-grouping presented as antagonistic, by physical ability or disability, by size and shape—by any distinguishing characteristics which present opportunities for lobbying and for the manufacture of a single issue. The same forces which obstruct the melting-pot process by stressing ethnic and racial origin also stand behind one of the antagonists in America’s religious civil war.
Now there are some who argue that there is nothing sacrosanct about American national unity, that the American nation as such is not an unmitigated good, that the American way of life is by no means the most desirable or reputable way of life—that, in short, the United States of America is not the ideal political and social creation it has claimed to be. These critics dwell savagely on the sins of America in recent decades, in Vietnam and elsewhere, and they even point to the forces of disintegration in American society as in some respects salutary, as signs of repentance. They say there is too much nationalism in the world, as witness the fate of former Yugoslavia, and that American nationalism is not necessarily more righteous than any other. From this perspective, the disuniting of America is a welcome thing.
But this seems to me to turn the nationalist argument on its head. The object of the melting pot—or one of its objects anyway—was precisely to disarm the mutually competing and destructive nationalisms of the old world, based as they were on ancient linguistic, ethnic, and cultural antagonisms, and replace them by a new form of nationalism which was international, irenic, ecumenical, and benign.
The point was to transform warring peoples into one people at peace with itself, the chosen people or at least (in Lincoln’s phrase) the almost-chosen people. It was to be a new kind of nation-society, republican in its form, democratic in its politics, in its tone and social intercourse following an agreed moral code. All this was underpinned, in fact made possible at all, only by the religious consensus I have described. It was the cement of the entire sublime and adventurous construct.
I submit that the dramatic global events of the 1980’s and 1990’s, far from diminishing, have actually strengthened the case for the existence of such a construct in the world: a great and mighty nation which is something more than a nation, which is an international community in itself, a prototype global community, but which at the same time is a unity, driven by agreed assumptions, accepting a common morality and moral aims, and able therefore to marshal and deploy its forces with stunning effect.
It is impossible, looking back, to see how the world would have survived the strain of the cold war without the United States; still more difficult to see how it can survive the disturbed and unpredictable aftermath without an America which is still united behind an agreed morality and a common purpose. There is now, and for the foreseeable future, only one superpower in the world, and that is the United States. This may or may not be desirable—I believe it is—but it is certainly a fact.
As the sole superpower it is essential that the United States retain the purity of its republicanism and the efficiency of its democracy, and that it debate its aims and actions with all the thoroughness which its immense diversity of ethnic origin makes uniquely possible—but having debated, and voted, and decided, it must then act with the unity and resolution which its position of world leadership demands.
For all these reasons, the peculiar form of religious and moral consensus which has been developed in America is not an anachronism but is more urgently needed than ever.
I often wonder what Abraham Lincoln, who provided America with the leadership it needed in the greatest crisis in its history, would have felt about the nation’s task today, when it is asked to provide leadership for a distracted and dangerous world. He was himself a typical product of the religious consensus: a man who believed in providence rather than a personal, describable God, nominally a Baptist but a member of no regular church—a loose cannon on the religious deck in the eyes of the right-thinking. With it all, he was a man in whom the religious consensus had done its moral work to extraordinary effect—a man who could distinguish clearly and accurately, under the greatest stress of events, what was right and what was wrong, and who could make his decisions plain and acceptable to a vast electorate, to the point where it would expend a great quantity of blood and treasure to carry them out. In short, a man for all seasons, and an example of how the peculiar religious process of American nation-building could deliver exactly what was required.
What would this remarkable man think today? He might, I believe, still be inclined to categorize Americans as the almost-chosen people: a nation seeking the ideal but falling some way short of it. But he would also, I feel sure, consider its unity to be as much worth preserving—even fighting for—as in his own day, and he would look eagerly for those forces which could sustain and, if need be, rebuild that unity. Among those forces he would recognize, despite all his own skepticism, that by far the most important is the deep religious emotions which have always inspired, and still do inspire, the conduct of most Americans. And, forced to choose between Sodom and Gomorrah and Middletown, it would be in Middletown that he would set up his standard.
t can be said that the Book of Samuel launched the American Revolution. Though antagonistic to traditional faith, Thomas Paine understood that it was not Montesquieu, or Locke, who was inscribed on the hearts of his fellow Americans. Paine’s pamphlet Common Sense is a biblical argument against British monarchy, drawing largely on the text of Samuel.
Today, of course, universal biblical literacy no longer exists in America, and sophisticated arguments from Scripture are all too rare. It is therefore all the more distressing when public intellectuals, academics, or religious leaders engage in clumsy acts of exegesis and political argumentation by comparing characters in the Book of Samuel to modern political leaders. The most common victim of this tendency has been the central character in the Book of Samuel: King David.
Most recently, this tendency was made manifest in the writings of Dennis Prager. In a recent defense of his own praise of President Trump, Prager wrote that “as a religious Jew, I learned from the Bible that God himself chose morally compromised individuals to accomplish some greater good. Think of King David, who had a man killed in order to cover up the adultery he committed with the man’s wife.” Prager similarly argued that those who refuse to vote for a politician whose positions are correct but whose personal life is immoral “must think God was pretty flawed in voting for King David.”
Prager’s invocation of King David was presaged on the left two decades ago. The records of the Clinton Presidential Library reveal that at the height of the Lewinsky scandal, an email from Dartmouth professor Susannah Heschel made its way into the inbox of an administration policy adviser with a similar comparison: “From the perspective of Jewish history, we have to ask how Jews can condemn President Clinton’s behavior as immoral, when we exalt King David? King David had Batsheva’s husband, Uriah, murdered. While David was condemned and punished, he was never thrown off the throne of Israel. On the contrary, he is exalted in our Jewish memory as the unifier of Israel.”
One can make the case for supporting politicians who have significant moral flaws. Indeed, America’s political system is founded on an awareness of the profound tendency to sinfulness not only of its citizens but also of its statesmen. “If men were angels, no government would be necessary,” James Madison informs us in the Federalist. At the same time, anyone who compares King David to the flawed leaders of our own age reveals a profound misunderstanding of the essential nature of David’s greatness. David was not chosen by God despite his moral failings; rather, David’s failings are the lens that reveal his true greatness. It is in the wake of his sins that David emerges as the paradigmatic penitent, whose quest for atonement is utterly unlike that of any other character in the Bible, and perhaps in the history of the world.
While the precise nature of David’s sins is debated in the Talmud, there is no question that they are profound. Yet when David is set beside other faltering figures—in the Bible or today—the comparison falls flat. This point is stressed by the very Jewish tradition in whose name Prager claimed to speak.
It is the rabbis who note that David’s predecessor, Saul, lost the kingship when he failed to fulfill God’s command to destroy the egregiously evil nation of Amalek, whereas David commits more severe sins and yet remains king. The answer, the rabbis suggest, lies not in the sin itself but in the response. Saul, when confronted by the prophet Samuel, offers obfuscations and defensiveness. David, meanwhile, is similarly confronted by the prophet Nathan: “Thou hast killed Uriah the Hittite with the sword, and hast taken his wife to be thy wife, and hast slain him with the sword of the children of Ammon.” David’s immediate response is clear and complete contrition: “I have sinned against the Lord.” David’s penitence, Jewish tradition suggests, sets him apart from Saul. Soon after, David gave voice to what was in his heart in that moment, and gave the world one of the most stirring of the Psalms:
Have mercy upon me, O God, according to thy lovingkindness: according unto the multitude of thy tender mercies blot out my transgressions.
Wash me thoroughly from mine iniquity, and cleanse me from my sin. For I acknowledge my transgressions: and my sin is ever before me.
. . . Deliver me from bloodguiltiness, O God, thou God of my salvation: and my tongue shall sing aloud of thy righteousness.
O Lord, open thou my lips; and my mouth shall shew forth thy praise.
For thou desirest not sacrifice; else would I give it: thou delightest not in burnt offering.
The sacrifices of God are a broken spirit: a broken and a contrite heart, O God, thou wilt not despise.
The tendency to link David to our current age stems from the fact that we know more about David than about any other biblical figure. The author Thomas Cahill has noted that in a certain literary sense, David is the only biblical figure who is like us at all. Prior to the humanist autobiographies of the Renaissance, he observes, “we can count only a few isolated instances of this use of ‘I’ to mean the interior self. But David’s psalms are full of I’s.” In David’s Psalms, Cahill writes, we “find a unique early roadmap to the inner spirit—previously mute—of ancient humanity.”
At the same time, a study of the Book of Samuel and of the Psalms reveals how utterly incomparable David is to anyone alive today. Haym Soloveitchik has noted that even the most observant Jews today fail to feel the constant intimacy with God that the simplest Jew of the premodern age might have felt: “while there are always those whose spirituality is one apart from that of their time, nevertheless I think it safe to say that the perception of God as a daily, natural force is no longer present to a significant degree in any sector of modern Jewry, even the most religious.” Yet for David, such intimacy with the divine was central to his existence, and the Book of Samuel and the Psalms are an eternal testament to this fact. This is why simple comparisons between David and ourselves, as tempting as they are, must be resisted. David Wolpe, in his book about David, attempts to make the case that King David’s life speaks to us today: “So versatile and enduring is David in our culture that rare is the week that passes without some public allusion to his life…We need to understand David better because we use his life to comprehend our own.”
The truth may be the opposite. We need to understand David better because we can use his life to comprehend what we are missing, and how utterly our lives differ from his own. For even the most religious among us have lost the profound faith and intimacy with God that David had. It is therefore incorrect to assume that because of David’s flaws it would have been, as Amos Oz has written, “fitting for him to reign in Tel Aviv.” The modern State of Israel has been blessed with brilliant leaders, but to which of its modern warriors or statesmen should David be compared? To Ben-Gurion, who stripped any explicit invocation of the Divine from Israel’s Declaration of Independence? To Moshe Dayan, who oversaw the reconquest of Jerusalem, and then immediately handed back the Temple Mount, the locus of King David’s dreams and desires, to the administration of the enemies of Israel? David’s complex humanity inspires comparison to modern figures, but his faith, contrition, and repentance—which lie at the heart of his story and success—defy any such engagement.
And so, to those who seek comparisons to modern leaders from the Bible, the best rule may be: Leave King David out of it.
Three attacks in Britain highlight the West’s inability to see the threat clearly
This lack of seriousness manifests itself in several ways. It’s perhaps most obvious in the failure to reform Britain’s chaotic immigration and dysfunctional asylum systems. But it’s also abundantly clear from the grotesque underfunding and under-resourcing of domestic intelligence. In MI5, Britain has an internal security service that is simply too small to do its job effectively, even if it were not handicapped by an institutional culture that can seem willfully blind to the ideological roots of the current terrorism problem.
In 2009, Jonathan Evans, then head of MI5, confessed at a parliamentary hearing about the London bus and subway attacks of 2005 that his organization had sufficient resources only to “hit the crocodiles close to the boat.” It was an extraordinary metaphor to use, not least because of the impression of relative impotence that it conveys. MI5 had by then doubled in size since 2001, but it still boasted a staff of only 3,500. Today it’s said to employ between 4,000 and 5,000, an astonishingly, even laughably, small number given a UK population of 65 million and the scale of the security challenges Britain now faces. (To be fair, the major British police forces all have intelligence units devoted to terrorism, and the UK government’s overall counterterrorism strategy involves a great many people, including social workers and schoolteachers.)
You can also see that unseriousness at work in the abject failure to coerce Britain’s often remarkably sedentary police officers out of their cars and stations and back onto the streets. Most of Britain’s big-city police forces have adopted a reactive model of policing (consciously rejecting both the New York Compstat model and British “bobby on the beat” traditions) that cripples intelligence-gathering and frustrates good community relations.
If that weren’t bad enough, Britain’s judiciary is led by jurists who came of age in the 1960s, and who have been inclined since 2001 to treat terrorism as an ordinary criminal problem being exploited by malign officials and politicians to make assaults on individual rights and to take part in “illegal” foreign wars. It has long been almost impossible to extradite ISIS or al-Qaeda–linked Islamists from the UK. This is partly because today’s English judges believe that few if any foreign countries—apart from perhaps Sweden and Norway—are likely to give terrorist suspects a fair trial, or able to guarantee that such suspects will be spared torture and abuse.
We have a progressive metropolitan media elite whose primary, reflexive response to every terrorist attack, even before the blood on the pavement is dry, is to express worry about an imminent violent anti-Muslim “backlash” on the part of a presumptively bigoted and ignorant indigenous working class. Never mind that no such “backlash” has yet occurred, not even when the young off-duty soldier Lee Rigby was hacked to death in broad daylight on a South London street in 2013.
Another sign of this lack of seriousness is the choice by successive British governments to deal with the problem of internal terrorism with marketing and “branding.” You can see this in the catchy consultant-created acronyms and pseudo-strategies that are deployed in place of considered thought and action. After every atrocity, the prime minister calls a meeting of the COBRA unit—an acronym that merely stands for Cabinet Office Briefing Room A but sounds like a secret organization of government superheroes. The government’s counterterrorism strategy is called CONTEST, which has four “work streams”: “Prevent,” “Pursue,” “Protect,” and “Prepare.”
Perhaps the ultimate sign of unseriousness is the fact that police, politicians, and government officials have all displayed more fear of being seen as “Islamophobic” than of any carnage that actual terror attacks might cause. Few are aware that this short-term, cowardly, and trivial tendency may ultimately foment genuine, dangerous popular Islamophobia, especially if attacks continue.
Recently, three murderous Islamist terror attacks in the UK took place in less than a month. The first and third were relatively primitive improvised attacks using vehicles and/or knives. The second was a suicide bombing that probably required relatively sophisticated planning, technological know-how, and the assistance of a terrorist infrastructure. As they were the first such attacks in the UK, the vehicle and knife killings came as a particular shock to the British press, public, and political class, despite the fact that non-explosive and non-firearm terror attacks have become common in Europe and are almost routine in Israel.
The success of all three plots indicates troubling problems in British law-enforcement practice and culture, quite apart from any other failings of the parts of the state in charge of intelligence, border control, and the prevention of radicalization. At the time of writing, the British media have been full of encomia to police courage and skill, not least because it took “only” eight minutes for an armed Metropolitan Police team to respond to and confront the bloody mayhem being wrought by the three Islamist terrorists (who had ploughed their rented van into people on London Bridge before jumping out to attack passersby with knives). But the difficult truth is that all three attacks would have been much harder to pull off in Manhattan, not just because all NYPD cops are armed, but also because there are always police officers visibly on patrol at the New York equivalents of London’s Borough Market on a Saturday night. By contrast, London’s Metropolitan Police is a largely vehicle-borne, reactive force; rather than use a physical presence to deter crime and terrorism, it chooses to monitor closed-circuit street cameras and social-media postings.
Since the attacks in London and Manchester, we have learned that several of the perpetrators were “known” to the police and security agencies that are tasked with monitoring potential terror threats. That these individuals were nevertheless able to carry out their atrocities is evidence that the monitoring regime is insufficient.
It also seems clear that there were failures on the part of those institutions that come under the leadership of the Home Office and are supposed to be in charge of the UK’s border, migration, and asylum systems. Journalists and think tanks like Policy Exchange and Migration Watch have for years pointed out that these systems are “unfit for purpose,” but successive governments have done little to take responsible control of Britain’s borders. When she was home secretary, Prime Minister Theresa May did little more than jazz up the name, logo, and uniforms of what is now called the “Border Force,” and she notably failed to put in place long-promised passport checks for people flying out of the country. This dereliction means that it is impossible for the British authorities to know who has overstayed a visa or whether individuals who have been denied asylum have actually left the country.
It seems astonishing that Youssef Zaghba, one of the three London Bridge attackers, was allowed back into the country. The Moroccan-born Italian citizen (his mother is Italian) had been arrested by Italian police in Bologna, apparently on his way to Syria via Istanbul to join ISIS. When questioned by the Italians about the ISIS decapitation videos on his mobile phone, he declared that he was “going to be a terrorist.” The Italians lacked sufficient evidence to charge him with a crime but put him under 24-hour surveillance, and when he traveled to London, they passed on information about him to MI5. Nevertheless, he was not stopped or questioned on arrival and had not become one of the 3,000 official terrorism “subjects of interest” for MI5 or the police when he carried out his attack. One reason Zaghba was not questioned on arrival may have been that he used one of the new self-service passport machines installed in UK airports in place of human staff after May’s cuts to the border force. Apparently, the machines are not yet linked to any government watch lists, thanks to the general chaos and ineptitude of the Home Office’s efforts to use information technology.
The presence in the country of Zaghba’s accomplice Rachid Redouane is also an indictment of the incompetence and disorganization of the UK’s border and migration authorities. He had been refused asylum in 2009, but as is so often the case, Britain’s Home Office never got around to removing him. Three years later, he married a British woman and was therefore able to stay in the UK.
But it is the failure of the authorities to monitor ringleader Khuram Butt that is the most baffling. He was a known and open associate of Anjem Choudary, Britain’s most notorious terrorist supporter, ideologue, and recruiter (Choudary was finally imprisoned in 2016 after 15 years of campaigning on behalf of al-Qaeda and ISIS). Butt even appeared in a 2016 TV documentary about ISIS supporters called The Jihadist Next Door. In the same year, he assaulted a moderate imam at a public festival, after calling him a “murtad” or apostate. The imam reported the incident to the police—who took six months to track Butt down and then let him off with a caution. It is not clear if Butt was one of the 3,000 “subjects of interest” or the additional 20,000 former subjects of interest who continue to be the subject of limited monitoring. If he was not, it raises the question of what a person has to do to get British security services to take him seriously as a terrorist threat; if he was in fact on the list of “subjects of interest,” one has to wonder if being so designated is any barrier at all to carrying out terrorist atrocities. It’s worth remembering, as few here in the UK do, that terrorists who carried out previous attacks were also known to the police and security services and nevertheless enjoyed sufficient liberty to go at it again.
But the most important reason for the British state’s ineffectiveness in monitoring terror threats, which May addressed immediately after the London Bridge attack, is a deeply rooted institutional refusal to deal with or accept the key role played by Islamist ideology. For more than 15 years, the security services and police have chosen to take note only of people and bodies that explicitly espouse terrorist violence or have contacts with known terrorist groups. The fact that a person, school, imam, or mosque endorses the establishment of a caliphate, the stoning of adulterers, or the murder of apostates has not been considered a reason to monitor them.
This seems to be why Salman Abedi, the Manchester Arena suicide bomber, was not being watched by the authorities as a terror risk, even though he had punched a girl in the face for wearing a short skirt while at university, had attended the Muslim Brotherhood-controlled Didsbury Mosque, was the son of a Libyan man whose militia is banned in the UK, had himself fought against the Qaddafi regime in Libya, had adopted the Islamist clothing style (trousers worn above the ankle, beard but no moustache), was part of a druggy gang subculture that often feeds individuals into Islamist terrorism, and had been banned from a mosque after confronting an imam who had criticized ISIS.
It was telling that the day after the Manchester Arena suicide-bomb attack, you could hear a security official informing the audience of the BBC’s flagship morning-radio news show that it’s almost impossible to predict and stop such attacks because the perpetrators “don’t care who they kill.” They just want to kill as many people as possible, he said.
Surely, anyone with even a basic familiarity with Islamist terror attacks over the last 15 or so years and a nodding acquaintance with Islamist ideology could see that the terrorist hadn’t chosen the Ariana Grande concert in Manchester Arena merely because a lot of random people would be crowded into a conveniently small area. Since the Bali bombings of 2002, nightclubs, discotheques, and pop concerts attended by shameless unveiled women and girls have been routinely targeted by fundamentalist terrorists, including in Britain. Among the worrying things about the opinion offered on the radio show was that it suggested that even in the wake of the horrific Bataclan attack in Paris during a November 2015 concert, British authorities may not have been keeping an appropriately protective eye on music venues and other places where our young people hang out in their decadent Western way. Such dereliction would make perfect sense given the resistance on the part of the British security establishment to examining, confronting, or extrapolating from Islamist ideology.
The same phenomenon may explain why authorities did not follow up on community complaints about Abedi. All too often when people living in Britain’s many and diverse Muslim communities want to report suspicious behavior, they have to do so through offices and organizations set up and paid for by the authorities as part of the overall “Prevent” strategy. Although criticized by the left as “Islamophobic” and inherently stigmatizing, Prevent has often brought the government into cooperative relationships with organizations even further to the Islamic right than the Muslim Brotherhood. This means that if you are a relatively secular Libyan émigré who wants to report an Abedi and you go to your local police station, you are likely to find yourself speaking to a bearded Islamist.
From its outset in 2003, the Prevent strategy was flawed. Its practitioners, in their zeal to find and fund key allies in “the Muslim community” (as if there were just one), routinely made alliances with self-appointed community leaders who represented the most extreme and intolerant tendencies in British Islam. Both the Home Office and MI5 seemed to believe that only radical Muslims were “authentic” and would therefore be able to influence young potential terrorists. Moderate, modern, liberal Muslims who are arguably more representative of British Islam as a whole (not to mention sundry Shiites, Sufis, Ahmadis, and Ismailis) have too often found it hard to get a hearing.
Sunni organizations that openly supported suicide-bomb attacks in Israel and India and that justified attacks on British troops in Iraq and Afghanistan nevertheless received government subsidies as part of Prevent. The hope was that in return, they would alert the authorities if they knew of individuals planning attacks in the UK itself.
It was a gamble reminiscent of British colonial practice in India’s northwest frontier and elsewhere. Not only were there financial inducements in return for grudging cooperation; the British state offered other, symbolically powerful concessions. These included turning a blind eye to certain crimes and antisocial practices such as female genital mutilation (there have been no successful prosecutions relating to the practice, though thousands of cases are reported every year), forced marriage, child marriage, polygamy, the mass removal of girls from school soon after they reach puberty, and the epidemic of racially and religiously motivated “grooming” rapes in cities like Rotherham. (At the same time, foreign jihadists—including men wanted for crimes in Algeria and France—were allowed to remain in the UK as long as their plots did not include British targets.)
This approach, simultaneously cynical and naive, was never as successful as its proponents hoped. Again and again, Muslim chaplains who were approved to work in prisons and other institutions have turned out to be Islamist extremists whose words have inspired inmates to join terrorist organizations.
Much to his credit, former Prime Minister David Cameron fought hard to change this approach, even though it meant difficult confrontations with his home secretary (Theresa May), as well as police and the intelligence agencies. However, Cameron’s efforts had little effect on the permanent personnel carrying out the Prevent strategy, and cooperation with Islamist but currently nonviolent organizations remains the default setting within the institutions on which the United Kingdom depends for security.
The failure to understand the role of ideology is one of imagination as well as education. Very few of those who make government policy or write about home-grown terrorism seem able to escape the limitations of what used to be called “bourgeois” experience. They assume that anyone willing to become an Islamist terrorist must perforce be materially deprived, or traumatized by the experience of prejudice, or provoked to murderous fury by oppression abroad. They have no sense of the emotional and psychic benefits of joining a secret terror outfit: the excitement and glamor of becoming a kind of Islamic James Bond, bravely defying the forces of an entire modern state. They don’t get how satisfying or empowering the vengeful misogyny of ISIS-style fundamentalism might seem for geeky, frustrated young men. Nor can they appreciate the appeal to the adolescent mind of apocalyptic fantasies of power and sacrifice (mainstream British society does not have much room for warrior dreams, given that its tone is set by liberal pacifists). Finally, they have no sense of why the discipline and self-discipline of fundamentalist Islam might appeal so strongly to incarcerated lumpen youth who have never experienced boundaries or real belonging. Their understanding is an understanding only of themselves, not of the people who want to kill them.
Review of 'White Working Class' by Joan C. Williams
Williams is a prominent feminist legal scholar with degrees from Yale, MIT, and Harvard. Unbending Gender, her best-known book, is the sort of tract you’d expect to find at an intersectionality conference or a Portlandia bookstore. This is why her insightful, empathic new book comes as such a surprise.
Books and essays on the white working class have accumulated into a highly visible genre since Donald Trump came on the American political scene; J.D. Vance’s Hillbilly Elegy planted itself at the top of bestseller lists almost a year ago and still isn’t budging. As with Vance, Williams’s interest in the topic is personal. She fell “madly in love with” and eventually married a Harvard Law School graduate who had grown up in an Italian neighborhood in pre-gentrification Brooklyn. Williams, on the other hand, is a “silver-spoon girl.” Her father’s family was moneyed, and her maternal grandfather was a prominent Reform rabbi.
The author’s affection for her “class-migrant” spouse and respect for his family’s hardships—“My father-in-law grew up on blood soup,” she announces in her opening sentence—add considerable warmth to what is at bottom a political pamphlet. Williams believes that elite condescension and “cluelessness” played a big role in Trump’s unexpected and dreaded victory. Enlightening her fellow elites is essential to the task of returning Trump voters to the progressive fold where, she is sure, they rightfully belong.
Liberals were not always so dense about the working class, Williams observes. WPA murals and movies like On the Waterfront showed genuine fellow feeling for the proletariat. In the 1970s, however, the liberal mood changed. Educated boomers shifted their attention to “issues of peace, equal rights, and environmentalism.” Instead of feeling the pain of Arthur Miller and John Steinbeck characters, they began sneering at the less enlightened. These days, she notes, elite sympathies are limited to the poor, people of color (POC), and the LGBTQ population. Despite clear evidence of suffering—stagnant wages, disappearing manufacturing jobs, declining health and well-being—the working class gets only fly-over snobbery at best and, more often, outright loathing.
Williams divides her chapters into a series of explainers answering questions she has heard from her clueless friends and colleagues: “Why Does the Working Class Resent the Poor?” “Why Does the Working Class Resent Professionals but Admire the Rich?” “Why Doesn’t the Working Class Just Move to Where the Jobs Are?” “Is the Working Class Just Racist?” She weaves her answers into a compelling picture of a way of life and worldview foreign to her targeted readers. Working-class Americans have had to struggle for whatever stability and comfort they have, she explains. Clocking in for midnight shifts year after year, enduring capricious bosses, plant closures, and layoffs, they’re reliant on tag-team parenting and stressed-out relatives for child care. The campus go-to word “privileged” seems exactly wrong.
Proud of their own self-sufficiency and success, however modest, they don’t begrudge the self-made rich. It’s snooty professionals and the dysfunctional poor who get their goat. From their vantage point, subsidizing the day care for a welfare mother when they themselves struggle to manage care on their own dime mocks both their hard work and their beliefs. And since, unlike most professors, they shop in the same stores as the dependent poor, they’ve seen that some of them game the system. Of course that stings.
White Working Class is especially good at evoking the alternate economic and mental universe experienced by Professional and Managerial Elites, or “PMEs.” PMEs see their non-judgment of the poor, especially those who are “POC,” as a mark of their mature understanding that we live in an unjust, racist system whose victims require compassion regardless of whether they have committed any crime. At any rate, their passions lie elsewhere. They define themselves through their jobs and professional achievements, hence their obsession with glass ceilings.
Williams tells the story of her husband’s faux pas at a high-school reunion. Forgetting his roots for a moment, the Ivy League–educated lawyer asked one of his Brooklyn classmates a question that is the go-to opener in elite social settings: “What do you do?” Angered by what must have seemed like deliberate humiliation by this prodigal son, the man hissed: “I sell toilets.”
Instead of stability and backyard barbecues with family and long-time neighbors and maybe the occasional Olive Garden celebration, PMEs are enamored of novelty: new foods, new restaurants, new friends, new experiences. The working class chooses to spend its leisure in comfortable familiarity; for the elite, social life is a lot like networking. Members of the professional class may view themselves as sophisticated or cosmopolitan, but, Williams shows, to the blue-collar worker their glad-handing is closer to phony social climbing and their abstract, knowledge-economy jobs more like self-important pencil-pushing.
White Working Class has a number of proposals for creating the progressive future Williams would like to see. She wants to get rid of college-for-all dogma and improve training for middle-skill jobs. She envisions a working-class coalition of all races and ethnicities bolstered by civics education with a “distinctly celebratory view of American institutions.” In a saner political environment, some of this would make sense; indeed, she echoes some of Marco Rubio’s 2016 campaign themes. It’s little wonder White Working Class has already gotten the stink eye from liberal reviewers for its purported sympathies for racists.
Alas, impressive as Williams’s insights are, they do not always allow her to transcend her own class loyalties. Unsurprisingly, her own PME biases mostly come to light in her chapters on race and gender. She reduces immigration concerns to “fear of brown people,” even as she notes elsewhere that a quarter of Latinos also favor a wall at the southern border. This contrasts startlingly with her succinct observation that “if you don’t want to drive working-class whites to be attracted to the likes of Limbaugh, stop insulting them.” In one particularly obtuse moment, she asserts: “Because I study social inequality, I know that even Malia and Sasha Obama will be disadvantaged by race, advantaged as they are by class.” She relies on dubious gender theories to explain why the majority of white women voted for Trump rather than for his unfairly maligned opponent. That Hillary Clinton epitomized every elite quality Williams has just spent more than a hundred pages explicating escapes her notice. Williams’s own reflexive retreat into identity politics is itself emblematic of our toxic divisions, but it does not invalidate the power of this astute book.
When music could not transcend evil
The story of European classical music under the Third Reich is one of the most squalid chapters in the annals of Western culture, a chronicle of collective complaisance that all but beggars belief. Without exception, all of the well-known musicians who left Germany and Austria in protest when Hitler came to power in 1933 were either Jewish or, like the violinist Adolf Busch, Rudolf Serkin’s father-in-law, had close family ties to Jews. Moreover, most of the small number of non-Jewish musicians who emigrated later on, such as Paul Hindemith and Lotte Lehmann, are now known to have done so not out of principle but because they were unable to make satisfactory accommodations with the Nazis. Everyone else—including Karl Böhm, Wilhelm Furtwängler, Walter Gieseking, Herbert von Karajan, and Richard Strauss—stayed behind and served the Reich.
The Berlin and Vienna Philharmonics, then as now Europe’s two greatest orchestras, were just as willing to do business with Hitler and his henchmen, firing their Jewish members and ceasing to perform the music of Jewish composers. Even after the war, the Vienna Philharmonic was notorious for being the most anti-Semitic orchestra in Europe, and it was well known in the music business (though never publicly discussed) that Helmut Wobisch, the orchestra’s principal trumpeter and its executive director from 1953 to 1968, had been both a member of the SS and a Gestapo spy.
The management of the Berlin Philharmonic made no attempt to cover up the orchestra’s close relationship with the Third Reich, no doubt because the Nazi ties of Karajan, who was its music director from 1956 until shortly before his death in 1989, were a matter of public record. Yet it was not until 2007 that a full-length study of its wartime activities, Misha Aster’s The Reich’s Orchestra: The Berlin Philharmonic 1933–1945, was finally published. As for the Vienna Philharmonic, its managers long sought to quash all discussion of the orchestra’s Nazi past, steadfastly refusing to open its institutional archives to scholars until 2008, when Fritz Trümpi, an Austrian scholar, was given access to its records. Five years later, the Viennese, belatedly following the precedent of the Berlin Philharmonic, added a lengthy section to their website called “The Vienna Philharmonic Under National Socialism (1938–1945),” in which the damning findings of Trümpi and two other independent scholars were made available to the public.
Now Trümpi has published The Political Orchestra: The Vienna and Berlin Philharmonics During the Third Reich, in which he tells how they came to terms with Nazism, supplying pre- and postwar historical context for their transgressions. Written in a stiff mixture of academic jargon and translatorese, The Political Orchestra is ungratifying to read. Even so, the tale that it tells is both compelling and disturbing, especially to anyone who clings to the belief that high art is ennobling to the spirit.
Unlike the Vienna Philharmonic, which has always doubled as the pit orchestra for the Vienna State Opera, the Berlin Philharmonic started life in 1882 as a fully independent, self-governing entity. Initially unsubsidized by the state, it kept itself afloat by playing a grueling schedule of performances, including “popular” non-subscription concerts for which modest ticket prices were charged. In addition, the orchestra made records and toured internationally at a time when neither was common.
These activities made it possible for the Berlin Philharmonic to develop into an internationally renowned ensemble whose fabled collective virtuosity was widely seen as a symbol of German musical distinction. Furtwängler, the orchestra’s principal conductor, declared in 1932 that the German music in which it specialized was “one of the very few things that actually contribute to elevating [German] prestige.” Hence, he explained, the need for state subsidy, which he saw as “a matter of [national] prestige, that is, to some extent a requirement of national prudence.” By then, though, the orchestra was already heavily subsidized by the city of Berlin, thus paving the way for its takeover by the Nazis.
The Vienna Philharmonic, by contrast, had always been subsidized. Founded in 1842 when the orchestra of what was then the Vienna Court Opera decided to give symphonic concerts on its own, it performed the Austro-German classics for an elite cadre of longtime subscribers. By restricting membership to local players and their pupils, the orchestra cultivated what Furtwängler, who spent as much time conducting in Vienna as in Berlin, described as a “homogeneous and distinct tone quality.” At once dark and sweet, it was as instantly identifiable—and as characteristically Viennese—as the strong, spicy bouquet of a Gewürztraminer wine.
Unlike the Berlin Philharmonic, which played for whoever would pay the tab and programmed new music as a matter of policy, the Vienna Philharmonic chose not to diversify either its haute-bourgeois audience or its conservative repertoire. Instead, it played Beethoven, Brahms, Haydn, Mozart, and Schubert (and, later, Bruckner and Richard Strauss) in Vienna for the Viennese. Starting in the ’20s, the orchestra’s recordings consolidated its reputation as one of the world’s foremost instrumental ensembles, but its internal culture remained proudly insular.
What the two orchestras had in common was a nationalistic ethos, a belief in the superiority of Austro-German musical culture that approached triumphalism. One of the darkest manifestations of this ethos was their shared reluctance to hire Jews. The Berlin Philharmonic employed only four Jewish players in 1933, while the Vienna Philharmonic contained only 11 Jews at the time of the Anschluss, none of whom was hired after 1920. To be sure, such popular Jewish conductors as Otto Klemperer and Bruno Walter continued to work in Vienna for as long as they could. Two months before the Anschluss, Walter led and recorded a performance of the Ninth Symphony of Gustav Mahler, his musical mentor and fellow Jew, who from 1897 to 1907 had been the director of the Vienna Court Opera and one of the Philharmonic’s most admired conductors. But many members of both orchestras were open supporters of fascism, and not a few were anti-Semites who ardently backed Hitler. By 1942, 62 of the 123 active members of the Vienna Philharmonic were Nazi party members.
The admiration that Austro-German classical musicians had for Hitler is not entirely surprising since he was a well-informed music lover who declared in 1938 that “Germany has become the guardian of European culture and civilization.” He made the support of German art, music very much included, a key part of his political program. Accordingly, the Berlin Philharmonic was placed under the direct supervision of Joseph Goebbels, who ensured the cooperation of its members by repeatedly raising their salaries, exempting them from military service, and guaranteeing their old-age pensions. But there had never been any serious question of protest, any more than there would be among the members of the Vienna Philharmonic when the Nazis gobbled up Austria. Save for the Jews and one or two non-Jewish players who were fired for reasons of internal politics, the musicians went along unhesitatingly with Hitler’s desires.
With what did they go along? Above all, they agreed to the scrubbing of Jewish music from their programs and the dismissal of their Jewish colleagues. Some Jewish players managed to escape with their lives, but seven of the Vienna Philharmonic’s 11 Jews were either murdered by the Nazis or died as a direct result of official persecution. In addition, both orchestras performed regularly at official government functions and made tours and other public appearances for propaganda purposes, and both were treated as gems in the diadem of Nazi culture.
As for Furtwängler, the most prominent of the Austro-German orchestral conductors who served the Reich, his relationship to Nazism continues to be debated to this day. He had initially resisted the firing of the Berlin Philharmonic’s Jewish members and protected them for as long as he could. But he was also a committed (if woolly-minded) nationalist who believed that German music had “a different meaning for us Germans than for other nations” and notoriously declared in an open letter to Goebbels that “we all welcome with great joy and gratitude . . . the restoration of our national honor.” Thereafter he cooperated with the Nazis, by all accounts uncomfortably but—it must be said—willingly. A monster of egotism, he saw himself as the greatest living exponent of German music and believed it to be his duty to stay behind and serve a cause higher than what he took to be mere party politics. “Human beings are free wherever Wagner and Beethoven are played, and if they are not free at first, they are freed while listening to these works,” he naively assured a horrified Arturo Toscanini in 1937. “Music transports them to regions where the Gestapo can do them no harm.”
Once the war was over, the U.S. occupation forces decided to enlist the Berlin Philharmonic in the service of a democratic, anti-Soviet Germany. Furtwängler and Herbert von Karajan, who succeeded him as principal conductor, were officially “de-Nazified” and their orchestra allowed to function largely undisturbed, though six Nazi Party members were fired. The Vienna Philharmonic received similarly privileged treatment.
Needless to say, there was more to this decision than Cold War politics. No one questioned the unique artistic stature of either orchestra. Moreover, the Vienna Philharmonic, precisely because of its insularity, was now seen as a living museum piece, a priceless repository of 19th-century musical tradition. Still, many musicians and listeners, Jews above all, looked askance at both orchestras for years to come, believing them to be tainted by Nazism.
Indeed they were, so much so that they treated many of their surviving Jewish ex-members in a way that can only be described as vicious. In the most blatant individual case, the violinist Szymon Goldberg, who had served as the Berlin Philharmonic’s concertmaster under Furtwängler, was not allowed to reassume his post in 1945 and was subsequently denied a pension. As for the Vienna Philharmonic, the fact that it made Helmut Wobisch its executive director says everything about its deep-seated unwillingness to face up to its collective sins.
Be that as it may, scarcely any prominent musicians chose to boycott either orchestra. Leonard Bernstein went so far as to affect a flippant attitude toward the morally equivocal conduct of the Austro-German artists whom he encountered in Europe after the war. Upon meeting Herbert von Karajan in 1954, he actually told his wife Felicia that he had become “real good friends with von Karajan, whom you would (and will) adore. My first Nazi.”
At the same time, though, Bernstein understood what he was choosing to overlook. When he conducted the Vienna Philharmonic for the first time in 1966, he wrote to his parents:
I am enjoying Vienna enormously—as much as a Jew can. There are so many sad memories here; one deals with so many ex-Nazis (and maybe still Nazis); and you never know if the public that is screaming bravo for you might contain someone who 25 years ago might have shot me dead. But it’s better to forgive, and if possible, forget. The city is so beautiful, and so full of tradition. Everyone here lives for music, especially opera, and I seem to be the new hero.
Did Bernstein sell his soul for the opportunity to work with so justly renowned an orchestra—and did he get his price by insisting that its members perform the symphonies of Mahler, with which he was by then closely identified? It is a fair question, one that does not lend itself to easy answers.
Even more revealing is the case of Bruno Walter, who never forgave Furtwängler for staying behind in Germany, informing him in an angry letter that “your art was used as a conspicuously effective means of propaganda for the regime of the Devil.” Yet Walter’s righteous anger did not stop him from conducting in Vienna after the war. Born in Berlin, he had come to identify with the Philharmonic so closely that it was impossible for him to seriously consider quitting its podium permanently. “Spiritually, I was a Viennese,” he wrote in Theme and Variations, his 1946 autobiography. In 1952, he made a second recording with the Vienna Philharmonic of Mahler’s Das Lied von der Erde, whose premiere he had conducted in 1911 and which he had recorded in Vienna 15 years earlier. One wonders what Walter, who had converted to Christianity but had been driven out of both his native lands for the crime of being Jewish, made of the text of the last movement: “My friend, / On this earth, fortune has not been kind to me! / Where do I go?”
As for the two great orchestras of the Third Reich, both have finally acknowledged their guilt and been forgiven, at least by those who know little of their past. It would occur to no one to decline on principle to perform with either group today. Such a gesture would surely be condemned as morally ostentatious, an exercise in what we now call virtue-signaling. Yet it is impossible to forget what Samuel Lipman wrote in 1993 in Commentary apropos the wartime conduct of Furtwängler: “The ultimate triumph of totalitarianism, I suppose it can be said, is that under its sway only a martyred death can be truly moral.” For the only martyrs of the Berlin and Vienna Philharmonics were their Jews. The orchestras themselves live on, tainted and beloved.
He knows what to reveal and what to conceal, understands the importance of keeping the semblance of distance between oneself and the story of the day, and comprehends the ins and outs of anonymous sourcing. Within days of his being fired by President Trump on May 9, for example, little green men and women, known only as his “associates,” began appearing in the pages of the New York Times and Washington Post to dispute key points of the president’s account of his dismissal and to promote Comey’s theory of the case.
“In a Private Dinner, Trump Demanded Loyalty,” the New York Times reported on May 11. “Comey Demurred.” The story was a straightforward narrative of events from Comey’s perspective, capped with an obligatory denial from the White House. The next day, the Washington Post reported, “Comey associates dispute Trump’s account of conversations.” The Post did not identify Comey’s associates, other than saying that they were “people who have worked with him.”
Maybe they were the same associates who had gabbed to the Times. Or maybe they were different ones. Who can tell? Regardless, the story these particular associates gave to the Post was readable and gripping. Comey, the Post reported, “was wary of private meetings and discussions with the president and did not offer the assurance, as Trump has claimed, that Trump was not under investigation as part of the probe into Russian interference in last year’s election.”
On May 16, Michael S. Schmidt of the Times published his scoop, “Comey Memo Says Trump Asked Him to End Flynn Investigation.” Schmidt didn’t see the memo for himself. Parts of it were read to him by—you guessed it—“one of Mr. Comey’s associates.” The following day, Robert Mueller was appointed special counsel to oversee the Russia investigation. On May 18, the Times, citing “two people briefed” on a call between Comey and the president, reported, “Comey, Unsettled by Trump, Is Said to Have Wanted Him Kept at a Distance.” And by the end of that week, Comey had agreed to testify before the Senate Intelligence Committee.
As his testimony approached, Comey’s people became more aggressive in their criticisms of the president. “Trump Should Be Scared, Comey Friend Says,” read the headline of a CNN interview with Brookings Institution fellow Benjamin Wittes. This “Comey friend” said he was “very shocked” when he learned that President Trump had asked Comey for loyalty. “I have no doubt that he regarded the group of people around the president as dishonorable,” Wittes said.
Comey, Wittes added, was so uncomfortable at the White House reception in January honoring law enforcement—the one where Comey lumbered across the room and Trump whispered something in his ear—that, as CNN paraphrased it, he “stood in a position so that his blue blazer would blend in with the room’s blue drapes in an effort for Trump to not notice him.” The integrity, the courage—can you feel it?
On June 6, the day before Comey’s prepared testimony was released, more “associates” told ABC that the director would “not corroborate Trump’s claim that on three separate occasions Comey told the president he was not under investigation.” And a “source with knowledge of Comey’s testimony” told CNN the same thing. In addition, ABC reported that, according to “a source familiar with Comey’s thinking,” the former director would say that Trump’s actions stopped short of obstruction of justice.
Maybe those sources weren’t as “familiar with Comey’s thinking” as they thought or hoped? To maximize the press coverage he already dominated, Comey had authorized the Senate Intelligence Committee to release his testimony ahead of his personal interview. That testimony told a different story than what had been reported by CNN and ABC (and by the Post on May 12). Comey had in fact told Trump the president was not under investigation—on January 6, January 27, and March 30. Moreover, the word “obstruction” did not appear at all in his written text. The senators asked Comey if he felt Trump obstructed justice. He declined to answer either way.
My guess is that Comey’s associates lacked Comey’s scalpel-like, almost Jesuitical ability to make distinctions, and therefore misunderstood what he was telling them to say to the press. Because it’s obvious Comey was the one behind the stories of Trump’s dishonesty and bad behavior. He admitted as much in front of the cameras in a remarkable exchange with Senator Susan Collins of Maine.
Comey said that, after Trump tweeted on May 12 that he’d better hope there aren’t “tapes” of their conversations, “I asked a friend of mine to share the content of the memo with a reporter. Didn’t do it myself, for a variety of reasons. But I asked him to, because I thought that might prompt the appointment of a special counsel. And so I asked a close friend of mine to do it.”
Collins asked whether that friend had been Wittes, known to cable news junkies as Comey’s bestie. Comey said no. The source for the New York Times article was “a good friend of mine who’s a professor at Columbia Law School,” Daniel Richman.
Every time I watch or read that exchange, I am amazed. Here is the former director of the FBI just flat-out admitting that, for months, he wrote down every interaction he had with the president of the United States because he wanted a written record in case the president ever fired or lied about him. And when the president did fire and lie about him, that director set in motion a series of public disclosures with the intent of not only embarrassing the president, but also forcing the appointment of a special counsel who might end up investigating the president for who knows what. And none of this would have happened if the president had not fired Comey or tweeted about him. He told the Senate that if Trump hadn’t dismissed him, he most likely would still be on the job.
Rarely, in my view, are high officials so transparent in describing how Washington works. Comey revealed to the world that he was keeping a file on his boss, that he used go-betweens to get his story into the press, that “investigative journalism” is often just powerful people handing documents to reporters to further their careers or agendas or even to get revenge. And as long as you maintain some distance from the fallout and stick to the absolute letter of the law, you will come out on top, provided you have a small army of nightingales singing to reporters on your behalf.
“It’s the end of the Comey era,” A.B. Stoddard said on Special Report with Bret Baier the other day. On the contrary: I have a feeling that, as the Russia investigation proceeds, we will be hearing much more from Comey. And from his “associates.” And his “friends.” And persons “familiar with his thinking.”
In April, COMMENTARY asked a wide variety of writers, thinkers, and broadcasters to respond to this question: Is free speech under threat in the United States? We received twenty-seven responses. We publish them here in alphabetical order.
Floyd Abrams
Free expression threatened? By Donald Trump? I guess you could say so.
When a president engages in daily denigration of the press, when he characterizes it as the enemy of the people, when he repeatedly says that the libel laws should be “loosened” so he can personally commence more litigation, when he says that journalists shouldn’t be allowed to use confidential sources, it is difficult even to suggest that he has not threatened free speech. And when he says to the head of the FBI (as former FBI director James Comey has said that he did) that Comey should consider “putting reporters in jail for publishing classified information,” it is difficult not to take those threats seriously.
The harder question, though, is this: How real are the threats? Or, as Michael Gerson put it in the Washington Post: Will Trump “go beyond mere Twitter abuse and move against institutions that limit his power?” Some of the president’s threats against the institution of the press, wittingly or not, have been simply preposterous. Surely someone has told him by now that neither he nor Congress can “loosen” libel laws; while each state has its own libel law, there is no federal libel law and thus nothing for him to loosen. What he obviously takes issue with is the impact that the Supreme Court’s 1964 First Amendment opinion in New York Times v. Sullivan has had on state libel laws. The case determined that public officials who sue for libel may not prevail unless they demonstrate that the statements made about them were false and were made with actual knowledge or suspicion of that falsity. So his objection to the rules governing libel law is to nothing less than the application of the First Amendment itself.
In other areas, however, the Trump administration has far more power to imperil free speech. We live under an Espionage Act, adopted a century ago, which is both broad in its language and uncommonly vague in its meaning. As such, it remains a half-open door through which an administration that is hostile to free speech might walk. Such an administration could initiate criminal proceedings against journalists who write about defense- or intelligence-related topics on the basis that classified information was leaked to them by present or former government employees. No such action has ever been commenced against a journalist. Press lawyers and civil-liberties advocates have strong arguments that the law may not be read so broadly and still be consistent with the First Amendment. But the scope of the Espionage Act and the impact of the First Amendment upon its interpretation remain unknown.
A related area in which the attitude of an administration toward the press may affect the latter’s ability to function as a check on government relates to the ability of journalists to protect the identity of their confidential sources. The Obama administration prosecuted more Espionage Act cases against sources of information to journalists than all prior administrations combined. After a good deal of deserved press criticism, it agreed to expand the internal guidelines of the Department of Justice designed to limit the circumstances under which such source revelation is demanded. But the guidelines are none too protective and are, after all, simply guidelines. A new administration is free to change or limit them or, in fact, abandon them altogether. In this area, as in so many others, it is too early to judge the ultimate treatment of free expression by the Trump administration. But the threats are real, and there is good reason to be wary.
Floyd Abrams is the author of The Soul of the First Amendment (Yale University Press, 2017).
Ayaan Hirsi Ali
Freedom of speech is being threatened in the United States by a nascent culture of hostility to different points of view. As political divisions in America have deepened, a conformist mentality of “right thinking” has spread across the country. Increasingly, American universities, where no intellectual doctrine ought to escape critical scrutiny, are some of the most restrictive domains when it comes to asking open-ended questions on subjects such as Islam.
Legally, speech in the United States is protected to a degree unmatched in almost any industrialized country. The U.S. has avoided unpredictable Canadian-style restrictions on speech, for example. I remain optimistic that as long as we have the First Amendment in the U.S., any attempt at formal legal censorship will be vigorously challenged.
Culturally, however, matters are very different in America. The regressive left is at the forefront of threats to free speech on any issue that is important to progressives. The current pressure coming from those who call themselves “social-justice warriors” is unlikely to lead to successful legislation to curb the First Amendment. Instead, censorship is spreading in the cultural realm, particularly at institutions of higher learning.
The way activists of the regressive left achieve silence or censorship is by creating a taboo, and one of the most pernicious taboos in operation today is the word “Islamophobia.” Islamists are similarly motivated to rule any critical scrutiny of Islamic doctrine out of order. There is now a university center (funded by Saudi money) in the U.S. dedicated to monitoring and denouncing incidents of “Islamophobia.”
The term “Islamophobia” is used against critics of political Islam, but also against progressive reformers within Islam. The term implies an irrational fear that is tainted by hatred, and it has had a chilling effect on free speech. In fact, “Islamophobia” is a poorly defined term. Islam is not a race, and it is very often perfectly rational to fear some expressions of Islam. No set of ideas should be beyond critical scrutiny.
To push back in this cultural realm—in our universities, in public discourse—those favoring free speech should focus more on the message of dawa, the set of ideas that the Islamists want to promote. If the aims of dawa are sufficiently exposed, ordinary Americans and Muslim Americans will reject it. The Islamist message is a message of divisiveness, misogyny, and hatred. It’s anachronistic and wants people to live by tribal norms dating from the seventh century. The best antidote to Islamic extremism is the revelation of what its primary objective is: a society governed by Sharia. This is the opposite of censorship: It is documenting reality. What is life like in Saudi Arabia, Iran, the Northern Nigerian States? What is the true nature of Sharia law?
Islamists want to hide the true meaning of Sharia, Jihad, and the implications for women, gays, religious minorities, and infidels under the veil of “Islamophobia.” Islamists use “Islamophobia” to obfuscate their vision and imply that any scrutiny of political Islam is hatred and bigotry. The antidote to this is more exposure and more speech.
As pressure on freedom of speech increases from the regressive left, we must reject the notions that only Muslims can speak about Islam, and that any critical examination of Islamic doctrines is inherently “racist.”
Instead of contorting Western intellectual traditions so as not to offend our Muslim fellow citizens, we need to defend the Muslim dissidents who are risking their lives to promote the human rights we take for granted: equality for women, tolerance of all religions and orientations, our hard-won freedoms of speech and thought.
It is by nurturing and protecting such speech that progressive reforms can emerge within Islam. By accepting the increasingly narrow confines of acceptable discourse on issues such as Islam, we do dissidents and progressive reformers within Islam a grave disservice. For truly progressive reforms within Islam to be possible, full freedom of speech will be required.
Ayaan Hirsi Ali is a research fellow at the Hoover Institution, Stanford University, and the founder of the AHA Foundation.
Lee C. Bollinger
I know it is too much to expect that political discourse mimic the measured, self-questioning, rational, footnoting standards of the academy, but there is a difference between robust political debate and political debate infected with fear or panic. The latter introduces a state of mind that is visceral and irrational. In the realm of fear, we move beyond the reach of reason and a sense of proportionality. When we fear, we lose the capacity to listen and can become insensitive and mean.
Our Constitution is well aware of this fact about the human mind and of its negative political consequences. In the First Amendment jurisprudence established over the past century, we find many expressions of the problematic state of mind that fear produces. Among the most famous and potent is that of Justice Brandeis in Whitney v. California in 1927, one of the many cases involving aggravated fears of subversive threats from abroad. “It is the function of [free] speech,” he said, “to free men from the bondage of irrational fears.” “Men feared witches,” Brandeis continued, “and burned women.”
Today, our “witches” are terrorists, and Brandeis’s metaphorical “women” include the refugees (mostly children) and displaced persons, immigrants, and foreigners whose lives have been thrown into suspension and doubt by policies of exclusion.
The same fears of the foreign that take hold of a population inevitably infect our internal interactions and institutions, yielding suppression of unpopular and dissenting voices, victimization of vulnerable groups, attacks on the media, and the rise of demagoguery, with its disdain for facts, reason, expertise, and tolerance.
All of this places a very special obligation on those of us within universities. Not only must we make the case in every venue for the values that form the core of who we are and what we do, but we must also live up to our own principles of free inquiry and fearless engagement with all ideas. This is why the recent incidents on a handful of college campuses disrupting and effectively censoring speakers are so alarming. Such acts not only betray a basic principle but also inflame a rising prejudice against the academic community, and they feed efforts to delegitimize our work at the very moment when it is most needed.
I do not for a second support the view that this generation has an unhealthy aversion to engaging differences of opinion. That is a modern trope of polarization, as is the portrayal of universities as hypocritical about academic freedom and political correctness. But now, in this environment especially, universities must be at the forefront of defending the rights of all students and faculty to listen to controversial voices, to engage disagreeable viewpoints, and to make every effort to demonstrate our commitment to the sort of fearless and spirited debate that we are simultaneously asking of the larger society. Anyone with a voice can shout over a speaker; but being able to listen to and then effectively rebut those with whom we disagree—particularly those who themselves peddle intolerance—is one of the greatest skills our education can bestow. And it is something our democracy desperately needs more of. That is why, I say to you now, if speakers who are being denied access to other campuses come here, I will personally volunteer to introduce them, and listen to them, however much I may disagree with them. But I will also never hesitate to make clear why I disagree with them.
Lee C. Bollinger is the 19th president of Columbia University and the author of Uninhibited, Robust, and Wide-Open: A Free Press for a New Century. This piece has been excerpted from President Bollinger’s May 17 commencement address.
Richard A. Epstein
Today, the greatest threat to the constitutional protection of freedom of speech comes from campus rabble-rousers who invoke this very protection. In their book, the speech of people like Charles Murray and Heather Mac Donald constitutes a form of violence, bordering on genocide, that receives no First Amendment protection. Enlightened protesters are both bound and entitled to shout them down, by force or other disruptive actions, if their universities are so foolish as to extend them an invitation to speak. Any indignant minority may take the law into its own hands to eradicate the intellectual cancer before it spreads on its own campus.
By such tortured logic, a new generation of vigilantes distorts First Amendment doctrine: Speech becomes violence, and violence becomes a heroic act of self-defense. The standard First Amendment interpretation emphatically rejects that view. Of course, the First Amendment doesn’t let you say what you want whenever and wherever you want. Your freedom of speech is subject to the same limitations as your freedom of action. So you have no constitutional license to assault other people, to lie to them, or to form cartels to bilk them in the marketplace. But folks such as Murray, Mac Donald, and even Yiannopoulos do not come close to crossing into that forbidden territory. They are not using, for example, “fighting words,” a category rightly limited to words or actions calculated to provoke immediate aggression against a known target. Fighting words are worlds apart from speech that provokes a negative reaction in those who find it offensive solely because of the content of its message.
This distinction is central to the First Amendment. Fighting words have to be blocked by well-tailored criminal and civil sanctions, lest some people gain license to intimidate others out of speaking or peaceably assembling. The remedy for mere offense, by contrast, is to speak one’s mind in response. But offense never gives anyone the right to block the speech of others, lest everyone be able to unilaterally enlarge his sphere of action by getting really angry about the beliefs of others. No one has the right to silence others by working himself into a fit of rage.
Obviously, it is intolerable to let mutual animosity generate factional warfare, whereby everyone can use force to silence rivals. To avoid this war of all against all, each side claims that only its actions are privileged. These selective claims quickly degenerate into a form of viewpoint discrimination, which undermines one of the central protections that traditional First Amendment law erects: a wall against each and every group out to destroy the level playing field on which robust political debate rests. Every group should be at risk for having its message fall flat. The new campus radicals want to upend that understanding by shutting down their adversaries if their universities do not. Their aggression must be met, if necessary, by counterforce. Silence in the face of aggression is not an acceptable alternative.
Richard A. Epstein is the Laurence A. Tisch Professor of Law at the New York University School of Law.
David French
We’re living in the midst of a troubling paradox. At the exact same time that First Amendment jurisprudence has arguably never been stronger and more protective of free expression, millions of Americans feel they simply can’t speak freely. Indeed, talk to Americans living and working in the deep-blue confines of the academy, Hollywood, and the tech sector, and you’ll get a sense of palpable fear. They’ll explain that they can’t say what they think and keep their jobs, their friends, and sometimes even their families.
The government isn’t cracking down or censoring; instead, Americans are using free speech to destroy free speech. For example, a social-media shaming campaign is an act of free speech. So is an economic boycott. So is turning one’s back on a public speaker. So is a private corporation firing a dissenting employee for purely political reasons. Each of these actions is largely protected from government interference, and each one represents an expression of the speaker’s ideas and values.
The problem, however, is obvious. The goal of each of these kinds of actions isn’t to persuade; it’s to intimidate. The goal isn’t to foster dialogue but to coerce conformity. The result is a marketplace of ideas that has been emptied of all but the approved ideological vendors—at least in those communities that are dominated by online thugs and corporate bullies. Indeed, this mindset has become so prevalent that in places such as Portland, Berkeley, Middlebury, and elsewhere, the bullies and thugs have crossed the line from protected—albeit abusive—speech into outright shout-downs and mob violence.
But there’s something else going on, something that’s insidious in its own way. While politically correct shaming still has great power in deep-blue America, its effect in the rest of the country is to trigger a furious backlash, one characterized less by a desire for dialogue and discourse than by its own rage and scorn. So we’re moving toward two Americas—one that ruthlessly (and occasionally illegally) suppresses dissenting speech and the other that is dangerously close to believing that the opposite of political correctness isn’t a fearless expression of truth but rather the fearless expression of ideas best calculated to enrage your opponents.
The result is a partisan feedback loop where right-wing rage spurs left-wing censorship, which spurs even more right-wing rage. For one side, a true free-speech culture is a threat to feelings, sensitivities, and social justice. The other side waves high the banner of “free speech” to sometimes elevate the worst voices to the highest platforms—not so much to protect the First Amendment as to infuriate the hated “snowflakes” and trigger the most hysterical overreactions.
The culturally sustainable argument for free speech is something else entirely. It reminds the cultural left of its own debt to free speech while reminding the political right that a movement allegedly centered around constitutional values can’t abandon the concept of ordered liberty. The culture of free speech thrives when all sides remember their moral responsibilities—to both protect the right of dissent and to engage in ideological combat with a measure of grace and humility.
David French is a senior writer at National Review.
Pamela Geller
The real question isn’t whether free speech is under threat in the United States, but rather, whether it’s irretrievably lost. Can we get it back? Not without war, I suspect, as evidenced by the violence at colleges whenever there’s the shamefully rare event of a conservative speaker on campus.
Free speech is the soul of our nation and the foundation of all our other freedoms. If we can’t speak out against injustice and evil, those forces will prevail. Free speech is the bedrock of a free society; without it, a tyrant can wreak havoc unopposed while his opponents are silenced.
With that principle in mind, I organized a free-speech event in Garland, Texas. The world had recently been rocked by the murder of the Charlie Hebdo cartoonists. My version of “Je Suis Charlie” was an event here in America to show that we can still speak freely and draw whatever we like in the Land of the Free. Yet even after jihadists attacked our event, I was blamed—by Donald Trump among others—for provoking Muslims. And if I tried to hold a similar event now, no arena in the country would allow me to do so—not just because of the security risk, but because of the moral cowardice of all intellectual appeasers.
Under what law is it wrong to depict Muhammad? Under Islamic law. But I am not a Muslim, and I don’t live under Sharia. America isn’t under Islamic law, yet for standing for free speech, I’ve been:
- Prevented from running our advertisements in every major city in this country. We have won free-speech lawsuits all over the country, which officials circumvent by prohibiting all political ads (while making exceptions for ads from Muslim advocacy groups);
- Shunned by the right, shut out of the Conservative Political Action Conference;
- Shunned by Jewish groups at the behest of terror-linked groups such as the Council on American-Islamic Relations;
- Blacklisted from speaking at universities;
- Prevented from publishing books, for security reasons and because publishers fear shaming from the left;
- Banned from Britain.
A Seattle court accused me of trying to shut down free speech after we merely tried to run an FBI poster on global terrorism: authorities in other cities had banned all political ads to avoid running ours, and Seattle blamed us for the shutdown. It was like blaming a woman for being raped because she was wearing a short skirt.
This kind of vilification and shunning is key to the left’s plan to shut down all dissent from its agenda—it makes legislation restricting speech unnecessary.
The same refusal to allow our point of view to be heard has manifested itself elsewhere. The foundation of my work is individual rights and equality for all before the law. These are the foundational principles of our constitutional republic. That is now considered controversial. Truth is the new hate speech. Truth is going to be criminalized.
The First Amendment doesn’t only protect ideas that are sanctioned by the cultural and political elites. If “hate speech” laws are enacted, who would decide what’s permissible and what’s forbidden? The government? The gunmen in Garland?
There has been an inversion of the founding premise of this nation: no longer the subordination of might to right, but of right to might. History is littered with the bloody consequences of such inversions.
Pamela Geller is the editor in chief of the Geller Report and president of the American Freedom Defense Initiative.
Jonah Goldberg
Of course free speech is under threat in America. Frankly, it’s always under threat in America because it’s always under threat everywhere. Ronald Reagan was right when he said in 1961, “Freedom is never more than one generation away from extinction. We didn’t pass it on to our children in the bloodstream. It must be fought for, protected, and handed on for them to do the same.”
This is more than political boilerplate. Reagan identified the source of the threat: human nature. God may have endowed us with a right to liberty, but he didn’t give us all a taste for it. As with most finer things, we must work to acquire a taste for it. That is what civilization—or at least our civilization—is supposed to do: cultivate attachments to certain ideals. “Cultivate” shares the same Latin root as “culture,” cultus, and properly understood they mean the same thing: to grow, nurture, and sustain through labor.
In the past, threats to free speech have taken many forms—nationalist passion, Comstockery (both good and bad), political suppression, etc.—but the threat to free speech today is different. It is less top-down and more bottom-up. We are cultivating a generation of young people to reject free speech as an important value.
One could mark the beginning of the self-esteem movement with Nathaniel Branden’s 1969 paper, “The Psychology of Self-Esteem,” which claimed that “feelings of self-esteem were the key to success in life.” This understandable idea ran amok in our schools and in our culture. When I was a kid, Saturday-morning cartoons were punctuated with public-service announcements telling kids: “The most important person in the whole wide world is you, and you hardly even know you!”
The self-esteem craze was just part of the cocktail of educational fads. Other ingredients included multiculturalism, the anti-bullying crusade, and, of course, that broad phenomenon known as “political correctness.” Combined, they’ve produced a generation that rejects the old adage “sticks and stones can break my bones but words can never harm me” in favor of the notion that “words hurt.” What we call political correctness has been on college campuses for decades. But it lacked a critical mass of young people who were sufficiently receptive to it to make it a fully successful ideology. The campus commissars welcomed the new “snowflakes” with open arms; truly, these are the ones we’ve been waiting for.
“Words hurt” is a fashionable concept in psychology today. (See Psychology Today: “Why Words Can Hurt at Least as Much as Sticks and Stones.”) But it’s actually a much older idea than the “sticks and stones” aphorism. For most of human history, it was a crime to say insulting or “injurious” things about aristocrats, rulers, the Church, etc. That tendency didn’t evaporate with the Divine Right of Kings. Jonathan Haidt has written at book length about our natural capacity to create zones of sanctity, immune from reason.
And that is the threat free speech faces today. Those who inveigh against “hate speech” are in reality fighting “heresy speech”—ideas that do “violence” to sacred notions of self-esteem, racial or gender equality, climate change, and so on. Put whatever label you want on it, contemporary “social justice” progressivism acts as a religion, and it has no patience for blasphemy.
When Napoleon’s forces converted churches into stables, the clergy did not object on the grounds that regulations regarding the proper care and feeding of animals had been violated. They complained of sacrilege and blasphemy. When Charles Murray or Christina Hoff Sommers visits college campuses, the protesters are behaving like zealous acolytes of St. Jerome. Appeals to the First Amendment have as much power over the “antifa” fanatics as appeals to Odin did over champions of the New Faith.
That is the real threat to free speech today.
Jonah Goldberg is a senior editor at National Review and a fellow at the American Enterprise Institute.
KC Johnson
In early May, the Washington Post urged universities to make clear that “racist signs, symbols, and speech are off-limits.” Given the extraordinarily broad definition of what constitutes “racist” speech at most institutions of higher education, this demand would single out most right-of-center (and, in some cases, even centrist and liberal) discourse on issues of race or ethnicity. The editorial provided the highest-profile example of how hostility to free speech, once confined to the ideological fringe on campus, has migrated to the liberal mainstream.
The last few years have seen periodic college protests—featuring claims that significant amounts of political speech constitute “violence,” thereby justifying censorship—followed by even more troubling attempts to appease the protesters. After the mob scene that greeted Charles Murray upon his visit to Middlebury College, for instance, the student government criticized any punishment for the protesters, and several student leaders wanted to require that future speakers conform to the college’s “community standard” on issues of race, gender, and ethnicity. In the last few months, similar attempts to stifle the free exchange of ideas in the name of promoting diversity occurred at Wesleyan, Claremont McKenna, and Duke. Offering an extreme interpretation of this point of view, one CUNY professor recently dismissed dialogue as “inherently conservative,” since it reinforced the “relations of power that presently exist.”
It’s easy, of course, to dismiss campus hostility to free speech as affecting only a small segment of American public life—albeit one that trains the next generation of judges, legislators, and voters. But, as Jonathan Chait observed in 2015, denying “the legitimacy of political pluralism on issues of race and gender” has broad appeal on the left. It is only most apparent on campus because “the academy is one of the few bastions of American life where the political left can muster the strength to impose its political hegemony upon others.” During his time in office, Barack Obama generally urged fellow liberals to support open intellectual debate. But the current campus environment previews the position of free speech in a post-Obama Democratic Party, increasingly oriented around identity politics.
Waning support on one end of the ideological spectrum for this bedrock American principle should provide a political opening for the other side. The Trump administration, however, seems poorly suited to make the case. Throughout his public career, Trump has rarely supported free speech, even in the abstract, and has periodically embraced legal changes to facilitate libel lawsuits. Moreover, the right-wing populism that motivates Trump’s base has a long tradition of ideological hostility to civil liberties of all types. Even in campus contexts, conservatives have defended free speech inconsistently, as seen in recent calls that CUNY disinvite anti-Zionist fanatic Linda Sarsour as a commencement speaker.
In a sharply polarized political environment, awash in dubiously sourced information, free speech is all the more important. Yet this same environment has seen both sides, most blatantly elements of the left on campuses, demand restrictions on their ideological foes’ free speech in the name of promoting a greater good.
KC Johnson is a professor of history at Brooklyn College and the CUNY Graduate Center.
Laura Kipnis
I find myself with a strange-bedfellows problem lately. Here I am, a left-wing feminist professor invited onto the pages of Commentary—though I’d be thrilled if it were still 1959—while fielding speaking requests from right-wing think tanks and libertarians who oppose child-labor laws.
Somehow I’ve ended up in the middle of the free-speech-on-campus debate. My initial crime was publishing a somewhat contentious essay about campus sexual paranoia that put me on the receiving end of Title IX complaints. Apparently I’d created a “hostile environment” at my university. I was investigated (for 72 days). Then I wrote up what I’d learned about these campus inquisitions in a second essay. Then I wrote about it all some more, in a book exposing the kangaroo-court elements of the Title IX process—and the extra-legal gag orders imposed on everyone caught in its widening snare.
I can’t really comment on whether more charges have been filed against me over the book. I’ll just say that writing about being a Title IX respondent could easily become a life’s work. I learned, shortly after writing this piece, that my publisher and I were being sued for defamation, among other things.
Is free speech under threat on American campuses? Yes. We know all about student activists who wish to shut down talks by people with opposing views. I got smeared with a bit of that myself, after a speaking invitation at Wellesley—some students made a video protesting my visit before I arrived. The talk went fine, though a group of concerned faculty circulated an open letter afterward also protesting the invitation: My views on sexual politics were too heretical, and might have offended students.
I didn’t take any of this too seriously, even as right-wing pundits crowed, with Wellesley as their latest outrage bait. It was another opportunity to mock student activists, and the fact that I was myself a feminist rather than a Charles Murray or a Milo Yiannopoulos made them positively gleeful.
I do find myself wondering where all my new free-speech pals were when another left-wing professor, Steven Salaita, was fired (or, if you prefer the euphemism, “his job offer was withdrawn”) from the University of Illinois after he tweeted criticism of Israel’s Gaza policy. Sure, the tweets were hyperbolic, but hyperbole and strong opinions are protected speech, too.
I guess free speech is easy to celebrate until it actually challenges something. Funny, I haven’t seen Milo around lately—so beloved by my new friends when he was bashing minorities and transgender kids. Then he mistakenly said something authentic (who knew he was capable of it!), reminiscing about an experience a lot of gay men have shared: teenage sex with older men. He tried walking it back—no, no, he’d been a victim, not a participant—but his fan base was shrieking about pedophilia and fleeing in droves. Gee, they were all so against “political correctness” a few minutes before.
It’s easy to be a free-speech fan when your feathers aren’t being ruffled. No doubt what makes me palatable to the anti-PC crowd is having thus far failed to ruffle them enough. I’m just going to have to work harder.
Laura Kipnis’s latest book is Unwanted Advances: Sexual Paranoia Comes to Campus.
Eugene Kontorovich
The free and open exchange of views—especially politically conservative or traditionally religious ones—is being challenged. This is taking place not just on college campuses but throughout our public spaces and cultural institutions. James Watson was fired from the lab he had led since 1968 and could not speak at New York University because of petty, censorious students who would not know DNA from LSD. Our nation’s founders and heroes are being “disappeared” from public commemoration, like Trotsky from a photograph of Soviet rulers.
These attacks on “free speech” are not the result of government action. They are not what the First Amendment protects against. The current methods—professional and social shaming, exclusion, and employment termination—are more inchoate, and their effects are multiplied by self-censorship. A young conservative legal scholar might find himself thinking: “If the late Justice Antonin Scalia can posthumously be deemed a ‘bigot’ by many academics, what chance have I?”
Ironically, artists and intellectuals have long prided themselves on being the first defenders of free speech. Today, it is the institutions of both popular and high culture that are the censors. Is there one poet in the country who would speak out for Ann Coulter?
The inhibition of speech at universities is part of a broader social phenomenon of making longstanding, traditional views and practices sinful overnight. Conservatives have not put up much resistance to this. To paraphrase Martin Niemöller’s famous dictum: “First they came for Robert E. Lee, and I said nothing, because Robert E. Lee meant nothing to me.”
The situation with respect to Israel and expressions of support for it deserves separate discussion. Even as university administrators give political power to favored ideologies by letting them create “safe spaces” (safe from opposing views), Jews find themselves and their state at the receiving end of claims of apartheid—modern-day blood libels. It is not surprising if Jewish students react by demanding that they get a safe space of their own. It is even less surprising if their parents, paying $65,000 a year, want their children to have a nicer time of it. One hears Jewish groups frequently express concern about Jewish students feeling increasingly isolated and uncomfortable on campus.
But demanding selective protection from the new ideological commissars is unlikely to bring the desired results. First, this new ideology, even if it can be harnessed momentarily to give respite to harassed Jews on campus, is ultimately illiberal and will be controlled by “progressive” forces. Second, it is not so terrible for Jews in the Diaspora to feel a bit uncomfortable. It has been the common condition of Jews throughout the millennia. The social awkwardness that Jews at liberal arts schools might feel in being associated with Israel is of course one of the primary justifications for the Jewish State. Facing the snowflakes incapable of hearing a dissonant view—but who nonetheless, in the grip of intersectional ecstasy, revile Jewish self-determination—Jewish students should toughen up.
Eugene Kontorovich teaches constitutional law at Northwestern University and heads the international law department of the Kohelet Policy Forum in Jerusalem.
Nicholas Lemann
There’s an old Tom Wolfe essay in which he describes being on a panel discussion at Princeton in 1965 and provoking the other panelists by announcing that America, rather than being in crisis, was in the middle of a “happiness explosion.” He was arguing that the mass effects of 20 years of post–World War II prosperity made for a larger phenomenon than the Vietnam War, the racial crisis, and the other primary concerns of intellectuals at the time.
In the same spirit, I’d say that we are in the middle of a free-speech explosion, because of 20-plus years of the Internet and 10-plus years of social media. If one understands speech as disseminated individual opinion, then surely we live in the free-speech-est society in the history of the world. Anybody with access to the unimpeded World Wide Web can say anything to a global audience, and anybody can hear anything, too. All threats to free speech should be understood in the context of this overwhelming reality.
It is a comforting fantasy that a genuine free-speech regime will empower mainly “good,” but previously repressed, speech. Conversely, repressive regimes that are candid enough to explain their anti-free-speech policies usually say that they’re not against free speech, just “bad” speech. We have to accept that more free speech probably means, in the aggregate, more bad speech, and also a weakening of the power, authority, and economic support for information professionals such as journalists. Welcome to the United States in 2017.
I am lucky enough to live and work on the campus of a university, Columbia, that has been blessedly free of successful attempts to repress free speech. Just in the last few weeks, Charles Murray and Dinesh D’Souza have spoken here without incident. But, yes, the evidently growing popularity of the idea that “hate speech” shouldn’t be permitted on campuses is a problem, especially, it seems, at small private liberal-arts colleges. We should all do our part, and I do, by frequently and publicly endorsing free-speech principles. Opposing the BDS movement falls squarely into that category.
It’s not just on campuses that free-speech vigilance is needed, though. The number-one threat to free speech, to my mind, is that the wide-open Web has been replaced by privately owned platforms such as Facebook and Google as the way most people experience the public life of the Internet. These companies are committed to banning “hate speech,” and they are eager to operate freely in countries, like China, that don’t permit free political speech. That makes for a far more consequential constraint on expression than any campus speech code.
Also, Donald Trump regularly engages in presidentially unprecedented rhetoric demonizing people who disagree with him. He seems to think this is all in good fun, but, as we have already seen at his rallies, not everybody hears it that way. The place where Trumpism will endanger free speech isn’t in the center—the White House press room—but at the periphery, for example in the way that local police handle bumptious protestors and the journalists covering them. This is already happening around the country. If Trump were as disciplined and knowledgeable as Vladimir Putin or Recep Tayyip Erdogan, which so far he seems not to be, then free speech could be in even more serious danger from government, which in most places is its usual main enemy.
Nicholas Lemann is a professor at Columbia Journalism School and a staff writer for the New Yorker.
Michael J. Lewis
Free speech is a right, but it is also a habit, and where the habit shrivels, so will the right. If free speech today is in headlong retreat—everywhere threatened by regulation, organized harassment, and even violence—it is in part because our political culture allowed the practice of persuasive oratory to atrophy. The process began in 1973, as an unforeseen side effect of Roe v. Wade. Legislators were delighted to learn that by relegating this divisive matter of public policy to the Supreme Court and adopting a merely symbolic position, they could sit all the more safely in their safe seats.
Since then, one crucial question of public policy after another has been punted out of the realm of politics and into the judicial realm. Issues that might have been debated with all the rhetorical agility of a Lincoln and a Douglas, and then subjected to a process of negotiation, compromise, and voting, have instead been settled by decree: e.g., Chevron, Kelo, Obergefell. The consequences for speech have been pernicious. Since the time of Pericles, deliberative democracy has been predicated on the art of persuasion, which demands the forceful clarity of thought and expression without which no one has ever been persuaded. But a legislature that cedes its authority to judges and regulators will awaken to discover that its oratorical culture has been stunted. When politicians, rather than seeking to convince and win over, prefer to project a studied and pleasant vagueness, debate withers into tedious defensive performance. It has been decades since any presidential debate has seen sustained give-and-take over a matter of policy. If there is any suspense at all, it is only the possibility that a fatigued or peeved candidate might blurt out that tactless shard of truth known as a gaffe.
A generation accustomed to hearing platitudes smoothly dispensed from behind a teleprompter will find the speech of a fearless extemporaneous speaker to be startling, even disquieting; unfamiliar ideas always are. Unhappily, they have been taught to interpret that disquiet as an injury done to them, rather than as a premise offered to them to consider. All this would not have happened—certainly not to this extent—had not our deliberative democracy decided a generation ago that it preferred the security of incumbency to the risks of unshackled debate. The compulsory contraction of free speech on college campuses is but the logical extension of the voluntary contraction of free speech in our political culture.
Michael J. Lewis’s new book is City of Refuge: Separatists and Utopian Town Planning (Princeton University Press).
Heather Mac Donald
The answer to the symposium question depends on how powerful the transmission belt is between academia and the rest of the country. On college campuses, violence and brute force are silencing speakers who challenge left-wing campus orthodoxies. These totalitarian outbreaks have been met with listless denunciations by college presidents, followed by . . . virtually nothing. As of mid-May, the only discipline imposed for 2017’s mass attacks on free speech at UC Berkeley, Middlebury, and Claremont McKenna College was a letter of reprimand inserted—sometimes only temporarily—into the files of several dozen Middlebury students, accompanied by a brief period of probation. Previous outbreaks of narcissistic incivility, such as the screaming-girl fit at Yale and the assaults on attendees of Yale’s Buckley program, were discreetly ignored by college administrators.
Meanwhile, the professoriate unapologetically defends censorship and violence. After the February 1 riot in Berkeley to prevent Milo Yiannopoulos from speaking, Déborah Blocker, associate professor of French at UC Berkeley, praised the rioters. They were “very well-organized and very efficient,” Blocker reported admiringly to her fellow professors. “They attacked property but they attacked it very sparingly, destroying just enough University property to obtain the cancellation order for the MY event and making sure no one in the crowd got hurt” (emphasis in original). (In fact, perceived Milo and Donald Trump supporters were sucker-punched and maced; businesses downtown were torched and vandalized.) New York University’s vice provost for faculty, arts, humanities, and diversity, Ulrich Baer, displayed Orwellian logic by claiming in a New York Times op-ed that shutting down speech “should be understood as an attempt to ensure the conditions of free speech for a greater group of people.”
Will non-academic institutions take up this zeal for outright censorship? Other ideological products of the left-wing academy have been fully absorbed and operationalized. Racial victimology, which drives much of the campus censorship, is now standard in government and business. Corporate diversity trainers counsel that bias is responsible for any lack of proportional racial representation in the corporate ranks. Racial disparities in school discipline and incarceration are universally attributed to racism rather than to behavior. Public figures have lost jobs for violating politically correct taboos.
Yet Americans possess an instinctive commitment to the First Amendment. Federal judges, hardly an extension of the Federalist Society, have overwhelmingly struck down campus speech codes. It is hard to imagine that they would be any more tolerant of the hate-speech legislation so prevalent in Europe. So the question becomes: At what point does the pressure to conform to the elite worldview curtail freedom of thought and expression, even without explicit bans on speech?
Social stigma against conservative viewpoints is not the same as actual censorship. But the line can blur. The Obama administration used regulatory power to impose a behavioral conformity on public and private entities. School administrators may have technically still possessed the right to dissent from novel theories of gender, but they had to behave as if they were fully on board with the transgender revolution when it came to allowing boys to use girls’ bathrooms and locker rooms.
Had Hillary Clinton been elected president, the federal bureaucracy would have mimicked campus diversocrats with even greater zeal. That threat, at least, has been avoided. Heresies against left-wing dogma may still enter the public arena, if only by the back door. The mainstream media have lurched even further left in the Trump era, but the conservative media, however mocked and marginalized, are expanding (though Twitter and Facebook’s censorship of conservative speakers could be a harbinger of more official silencing).
Outside the academy, free speech is still legally protected, but its exercise requires ever greater determination.
Heather Mac Donald is a fellow at the Manhattan Institute and the author of The War on Cops.
John McWhorter
There is a certain mendacity, as Brick put it in Cat on a Hot Tin Roof, in our discussion of free speech on college campuses. Namely, none of us genuinely wishes that absolutely all issues be aired in the name of education and open-mindedness. To insist that we do is to pretend that civilized humanity makes no advancement in philosophical consensus.
I doubt we need “free speech” on issues such as whether slavery and genocide are okay, whether it has been a mistake to view women as men’s equals, or whether we should revive the antique idea that whites are a master race while other peoples represent a lower rung on the Darwinian scale. With all due reverence for John Stuart Mill’s advocacy of the regular airing of even noxious views in order to reinforce clarity on why they were rejected, we are also human beings with limited time. A commitment to the Enlightenment justifiably will decree that certain views are, indeed, no longer in need of discussion.
However, our modern social-justice warriors are claiming that this no-fly zone of discussion is vaster than any conception of logic or morality justifies. We are being told that questions about the modern proposals regarding cultural appropriation, about whether even passing infelicitous statements constitute racism in the way that formalized segregation and racist disparagement did, or about whether social disparities can be due to cultural legacies rather than structural impediments are as indisputably egregious, backward, and abusive as the benighted views of the increasingly distant past.
That is, the new idea is not only that discrimination and inequality still exist, but that even to question the left’s utopian expectations on such matters justifies the same furious, sloganistic, and even physically violent resistance that was once leveled against those designated heretics by a Christian hegemony.
Of course the protesters in question do not recognize themselves in a portrait as opponents of something called heresy. They suppose that Galileo’s opponents were clearly wrong but that they, today, are actually correct in a way that no intellectual or moral argument could coherently deny.
As such, we have students allowed to decree college campuses “racist” when they are the least racist spaces on the planet—because they are, predictably given the imperfection of humans, not perfectly free of passingly unsavory interactions. Thinkers from the right rather than the left, invited to talk for a portion of an hour, have dinner with a few people, and fly home, are treated as if they were reanimated Hitlers. The student of color who hears a few white students venturing polite questions about the leftist orthodoxy is supported in casting these questions as “racist” rhetoric.
The people on college campuses who openly and aggressively spout this new version of Christian (or even Islamist) crusading—ironically justifying it as a barricade against “fascist” muzzling of freedom when the term applies ominously well to the regime they are fostering—are a minority. However, the spinning sawmill blade of their rhetoric has succeeded in rendering opposition as risky as espousing pedophilia, such that only those natively open to violent criticism dare speak out. The latter group is small. The campus consensus thereby becomes, if only at moralistic gunpoint à la the ISIS victim video, a strangled hard-leftism.
Hence freedom of speech is indeed threatened on today’s college campuses. I have lost count of how many of my students, despite being liberal Democrats (many of whom sobbed at Hillary Clinton’s loss last November), have told me that they are afraid to express their opinions about issues that matter, despite the fact that their opinions are ones that any liberal or even leftist person circa 1960 would have considered perfectly acceptable.
Something has shifted of late, and not in a direction we can legitimately consider forward.
John McWhorter teaches linguistics, philosophy, and music history at Columbia University and is the author of The Language Hoax, Words on the Move, and Talking Back, Talking Black.
Kate Bachelder Odell
It’s 2021, and Harvard Square has devolved into riots: Some 120 people are injured in protests, and the carnage includes fire-consumed cop cars and smashed-in windows. The police discharge canisters of tear gas and, after apprehending dozens of protesters, enforce a 1:45 A.M. curfew. Anyone roaming the streets after hours is subject to arrest. About 2,000 National Guardsmen are prepared to intervene. Such violence and disorder are also roiling Berkeley and other elite and educated areas.
Oh, that’s 1970. The details are from the Harvard Crimson’s account of “anti-war” riots that spring. The episode is instructive in considering whether free speech is under threat in the United States. Almost daily, there’s a new YouTube installment of students melting down over viewpoints of speakers invited to one campus or another. Even amid speech threats from government—for example, the IRS’s targeting of political opponents—nothing has captured the public’s attention like the end of free expression at America’s institutions of higher learning.
Yet disruption, confusion, and even violence are not new campus phenomena. And it’s hard to imagine that young adults who deployed brute force in the 1960s and ’70s were deeply committed to the open and peaceful exchange of ideas.
There may also be reason for optimism. The rough-and-tumble on campus in the 1960s and ’70s produced a more even-tempered ’80s and ’90s, and colleges are probably heading for another course correction. In covering the ruckuses at Yale, Missouri, and elsewhere, I’ve talked to professors and students who are figuring out how to respond to the illiberalism, even if the reaction is delayed. The University of Chicago put out a set of free-speech principles last year, and other schools such as Princeton and Purdue have endorsed them.
The NARPs—Non-Athletic Regular People, as they are sometimes known on campus—still outnumber the social-justice warriors, who appear to be overplaying their hand. A case in point is the University of Missouri, which experienced a precipitous drop in enrollment after instructor Melissa Click and her ilk stoked racial tensions last spring. The college has closed dorms and trimmed budgets. Which brings us to another silver lining: The economic model of higher education (exorbitant tuition to pay ever more administrators) may blow up traditional college before the fascists can.
Note also that the anti-speech movement is run by rich kids. A Brookings Institution analysis from earlier this year discovered that “the average enrollee at a college where students have attempted to restrict free speech comes from a family with an annual income $32,000 higher than that of the average student in America.” Few rank higher in average income than those at Middlebury College, where students evicted scholar Charles Murray in a particularly ugly scene. (The report notes that Murray was received respectfully at Saint Louis University, “where the median income of students’ families is half Middlebury’s.”) The impulses of over-adulated 20-year-olds may soon be tempered by the tyranny of having to show up for work on a daily basis.
None of this is to suggest that free speech is enjoying some renaissance either on campus or in America. But perhaps as the late Wall Street Journal editorial-page editor Robert Bartley put it in his valedictory address: “Things could be worse. Indeed, they have been worse.”
Kate Bachelder Odell is an editorial writer for the Wall Street Journal.
Jonathan Rauch
Is free speech under threat? The one-syllable answer is “yes.” The three-syllable answer is: “Yes, of course.” Free speech is always under threat, because it is not only the single most successful social idea in all of human history but also the single most counterintuitive. “You mean to say that speech that is offensive, untruthful, malicious, seditious, antisocial, blasphemous, heretical, misguided, or all of the above deserves government protection?” That seemingly bizarre proposition is defensible only on the grounds that the marketplace of ideas turns out to be the most powerful engine of knowledge, prosperity, liberty, social peace, and moral advancement that our species has had the good fortune to discover.
Every new generation of free-speech advocates will need to get up every morning and re-explain the case for free speech and open inquiry—today, tomorrow, and forever. That is our lot in life, and we just need to be cheerful about it. At discouraging moments, it is helpful to remember that the country has made great strides toward free speech since 1798, when the Adams administration arrested and jailed its political critics; and since the 1920s, when the U.S. government banned and burned James Joyce’s great novel Ulysses; and since 1954, when the government banned ONE, a pioneering gay journal. (The cover article was a critique of the government’s indecency censors, who censored it.) None of those things could happen today.
I suppose, then, the interesting question is: What kind of threat is free speech under today? In the present age, direct censorship by government bodies is rare. Instead, two more subtle challenges hold sway, especially, although not only, on college campuses. The first is a version of what I called, in my book Kindly Inquisitors, the humanitarian challenge: the idea that speech that is hateful or hurtful (in someone’s estimation) causes pain and thus violates others’ rights, much as physical violence does. The other is a version of what I called the egalitarian challenge: the idea that speech that denigrates minorities (again, in someone’s estimation) perpetuates social inequality and oppression and thus also is a rights violation. Both arguments call upon administrators and other bureaucrats to defend human rights by regulating speech rights.
Both doctrines are flawed to the core. Censorship harms minorities by enforcing conformity and entrenching majority power, and it no more ameliorates hatred and injustice than smashing thermometers ameliorates global warming. If unwelcome words are the equivalent of bludgeons or bullets, then the free exchange of criticism—science, in other words—is a crime. I could go on, but suffice it to say that the current challenges are new variations on ancient themes—and they will be followed, in decades and centuries to come, by many, many other variations. Memo to free-speech advocates: Our work is never done, but the really amazing thing, given the proposition we are tasked to defend, is how well we are doing.
Jonathan Rauch is a senior fellow at the Brookings Institution and the author of Kindly Inquisitors: The New Attacks on Free Thought.
Nicholas Quinn Rosenkranz
Speech is under threat on American campuses as never before. Censorship in various forms is on the rise. And this year, the threat to free speech on campus took an even darker turn, toward actual violence. The prospect of Milo Yiannopoulos speaking at Berkeley provoked riots that caused more than $100,000 worth of property damage on the campus. The prospect of Charles Murray speaking at Middlebury led to a riot that put a liberal professor in the hospital with a concussion. Ann Coulter’s speech at Berkeley was canceled after the university determined that none of the appropriate venues could be protected from “known security threats” on the date in question.
The free-speech crisis on campus is caused, at least in part, by a more insidious campus pathology: the almost complete lack of intellectual diversity on elite university faculties. At Yale, for example, the number of registered Republicans in the economics department is zero; in the psychology department, there is one. Overall, there are 4,410 faculty members at Yale, and the total number of those who donated to a Republican candidate during the 2016 primaries was three.
So when today’s students purport to feel “unsafe” at the mere prospect of a conservative speaker on campus, it may be easy to mock them as “delicate snowflakes,” but in one sense, their reaction is understandable: If students are shocked at the prospect of a Republican behind a university podium, perhaps it is because many of them have never before laid eyes on one.
To see the connection between free speech and intellectual diversity, consider the recent commencement speech of Harvard President Drew Gilpin Faust:
Universities must be places open to the kind of debate that can change ideas. . . . Silencing ideas or basking in intellectual orthodoxy independent of facts and evidence impedes our access to new and better ideas, and it inhibits a full and considered rejection of bad ones. . . . We must work to ensure that universities do not become bubbles isolated from the concerns and discourse of the society that surrounds them. Universities must model a commitment to the notion that truth cannot simply be claimed, but must be established—established through reasoned argument, assessment, and even sometimes uncomfortable challenges that provide the foundation for truth.
Faust is exactly right. But, alas, her commencement audience might be forgiven a certain skepticism. After all, the number of registered Republicans in several departments at Harvard—e.g., history and psychology—is exactly zero. In those departments, the professors themselves may be “basking in intellectual orthodoxy” without ever facing “uncomfortable challenges.” This may help explain why some students will do everything in their power to keep conservative speakers off campus: They notice that faculty hiring committees seem to do exactly the same thing.
In short, it is a promising sign that true liberal academics like Faust have started speaking eloquently about the crucial importance of civil, reasoned disagreement. But they will be more convincing on this point when they hire a few colleagues with whom they actually disagree.
Nicholas Quinn Rosenkranz is a professor of law at Georgetown. He serves on the executive committee of Heterodox Academy, which he co-founded, on the board of directors of the Federalist Society, and on the board of directors of the Foundation for Individual Rights in Education (FIRE).
Ben Shapiro
In February, I spoke at California State University, Los Angeles. Before my arrival, professors informed students that a white supremacist would be descending on the school to preach hate; threats of violence soon prompted the administration to cancel the event. I vowed to show up anyway. One hour before the event, the administration backed down and promised that the event could go forward, but police officers were told not to stop the 300 students, faculty, and outside protesters who blocked and assaulted those who attempted to attend the lecture. We ended up trapped in the auditorium, with the authorities telling students not to leave for fear of physical violence. I was rushed from campus under armed police guard.
Is free speech under assault?
Of course it is.
On campus, free speech is under assault thanks to a perverse ideology of intersectionality that claims victim identity is of primary value and views are merely a secondary concern. As a corollary, if your views offend someone who outranks you on the intersectional hierarchy, your views are treated as violence—threats to identity itself. On campus, statements that offend an individual’s identity have been treated as “microaggressions”: actual aggressions against another, ostensibly worthy of violence. Words, students have been told, may not break bones, but they will prompt sticks and stones, and rightly so.
Thus, protesters around the country—leftists who see verbiage as violence—have, in turn, used violence in response to ideas they hate. Leftist local authorities then use the threat of violence as an excuse to discriminate ideologically against conservatives. This means public intellectuals like Charles Murray being run off campus and his leftist professorial cohort viciously assaulted; it means Ann Coulter being targeted for violence at Berkeley; it means universities preemptively banning me and Ayaan Hirsi Ali and Condoleezza Rice and even Jason Riley.
The campus attacks on free speech are merely the most extreme iteration of an ideology that spans left and right: the notion that your right to free speech ends where my feelings begin. Even Democrats who say that Ann Coulter should be allowed to speak at Berkeley say that nobody should be allowed to contribute to a super PAC (unless he’s a union member, naturally).
Meanwhile, on the right, the president’s attacks on the press have convinced many Republicans that restrictions on the press wouldn’t be altogether bad. A Vanity Fair/60 Minutes poll in late April found that 36 percent of Americans thought freedom of the press “does more harm than good.” Undoubtedly, some of that is due to the media’s obvious bias. CNN’s Jeff Zucker has targeted the Trump administration for supposedly quashing journalism, but he was silent when the Obama administration’s Department of Justice cracked down on reporters from the Associated Press and Fox News, and when hacks like Deputy National Security Adviser Ben Rhodes openly sold lies regarding Iran. But for some on the right, the response to press falsities hasn’t been to call for truth, but to instead echo Trumpian falsehoods in the hopes of damaging the media. Free speech is only important when people seek the truth. Leftists traded truth for tribalism long ago; in response, many on the right seem willing to do the same. Until we return to a common standard under which facts matter, free speech will continue to rest on tenuous grounds.
Ben Shapiro is the editor in chief of The Daily Wire and the host of The Ben Shapiro Show.
Judith Shulevitz
It’s tempting to blame college and university administrators for the decline of free speech in America, and for years I did just that. If the guardians of higher education won’t inculcate the habits of mind required for serious thinking, I thought, who will? The unfettered but civil exchange of ideas is the basic operation of education, just as addition is the basic operation of arithmetic. And universities have to teach both the unfettered part and the civil part, because arguing in a respectful manner isn’t something anyone does instinctively.
So why change my mind now? Schools still cling to speech codes, and there still aren’t enough deans like the one at the University of Chicago who declared his school a safe-space-free zone. My alma mater just handed out prizes for “enhancing race and/or ethnic relations” to two students caught on video harassing the dean of their residential college, one screaming at him that he’d created “a space for violence to happen,” the other placing his face inches away from the dean’s and demanding, “Look at me.” All this because they deemed a thoughtful if ill-timed letter about Halloween costumes written by the dean’s wife to be an act of racist aggression. Yale should discipline students who behave like that, even if they’re right on the merits (I don’t think they were, but that’s not the point). They certainly don’t deserve awards. I can’t believe I had to write that sentence.
But in abdicating their responsibilities, the universities have enabled something even worse than an attack on free speech. They’ve unleashed an assault on themselves. There’s plenty of free speech around; we know that because so much bad speech—low-minded nonsense—tests our constitutional tolerance daily, and that’s holding up pretty well. (As Nicholas Lemann observes elsewhere in this symposium, Facebook and Google represent bigger threats to free speech than students and administrators.) What’s endangered is good speech.
Universities have set themselves up to be used. Provocateurs exploit the atmosphere on campus to goad overwrought students, then gleefully trash the most important bastion of our crumbling civil society. Higher education and everything it stands for—logical argument, the scientific method, epistemological rigor—start to look illegitimate. Voters perceive tenure and research and higher education itself as hopelessly partisan and unworthy of taxpayers’ money.
The press is a secondary victim of this process of delegitimization. If serious inquiry can be waved off as ideology, then facts won’t be facts and reporting can’t be trusted. All journalism will be equal to all other journalism, and all journalists will be reduced to pests you can slam to the ground with near impunity. Politicians will be able to say anything and do just about anything and there will be no countervailing authority to challenge them. I’m pretty sure that that way lies Putinism and Erdoganism. And when we get to that point, I’m going to start worrying about free speech again.
Judith Shulevitz is a critic in New York.
Harvey Silverglate
Free speech is, and has always been, threatened. The title of Nat Hentoff’s 1993 book Free Speech for Me—But Not for Thee is no less true today than at any time, even as the Supreme Court has accorded free speech a more absolute degree of protection than in any previous era.
Since the 1980s, the high court has decided most major free-speech cases in favor of speech, with most of the major decisions being unanimous or nearly so.
Women’s-rights advocates were turned back by the high court in 1986 when they sought to ban the sale of printed materials that some deemed pornographic and alleged to promote violence against women. Censorship in the name of gender-based protection thus failed to gain traction.
Despite the demands of civil-rights activists, the Supreme Court in 1992 declared cross-burning to be a protected form of expression in R.A.V. v. City of St. Paul, a holding the Court later qualified with a narrow exception for cross-burning carried out primarily as a physical threat rather than as an expression of hatred.
Other attempts at First Amendment circumvention have been met with equally decisive rebuff. When the Reverend Jerry Falwell sued Hustler magazine publisher Larry Flynt for defamation growing out of a parody depicting Falwell’s first sexual encounter as a drunken tryst with his mother in an outhouse, a unanimous Supreme Court lectured on the history of parody as a constitutionally protected, even if cruel, form of social and political criticism.
When the South Boston Allied War Veterans, sponsor of Boston’s Saint Patrick’s Day parade, sought to exclude a gay veterans’ group from marching under its own banner, the high court unanimously held that as a private entity, even though marching in public streets, the Veterans could exclude any group marching under a banner conflicting with the parade’s socially conservative message, notwithstanding public-accommodations laws. The gay group could have its own parade but could not rain on that of the conservatives.
Despite such legal clarity, today’s most potent attacks on speech are coming, ironically, from liberal-arts colleges. Ubiquitous “speech codes” limit speech that might insult, embarrass, or “harass,” in particular, members of “historically disadvantaged” groups. “Safe spaces” and “trigger warnings” protect purportedly vulnerable students from hearing words and ideas they might find upsetting. Student demonstrators and threats of violence have forced the cancellation of controversial speakers, left and right.
It remains unclear how much campus censorship results from politically correct faculty, control-obsessed student-life administrators, or students socialized and indoctrinated into intolerance. My experience suggests that the bureaucrats are primarily, although not entirely, to blame. When sued, colleges either lose or settle, pay a modest amount, and then return to their censorious ways.
This trend threatens the heart and soul of liberal education. Eventually it could infect the entire society as these students graduate and assume influential positions. Whether a resulting flood of censorship ultimately overcomes legal protections and weakens democracy remains to be seen.
Harvey Silverglate, a Boston-based lawyer and writer, is the co-author of The Shadow University: The Betrayal of Liberty on America’s Campuses (Free Press, 1998). He co-founded the Foundation for Individual Rights in Education in 1999 and is on FIRE’s board of directors. He spent some three decades on the board of the ACLU of Massachusetts, two of those years as chairman. Silverglate taught at Harvard Law School for a semester during a sabbatical he took in the mid-1980s.
Christina Hoff Sommers
When Heather Mac Donald’s “blue lives matter” talk was shut down by a mob at Claremont McKenna College, the president of neighboring Pomona College sent out an email defending free speech. Twenty-five students shot back a response: “Heather Mac Donald is a fascist, a white supremacist . . . classist, and ignorant of interlocking systems of domination that produce the lethal conditions under which oppressed peoples are forced to live.”
Some blame the new campus intolerance on hypersensitive, over-trophied millennials. But the students who signed that letter don’t appear to be fragile. Nor do those who recently shut down lectures at Berkeley, Middlebury, DePaul, and Cal State LA. What they are is impassioned. And their passion is driven by a theory known as intersectionality.
Intersectionality is the source of the new preoccupation with microaggressions, cultural appropriation, and privilege-checking. It’s the reason more than 200 colleges and universities have set up Bias Response Teams. Students who overhear potentially “otherizing” comments or jokes are encouraged to make anonymous reports to their campus BRTs. A growing number of professors and administrators have built their careers around intersectionality. What is it exactly?
Intersectionality is a neo-Marxist doctrine that views racism, sexism, ableism, heterosexism, and all forms of “oppression” as interconnected and mutually reinforcing. Together these “isms” form a complex arrangement of advantages and burdens. A white woman is disadvantaged by her gender but advantaged by her race. A Latino is burdened by his ethnicity but privileged by his gender. According to intersectionality, American society is a “matrix of domination,” with affluent white males in control. Not only do they enjoy most of the advantages, they also determine what counts as “truth” and “knowledge.”
But marginalized identities are not without resources. According to one of intersectionality’s leading theorists, Patricia Hill Collins (a former president of the American Sociological Association), disadvantaged groups have access to deeper, more liberating truths. To find their voice, and to enlighten others to the true nature of reality, they require a safe space—free of microaggressive put-downs and imperious cultural appropriations. Here they may speak openly about their “lived experience.” Lived experience, according to intersectional theory, is a better guide to the truth than self-serving Western and masculine styles of thinking. So don’t try to refute intersectionality with logic or evidence: That only proves that you are part of the problem it seeks to overcome.
How could comfortably ensconced college students be open to a convoluted theory that describes their world as a matrix of misery? Don’t they flinch when they hear intersectional scholars like bell hooks refer to the U.S. as an “imperialist, white-supremacist, capitalist patriarchy”? Most take it in stride because such views are now commonplace in high-school history and social studies texts. And the idea that knowledge comes from lived experience rather than painstaking study and argument is catnip to many undergrads.
Silencing speech and forbidding debate is not an unfortunate by-product of intersectionality—it is a primary goal. How else do you dismantle a lethal system of oppression? As the protesting students at Claremont McKenna explained in their letter: “Free speech . . . has given those who seek to perpetuate systems of domination a platform to project their bigotry.” To the student activists, thinkers like Heather Mac Donald and Charles Murray are agents of the dominant narrative, and their speech is “a form of violence.”
It is hard to know how our institutions of higher learning will find their way back to academic freedom, open inquiry, and mutual understanding. But as long as intersectional theory goes unchallenged, campus fanaticism will intensify.
Christina Hoff Sommers is a resident scholar at the American Enterprise Institute. She is the author of several books, including Who Stole Feminism? and The War Against Boys. She also hosts The Factual Feminist, a video blog. @Chsommers
John Stossel
Yes, some college students do insane things. Some called police when they saw “Trump 2016” chalked on sidewalks. The vandals at Berkeley and the thugs who assaulted Charles Murray are disgusting. But they are a minority. And these days people fight back.
Someone usually videotapes the craziness. Yale’s “Halloween costume incident” drove away two sensible instructors, but videos mocking Yale’s snowflakes, like “Silence U,” make such abuse less likely. Groups like Young America’s Foundation (YAF) publicize censorship, and the Foundation for Individual Rights in Education (FIRE) sues schools that restrict speech.
Consciousness has been raised. On campus, the worst is over. Free speech has always been fragile. I once took cameras to Seton Hall Law School right after a professor gave a lecture on free speech. Students seemed to get the concept. Sean, now a lawyer, said, “Protect freedom for thought we hate; otherwise you never have a society where ideas clash, and we come up with the best idea.” So I asked, “Should there be any limits?” Students listed “fighting words,” “shouting fire in a theater,” malicious libel, etc.—reasonable court-approved exceptions. But then they went further. Several wanted bans on “hate” speech. “No value comes out of hate speech,” said Javier. “It inevitably leads to violence.”
No, it doesn’t, I argued. “Also, doesn’t hate speech bring ideas into the open, so you can better argue about them, bringing you to the truth?”
“No,” replied Floyd. “With hate speech, more speech is just violence.”
So I pulled out a big copy of the First Amendment and wrote, “exception: hate speech.”
Two students wanted a ban on flag desecration “to respect those who died to protect it.”
One wanted bans on blasphemy:
“Look at the gravity of the harm versus the value in blasphemy—the harm outweighs the value.”
Several wanted a ban on political speech by corporations because of “the potential for large corporations to improperly influence politicians.”
Finally, Jillian, also now a lawyer, wanted hunting videos banned.
“It encourages harm down the road.”
I asked her, incredulously, “You’re comfortable locking up people who make a hunting film?”
“Oh, yeah,” she said. “It’s unnecessary cruelty to feeling and sentient beings.”
So, I picked up my copy of the Bill of Rights again. After “no law . . . abridging freedom of speech,” I added: “Except hate speech, flag burning, blasphemy, corporate political speech, depictions of hunting . . . ”
That embarrassed them. “We may have gone too far,” said Sean. Others agreed. One said, “Cross out the exceptions.” Free speech survived, but it was a close call. Respect for unpleasant speech will always be thin. Then-Senator Hillary Clinton wanted violent video games banned. John McCain and Russ Feingold tried to ban political speech. Donald Trump wants new libel laws, and if you burn a flag, he tweeted, consequences might be “loss of citizenship or a year in jail!” Courts or popular opinion killed those bad ideas.
Free speech will survive, assuming those of us who appreciate it use it to fight those who would smother it.
John Stossel is a FOX News/FOX Business Network Contributor.
Warren Treadgold
Even citizens of dictatorships are free to praise the regime and to talk about the weather. The only speech likely to be threatened anywhere is the sort that offends an important and intolerant group. What is new in America today is a leftist ideology that threatens speech precisely because it offends certain important and intolerant groups: feminists and supposedly oppressed minorities.
So far this new ideology is clearly dominant only in colleges and universities, where it has become so strong that most controversies concern outside speakers invited by students, not faculty speakers or speakers invited by administrators. Most academic administrators and professors are either leftists or have learned not to oppose leftism; otherwise they would probably never have been hired. Administrators treat even violent leftist protestors with respect and are ready to prevent conservative and moderate outsiders from speaking rather than provoke protests. Most professors who defend conservative or moderate speakers argue that the speakers’ views are indeed noxious but say that students should be exposed to them to learn how to refute them. This is very different from encouraging a free exchange of ideas.
Although the new ideology began on campuses in the ’60s, it gained authority outside them largely by means of several majority decisions of the Supreme Court, from Roe (1973) to Obergefell (2015). The Supreme Court decisions that endanger free speech are based on a presumed consensus of enlightened opinion that certain rights favored by activists have the same legitimacy as rights explicitly guaranteed by the Constitution—or even more legitimacy, because the rights favored by activists are assumed to be so fundamental that they need no grounding in specific constitutional language. The Court majorities found restricting abortion rights or homosexual marriage, as large numbers of Americans wish to do, to be constitutionally equivalent to restricting black voting rights or interracial marriage. Any denial of such equivalence therefore opposes fundamental constitutional rights and can be considered hate speech, advocating psychological and possibly physical harm to groups like women seeking abortions or homosexuals seeking approval. Such speech may still be constitutionally protected, but acting upon it is not.
This ideology of forbidding allegedly offensive speech has spread to most of the Democratic Party and the progressive movement. Rather than seeing themselves as taking one side in a free debate, progressives increasingly argue (for example) that opposing abortion is offensive to women and supporting the police is offensive to blacks. Some politicians object so strongly to such speech that despite their interest in winning votes, they attack voters who disagree with them as racists or sexists. Expressing views that allegedly discriminate against women, blacks, homosexuals, and various other minorities can now be grounds for a lawsuit.
Speech that supposedly offends women or minorities has already cost some people their careers, their businesses, and their opportunities to deliver or hear speeches. Such intimidation is the intended result of an ideology that threatens free speech.
Warren Treadgold is a professor of history at Saint Louis University.
Matt Welch
Like a sullen zoo elephant rocking back and forth from leg to leg, there is an oversized paradox we’d prefer not to see standing smack in the sightlines of most of our policy debates. Day by day, even minute by minute, America simultaneously gets less free in the laboratory but more free in the field. Individuals are constantly expanding the limits and applications of their own autonomy, even as government transcends prior restraints on how far it can reach into our intimate business.
So it is that the Internal Revenue Service can charge foreign banks with collecting taxes on U.S. citizens (therefore causing global financial institutions to shun many of the estimated 6 million-plus Americans who live abroad), even while block-chain virtuosos make illegal transactions wholly undetectable to authorities. It has never been easier for Americans to travel abroad, and it’s never been harder to enter the U.S. without showing passports, fingerprints, retinal scans, and even social-media passwords.
What’s true for banking and tourism is doubly true for free speech. Social media has given everyone not just a platform but a megaphone (as unreadable as our Facebook timelines have all become since last November). At the same time, the federal government during this unhappy 21st century has continuously ratcheted up prosecutorial pressure against leakers, whistleblowers, investigative reporters, and technology companies.
A hopeful bulwark against government encroachment unique to the free-speech field is the Supreme Court’s very strong First Amendment jurisprudence in the past decade or two. Donald Trump, like Hillary Clinton before him, may prattle on about locking up flag-burners, but Antonin Scalia and the rest of SCOTUS protected such expression back in 1990. Barack Obama and John McCain (and Hillary Clinton—she’s as bad as any recent national politician on free speech) may lament the Citizens United decision, but it’s now firmly legal to broadcast unfriendly documentaries about politicians without fear of punishment, no matter the electoral calendar.
But in this very strength lies what might be the First Amendment’s most worrying vulnerability. Barry Friedman, in his 2009 book The Will of the People, made the persuasive argument that the Supreme Court typically ratifies, post facto, where public opinion has already shifted. Today’s culture of free speech could be tomorrow’s legal framework. If so, we’re in trouble.
For evidence of free-speech slippage, just read around you. When both major-party presidential nominees react to terrorist attacks by calling to shut down corners of the Internet, and when their respective supporters are actually debating the propriety of sucker punching protesters they disagree with, it’s hard to escape the conclusion that our increasingly shrill partisan sorting is turning the very foundation of post-1800 global prosperity into just another club to be swung in our national street fight.
In the eternal cat-and-mouse game between private initiative and government control, the former is always advantaged by the latter’s fundamental incompetence. But what if the public willingly hands government the power to muzzle? It may take a counter-cultural reformation to protect this most noble of American experiments.
Matt Welch is the editor at large of Reason.
Adam J. White
Free speech is indeed under threat on our university campuses, but the threat did not begin there and it will not end there. Rather, the campus free-speech crisis is a particularly visible symptom of a much more fundamental crisis in American culture.
The problem is not that some students, teachers, and administrators reject traditional American values and institutions, or even that they are willing to menace or censor others who defend those values and institutions. Such critics have always existed, and they can be expected to use the tools and weapons at their disposal. The problem is that our country seems to produce too few students, teachers, and administrators who are willing or able to respond to them.
American families produce children who arrive on campus unprepared for, or uninterested in, defending our values and institutions. For our students who are focused primarily on their career prospects (if on anything at all), “[c]ollege is just one step on the continual stairway of advancement,” as David Brooks observed 16 years ago. “They’re not trying to buck the system; they’re trying to climb it, and they are streamlined for ascent. Hence they are not a disputatious group.”
Meanwhile, parents bear staggering financial burdens to get their kids through college, without a clear sense of precisely what their kids will get out of these institutions in terms of character formation or civic virtue. With so much money at stake, few can afford for their kids to pursue more than career prospects.
Those problems are not created on campus, but they are exacerbated there, as too few college professors and administrators see their institutions as cultivators of American culture and republicanism. Confronted with activists’ rage, they offer no competing vision of higher education—let alone a compelling one.
Ironically, we might borrow a solution from the Left. Where progressives would leverage state power in service of their health-care agenda, we could do the same for education. State legislatures and governors, recognizing the present crisis, should begin to reform and renegotiate the fundamental nature of state universities. By making state universities more affordable, more productive, and more reflective of mainstream American values, they will attract students—and create incentives for competing private universities to follow suit.
Let’s hope they do it soon, for what’s at stake is much more than just free speech on campus, or even free speech writ large. In our time, as in Tocqueville’s, “the instruction of the people powerfully contributes to the support of a democratic republic,” especially “where instruction which awakens the understanding is not separated from moral education which amends the heart.” We need our colleges to cultivate—not cut down—civic virtue and our capacity for self-government. “Republican government presupposes the existence of these qualities in a higher degree than any other form,” Madison wrote in Federalist 55. If “there is not sufficient virtue among men for self-government,” then “nothing less than the chains of despotism” can restrain us “from destroying and devouring one another.”
Adam J. White is a research fellow at the Hoover Institution.
Cathy Young
A writer gets expelled from the World Science Fiction Convention for criticizing the sci-fi community’s preoccupation with racial and gender “inclusivity” while moderating a panel. An assault on free speech, or an exercise of free association? How about when students demand the disinvitation of a speaker—or disrupt the speech? When a critic of feminism gets banned from a social-media platform for unspecified “abuse”?
Such questions are at the heart of many recent free-speech controversies. There is no censorship by government; but how concerned should we be when private actors effectively suppress unpopular speech? Even in the freest society, some speech will—and should—be considered odious and banished to unsavory fringes. No one weeps for ostracized Holocaust deniers or pedophilia apologists.
But shunned speech needs to remain a narrow exception—or acceptable speech will inexorably shrink. As current Federal Communications Commission chairman Ajit Pai cautioned last year, First Amendment protections will be hollowed out unless undergirded by cultural values that support a free marketplace of ideas.
Sometimes, attacks on speech come from the right. In 2003, an Iraq War critic, reporter Chris Hedges, was silenced at Rockford College in Illinois by hecklers who unplugged the microphone and rushed the stage; some conservative pundits defended this as robust protest. Yet the current climate on the left—in universities, on social media, in “progressive” journalism, in intellectual circles—is particularly hostile to free expression. The identity-politics left, fixated on subtle oppressions embedded in everyday attitudes and language, sees speech-policing as the solution.
Is hostility to free-speech values on the rise? New York magazine columnist Jesse Singal argues that support for restrictions on public speech offensive to minorities has remained steady, and fairly high, since the 1970s. Perhaps. But the range of what qualifies as offensive—and which groups are to be shielded—has expanded dramatically. In our time, a leading liberal magazine, the New Republic, can defend calls to destroy a painting of lynching victim Emmett Till because the artist is white and guilty of “cultural appropriation,” and a feminist academic journal can be bullied into apologizing for an article on transgender issues that dares to mention “male genitalia.”
There is also a distinct trend of “bad” speech being squelched by coercion, not just disapproval. That includes the incidents at Middlebury College in Vermont and at Claremont McKenna in California, where mobs not only prevented conservative speakers—Charles Murray and Heather Mac Donald—from addressing audiences but physically threatened them as well. It also includes the use of civil-rights legislation to enforce goodthink in the workplace: Businesses may face stiff fines if they don’t force employees to call a “non-binary” co-worker by the singular “they,” even when talking among themselves.
These trends make a mockery of liberalism and enable the kind of backlash we have seen with Donald Trump’s election. But the backlash can bring its own brand of authoritarianism. It’s time to start rebuilding the culture of free speech across political divisions—a project that demands, above all, genuine openness and intellectual consistency. Otherwise it will remain, as the late, great Nat Hentoff put it, a call for “free speech for me, but not for thee.”
Cathy Young is a contributing editor at Reason.
Robert J. Zimmer
Free speech is not a natural feature of human society. Many people are comfortable with free expression for views they agree with but would withhold this privilege from those they deem offensive. People justify such restrictions by various means: the appeal to moral certainty, political agendas, demand for change, opposing change, retaining power, resisting authority, or, more recently, not wanting to feel uncomfortable. Moral certainty about one’s views or a willingness to indulge one’s emotions makes it easy to assert that others are doing true damage or creating unacceptable offense simply by presenting a fundamentally different perspective.
The resulting challenges to free expression may come in the form of laws, threats, pressure (whether societal, group, or organizational), or self-censorship in the face of a prevailing consensus. Specific forms of challenge may be more or less pronounced as circumstances vary. But the widespread temptation to consider the silencing of “objectionable” viewpoints as acceptable implies that the challenge to free expression is always present.
The United States today is no exception. We benefit from the First Amendment, which asserts that the government shall make no law abridging the freedom of speech. However, fostering a society supporting free expression involves matters far beyond the law. The ongoing and increasing demonization of one group by another creates a political and social environment conducive to suppressing speech. Even violent acts opposing speech can become acceptable or encouraged. Such behavior is evident at both political rallies and university events. Our greatest current threat to free expression is the emergence of a national culture that accepts the legitimacy of suppression of speech deemed objectionable by a segment of the population.
University and college campuses present a particularly vivid instance of this cultural shift. There have been many well-publicized episodes of speakers being disinvited or prevented from speaking because of their views. However, the problem is much deeper, as there is significant self-censorship on many campuses. Both faculty and students sometimes find themselves silenced by social and institutional pressures to conform to “acceptable” views. Ironically, the very mission of universities and colleges to provide a powerful and deeply enriching education for their students demands that they embrace and protect free expression and open discourse. Failing to do so significantly diminishes the quality of the education they provide.
My own institution, the University of Chicago, through the words and actions of its faculty and leaders since its founding, has asserted the importance of free expression and its essential role in embracing intellectual challenge. We continue to do so today as articulated by the Chicago Principles, which strongly affirm that “the University’s fundamental commitment is to the principle that debate or deliberation may not be suppressed because the ideas put forth are thought by some or even by most members of the University community to be offensive, unwise, immoral, or wrong-headed.” It is only in such an environment that universities can fulfill their own highest aspirations and provide leadership by demonstrating the value of free speech within society more broadly. A number of universities have joined us in reinforcing these values. But it remains to be seen whether the faculty and leaders of many institutions will truly stand up for these values, and in doing so provide a model for society as a whole.
Robert J. Zimmer is the president of the University of Chicago.