The case against the imperial judiciary has been a staple of conservative polemics since at least the 1973 Supreme Court decision in Roe v. Wade. By the late 1990’s, the editors of First Things magazine were suggesting that the process had reached crisis proportions and was leading to “the end of democracy.”
But conservatives have not been alone in their anxiety. In the wake of the Supreme Court decision that brought the 2000 presidential election to an end, and of a series of cases concerning federalism and the “sovereign” dignity of the states, many liberals have also inveighed against the judicial usurpation of legislative prerogatives and the substitution of naked politics for the restraints and procedures of law. On both the Left and the Right, some have called for limiting or even abolishing the Justices’ power to declare laws unconstitutional.
The Court’s recent rulings on affirmative action (Grutter v. Bollinger and Gratz v. Bollinger) and sodomy (Lawrence v. Texas) have imparted fresh life to these themes. Especially but not exclusively on the Right, observers have once again decried the activism of judges who ignore both constitutional precedent and the claims of elected lawmakers in order to pursue their own policy ends. The specter has been raised of, if not the end of democracy, at least the end of constitutional law.
In an effort to assess current attitudes toward the judiciary and its place in American democracy, the editors of COMMENTARY asked a group of prominent intellectuals and scholars to address the following questions:
- Have recent rulings by the Supreme Court subverted fundamental elements of our constitutional order? If so, exactly how grave is the situation, and is responsibility to be laid equally at the feet of liberal and conservative Justices?
- Controversial court decisions have been rationalized by appeals to an “emerging” democratic consensus or (as in Lawrence) to human-rights norms elsewhere in the world. Is there any legitimacy to this development? In deciding constitutional questions, are there circumstances in which the Supreme Court is justified in reaching beyond its own precedents and the Constitution itself?
- Do you see any merit in proposals to limit the power of the Court? More broadly, what (if anything) should be done to contain or roll back the imperial judiciary?
The responses, thirteen in all, are printed below in alphabetical order.
This symposium is sponsored by the Edwin Morris Gale Memorial Fund.
Robert L. Bartley
The Supreme Court’s latest rationale for affirmative action is worth quoting, illustrative as it is of the current trend in judicial activism. Here is the key passage from Justice O’Connor’s majority opinion in Grutter v. Bollinger, the case involving the University of Michigan’s law school:
[R]acial classifications, however compelling their goals, are potentially so dangerous that they may be employed no more broadly than the interest demands. Enshrining a permanent justification for racial preferences would offend this fundamental equal-protection principle. . . .
We take the law school at its word that it would “like nothing better than to find a race-neutral admissions formula” and will terminate its race-conscious admissions program as soon as practicable. . . . We expect that 25 years from now, the use of racial preferences will no longer be necessary to further the interest approved today.
From a legal standpoint, O’Connor’s stance is ludicrous. We are issuing a decision, the Court says, but no one should take it as stare decisis. In fact, we understand that the equal-protection principle enshrined in the Fourteenth Amendment prohibits the policies we hereby sanction. But because of our view of the current social situation, this part of the Constitution is hereby suspended. We hope that the social situation will change, and when it does we hope some future Court takes the opposite view.
If I had invested my life in the law, I would find this, if not horrifying, at least disgusting. It makes a mockery of the highest legal ideals: that the law should be rooted in settled and predictable principles; that courts should follow established precedent; that, if laws need to be changed in light of new circumstances, courts should defer to elected representatives. Surely, even for those of us who see the law as one of many different institutions surrounding and supporting our democracy, society suffers when the courts deviate from these principles.
Yet, in the broader context of American history, the attitudes of the Court deserve more sympathy, even from those of us who disagree with many of its rulings. This context revolves around both the issue of race, the great stain on American history, and the specific genesis of the recent spate of judicial activism.
As recently as the mid-20th century, a hundred years after the war to eradicate the stain, an American black coming of age could by no means expect to be judged by individual merit; exceptional individuals aside, his race would be a handicap. Although society had by then made crucial efforts to curtail discrimination, those efforts had been frustrated by abuse of one of our institutions, the right of unlimited debate in the U.S. Senate. There, the filibuster had become not a right of debate but a legislative veto.
With the legislative route closed, leadership fell to the judicial branch, which acted decisively in Brown v. Board of Education (1954). But with Brown, the Warren Court also jumped feet first into social policy, and later history shows why legislative solutions are much to be preferred. In the school-busing cases, judges professing to speak from categorical principles found themselves overriding the educational mission of schools and imposing unrealistic costs on children, not only white but black.
A decade later, the legislative route having been cleared, the Voting Rights Act of 1965 removed a last barrier to equal status for blacks. The remaining issue was how long the catching-up process would take. With lingering disadvantages in educational opportunity, socio-economic attainments, role-modeling, and so forth, some kind of jump-start seemed reasonable. This was the rationale for affirmative action, an attempt to help disadvantaged blacks without establishing racial quotas or allowing group rights to override individual merit.
Of course, we all know how it has worked out. De-facto quotas have become the norm, talented blacks have come to feel that affirmative action taints their accomplishments, and political constituencies have encrusted themselves. Still, in the context sketched above, O’Connor’s opinion in Grutter seems much more reasonable. A jump-start, yes, but not a permanent division along racial lines. The issue is when to back away. She says 25 years hence; my own instinct is that a good time would have been right now. In particular, I find her reliance on the good faith of the academic community touchingly naive. But I also find it hard to assert a categorical constitutional argument against affirmative action. My case against it is, instead, prudential and pragmatic.
I also recognize that courts find it difficult to be pragmatic when themselves professing to speak in categorical constitutional principles. With Brown, the Warren Court spit out the bit of judicial restraint. Rather than restrict itself to overriding the filibuster on racial issues, it scanned American society for errors it could correct. It rewarded the crusade against religion by Madalyn Murray O’Hair, overturning school prayer and setting in motion a strain of litigation that still has the Court deciding how many Santas are needed to balance a crèche. Police forces around the nation have perhaps learned to cope with Miranda v. Arizona (1966), but not without turmoil. These cases and others surely demonstrate the high price society pays for judicial overreaching.
The most famous, and most serious, case is Roe v. Wade (1973). This had essentially no constitutional footing. Worse, it gave us the culture war. Abortion foes never had their say in the political arena; ironically, Roe mobilized evangelicals who since the Scopes trial in 1925 had eschewed involvement in politics. On the other side, abortion advocates, now armed with a constitutional imprimatur, became ever more militant, repeatedly asserting that any compromise—restrictions on “partial birth abortion,” for example—would collapse their whole edifice.
Yes, then, Roe was wrongly decided. Yes, it was the result of an imperial judiciary. Yes, it is horrifying to those who believe that life begins at conception, and that any abortion is murder. Yes, it continues to inflame our politics.
And yet. Not all Americans—indeed, nowhere close to a majority—accept that life begins at conception. What if abortion had been, properly, left to the legislative process? I very much doubt that the “at conception” definition would have been written into law. To judge by what other modern nations have done, we would have ended up with something that looks a lot like Roe’s three-trimester division.
Besides, the Supreme Court has been wrong before. In his eloquent dissent in Planned Parenthood v. Casey (1992), Justice Scalia cited the Dred Scott decision and the Court’s opposition to the New Deal. In those cases, as in Roe, the Court no doubt sacrificed some of its legitimacy, and no doubt the high ideals of the law were besmirched. But somehow the Republic survived.
The key is, as that great legal mind Mr. Dooley remarked, “th’ supreme coort follows th’ iliction returns.” Despite continuing frustration, we are already beyond the activist assertiveness of the Warren era. Whatever it lacks in legal logic, the majority opinion in Grutter is moderate in tone. It tries to put limits, in logic and time, on its own activism (although we cannot be sure those limits will hold).
In the same way, it is hard to get too excited about court decisions overturning sodomy laws. Those laws are not really meant to be enforced anyway, and as Scalia noted in his dissent in Lawrence v. Texas, many conservatives, too, would oppose them in a legislative forum. A much more serious problem is a potential Supreme Court decision upholding interstate recognition of gay marriages. Polls suggest that such a decision would cut divisively against the grain of American opinion.
The answer to this, I think, lies not in specific limitations on the judicial branch but in the general political process. By the nature of lifetime tenure, the judiciary is often the last branch to register a new era in society’s opinion. That is why the judiciary is the last hope of the liberal consensus that prevailed from FDR through JFK. A more conservative consensus has dominated presidential elections for a generation, and the Republican party has now taken a legislative majority.
That the judiciary is now the battleground is evidenced by the filibusters against judges nominated by the Bush administration. Republicans control the Senate with 51 votes, still far from the 60 needed to close debate. However, further elections loom. In the 2004 contests, nineteen Democratic seats are at stake, ten of them in “red” states carried by President Bush.
Further Republican gains, even if they did not provide all 60 votes, would certainly make filibusters more difficult and potentially expensive for Democrats. A conservative shift in the judiciary would become easier. At least as important, the gains would be heard by present judges, just as the 1936 elections produced the “switch in time that saved nine.”
In elections nothing is certain, and mine is not a guaranteed answer to the threat of an imperial judiciary. But it is a far more plausible answer, I think, than a crusade against the Court for offenses that most Americans would consider perhaps mistakes but not intolerable ones. Critics intent on changing the judiciary should put their efforts into changing the balance of public opinion and of political power.
William J. Bennett
Who will govern the governors? This question, as old as Plato, as old as the founders, was raised throughout the 1960’s on college campuses and in learned discussions about the supposed failures and pitfalls of representative government. At the time, the favored answer was: the Supreme Court—and most particularly the Supreme Court as then constituted under Chief Justice Earl Warren.
That Court, despite a number of meritorious rulings, was in fact busy crafting new law—which is to say, doing the job of the legislature. The First, Fourth, Fifth, Sixth, Eighth, and Fourteenth Amendments became battering rams against various forms of state and federal law. And when the words in those Amendments simply would not suffice for the task at hand—could not be molded to fit desired political-legal outcomes—new rights were created by Justices who, regarding themselves as in effect a new Constitutional Convention, inserted into the existing document words and concepts like “privacy” and “expression” (as in “right to privacy” or “freedom of expression”) that were somehow missing from the original.
Charles Kesler, a scholar of the founding, recently quipped that the worst place to study the Constitution these days is in law school. It is more than a quip: law schools teach constitutional analysis, by which is meant what sitting Supreme Court Justices have written about the Constitution. The Federalist Papers, the debates at the real and original Constitutional Convention, other documents from the era of the founding—these are not seriously studied. No wonder, then, that our lawyers and judges—ignorant of what Jay, Madison, and Hamilton said, wrote, and meant—are under the impression that constitutional law changes with the times and the persons empowered to pronounce on it. No wonder, too, that Supreme Court Justices can quite preposterously imagine a basis in constitutional law for asserting that, for example, “At the heart of liberty is the right to define one’s own concept of existence, of meaning, of the universe, and of the mystery of human life.”
That sentence is constitutional law, written by three sitting Justices, two appointed by President Ronald Reagan, one appointed by President George H.W. Bush. And that sentence, from Planned Parenthood v. Casey (1992), has now evolved from the aberration it seemed when first uttered into an ingrained element of our jurisprudence, its bedrock authority invoked just this past term to buttress the majority opinion in Lawrence v. Texas.
All this needs to be kept in mind when we hear or read about the damage done by the Warren Court. It especially needs to be kept in mind when we hear liberals complaining about today’s “Republican” or “conservative” Court. Yes, Nixon, Ford, Reagan, and Bush appointees make up seven of the nine Justices on the Court; but we have to thank this same Court for finding unconstitutional a Nebraska ban on partial-birth abortion, for approving racial preferences in a public-university law school, and for discovering in Lawrence v. Texas a constitutional right to sodomy.
The problem may reside in only two or three of those seven appointees. But the solution, clearly, does not rest only in electing Presidents or congressmen of one party as opposed to another. Altogether, such attempts to restrain the “imperial judiciary” have not fared well in the recent past and cannot be counted on to do better in the future. No doubt that is why, over the past two years, many in the conservative movement—or “family movement,” as some have taken to calling it—have been thinking about a constitutional amendment to preserve the concept of marriage as the union of a man and a woman; their fears have become all the more urgent in light of the prospect, now brought closer by Lawrence v. Texas, that the judiciary will next find a constitutional right to gay marriage.
It is somewhat disorienting to think that we have been driven to this: spending our time and energy in order to save, protect, and preserve the family, of all seemingly invincible institutions, from a branch of our own government. But that is precisely what many are nobly trying to do. Will they succeed? The amendment process is both slow and uncertain, and I, for one, cannot think of a single victory that anybody, conservative or liberal, has won in over forty years by resorting to it. Controversial amendments are simply too easy to stop.
The legislative route seems no more promising. Whenever Congress has tried to reassert its equal status in the constitutional order, passing legislation in order to mitigate what it deems to be judicial error, the Court has managed to reassert its arrogated supremacy by ruling as it pleases once such legislation is brought before it. Court-stripping efforts invoking Article III of the Constitution seem to me similarly will-o’-the-wisp.
The problem is deep; it has been long in gestation; and the solution cannot be a quick fix. Recently I took part in a conversation with two well-known and well-versed professors of constitutional law. The topic: how to respond to wrong-headed court rulings. Both of them, having been trained in the day when law schools were good places to learn about the Constitution, were partial to an approach that would employ delicately crafted, black-letter legal language invoking precedent and original intent. In so doing, they were, in effect, proposing to compete in one of today’s professional wrestling matches while adhering to Marquess of Queensberry rules—a lost cause.
Until the appointment of Clarence Thomas, few Justices were in the habit of citing the Declaration of Independence—the first “act of Union” according to Thomas Jefferson and James Madison, who recommended that it be so studied in the nation’s legal academies. As far as I am aware, no elite law school today teaches the Declaration of Independence as “organic law,” even though its understanding of equality and freedom is central to any proper appreciation of the Constitution. One way, therefore, to help restore the Declaration, and by extension the founders’ original intent, might be to encourage the establishment of academic centers like the James Madison program directed by Robert P. George at Princeton. This program, in effect a Princeton “sub-college,” is dedicated to training students in taking the founding seriously. It functions at the undergraduate level, but more such centers should be set up at as many law schools as possible.
Better education is not just a glib thought; it is the crux of the matter. If law schools were not a problem, we would not be speaking of an “imperial judiciary,” holding symposia on the subject, or thinking of ways from the Left or Right to curb it. And while we hope for and work toward better education, we can urge practical and political measures as well. The President needs to take up, with renewed vigor, the task of finding federal appointees who are well-vetted and well-versed in constitutional law—and who know how to defend themselves. When hostile public-interest groups or Senators try to make political hay out of a nomination, the debate should not be shunned but welcomed—and it should be joined not only on procedural grounds but on the merits of the nominee’s judicial philosophy. Conservatives have erred in the past by seeking easily confirmable nominees who have never taken a controversial position or written an opinion touching upon core conservative concerns. It is a mistake to shrink from a fight over a confirmation—after all, it is the Constitution we are fighting for.
As for today’s Supreme Court, critics might usefully take a lesson from Abraham Lincoln, who in the wake of the infamous Dred Scott ruling boldly asserted that when a decision is wanting in all appropriate “claims to the public confidence, it is not resistance, it is not factious, it is not even disrespectful, to treat it as not having yet quite established a settled doctrine for the country.” We need not bow to every decision that departs from precedent or constitutional logic. Such decisions may be viewed as the end of the matter for a particular case; they need not be viewed as the end of the matter for all time. Keeping an eye on the larger picture, we can move to redress their effects—by means of legislative responses where possible, in extreme circumstances even by amendment, but above all by going to the root of the problem and beginning seriously and steadily to undertake the long, arduous task of legal reeducation, preeminently by redirecting the attention of professors and students alike to the Constitution and the intentions of its framers. Only by succeeding at that task will we finally be able to say once again, with Hamilton, that the judiciary is indeed the branch of government “least dangerous to the political rights of the Constitution.”
It is crucial to recall, in this season of conservative discontent with the U.S. Supreme Court, that two years ago liberals were enraged by (and still fume over and plot revenge for) what they regarded as the Court’s unpardonable intervention in the 2000 Florida election controversy. Grutter and Lawrence have in common with Bush v. Gore that all are hard cases, in which respectable constitutional goods can be found on both sides of the question. At the moment, the Court is bearing the brunt of the strain that these hard cases have imposed on our constitutional order.
Bush v. Gore stemmed from a freakishly close presidential election that turned on legal challenges rooted in real silences, gaps, and ambiguities in Florida law, federal law, and the Constitution. The recent cases arise out of controversies concerning core constitutional issues—the boundaries of personal freedom and the contours of equality under the law. Although the constitutional order is holding up, a tendency to equate law with morality and politics, evident in Grutter and Lawrence and even more so in the writings of the legal scholars who have assumed the role of explaining the Supreme Court to the public, is increasing the strain.
Bush v. Gore ignited a firestorm of scholarly criticism. Almost immediately after the Court’s December 12, 2000 decision, the legal academy seemed to rise up and, almost with one voice, to denounce the 5-4 decision as lawless, undemocratic, and poisoned by conservative partisanship. To be sure, emotions were running high: the Court had never before decided a case that had the foreseeable consequence of producing a winner in a disputed presidential election, and the Court had reasons, rooted in separation-of-powers principles, for refraining from taking the case until the political process had been given more time to resolve the controversy. Nevertheless, the majority opinion identified a well-accepted constitutional principle: the equal-protection clause of the Fourteenth Amendment prohibits states from debasing or diluting citizens’ votes, or subjecting their votes to arbitrary and disparate treatment. The Court reasonably found that, in a number of ways, the recount ordered by the Florida Supreme Court violated that principle, by unconstitutionally applying either varying rules and standards or no rules and standards to the question of which ballots should be recounted as well as to the question of what was to count as a legal vote.
But, although the Court’s opinion was certainly open to criticism, few scholarly critics paused long enough in their recriminations even to state accurately the Court’s holding. With apparently good conscience, prominent scholars adopted the pose of legal analysis only to spurn legal analysis. The scholarly war against Bush v. Gore marked a culmination, decades in the making, of the politicization of legal scholarship.
Of course there is an irreducible element of moral and political judgment in adjudication. The serious question is whether judges introduce moral and political judgments in the effort to resolve the law’s silences, gaps, and ambiguities or instead invoke them as part of an end run around the law.
Unfortunately, the reasoning by which the Court justified its decision in Grutter to uphold the University of Michigan law school’s affirmative-action program appears more like an end run around the law. It is not that racial diversity is incapable of contributing to intellectual diversity, which is the form of diversity that law schools rightly pursue. Nor is it that we lack an interest as a society in having minority students attend elite law schools and then go on to occupy public positions of prominence and power. Nor again is it that the Constitution absolutely forbids states to use racial classifications. Rather, the problem is that the Court’s precedents in the area of equal-protection law required Justice O’Connor, writing for the majority, to subject to “strict scrutiny” the law school’s controversial claims both about the benefits of racial diversity in the classroom and about the actual operations of its admissions process. This, however, O’Connor declined to do.
Strict scrutiny is the most severe and searching form of equal-protection review. It is triggered when states classify on the basis of race; state actions and state-funded actions rarely withstand it. Although she purported to apply strict scrutiny, O’Connor in fact took the law school at its word, essentially accepting its characterization both of the benefits of racial diversity and of how its admissions office achieved those benefits, while refusing to take seriously the criticisms Grutter’s lawyers made of both. O’Connor thereby transformed the most severe and searching form of equal-protection review into the most deferential form. Whatever its political consequences, Grutter does not represent a creative extension of the law of equal protection but a disregard of its imperatives.
Similarly, in Lawrence, Justice Kennedy’s majority opinion seemed to follow the logic of his moral and political judgments rather than the logic of the law. True, many of these moral and political judgments have a strong basis in our fundamental beliefs about liberty. Our constitutional culture does link liberty to privacy; it does stress the sanctity of the home; it does place a premium on consent, especially the consent of adults to actions that take place in the privacy of their homes and that do not cause physical harm; and it increasingly recognizes that the intimate lives of gays and lesbians are not the law’s business. But as a matter of constitutional law, the Court’s due-process precedents required Justice Kennedy to subject the Texas statute prohibiting homosexual sodomy to “rational scrutiny.” This, however, Justice Kennedy declined to do.
Rational scrutiny is not strict scrutiny; it is, rather, the most deferential form of due-process review. It can be applied, according to the Court’s case law, unless the challenged regulation implicates a “fundamental right or liberty”—that is, a right that is “deeply rooted in this nation’s history and traditions” and “implicit in the concept of ordered liberty.” Since rational scrutiny requires only some conceivable rationale, state actions (like the Texas statute outlawing homosexual sodomy) almost always pass it. Yet while refraining from declaring the right at issue in Lawrence to be fundamental, Kennedy subjected the Texas sodomy statute to a searching review, and on that basis found it unconstitutional. Like O’Connor’s majority opinion in Grutter, Kennedy’s majority opinion in Lawrence appears to bow to the Court’s precedents while creating precedents afresh.
Constitutional law is a demanding discipline. Because it involves the application of rigorous reasoning to materials—constitutional text, structure, history, and case law—that are in many instances susceptible of competing readings, that frequently touch on our most cherished principles, and that cannot avoid debatable empirical judgments, it gives rise to hard cases involving the day’s most divisive issues. Nevertheless, some readings of constitutional law are careless, extravagant, or unsound. When Justices of the Supreme Court commit such readings in their opinions, even for good causes, the danger in the short term is that they thereby encourage the suspicion among legal scholars that when Justices do follow precedent, it is only because they find it expedient to do so.
At the moment, and despite the best (that is, the worst) efforts of legal scholars in the wake of Bush v. Gore, the Court still enjoys an honored place in public opinion. But given the formal role assigned the judiciary in our system and the informal role performed by legal scholars, one could reasonably worry that down the road, the proliferating propensity among the latter to equate law with morality and politics will impose a strain of truly dangerous proportions on our constitutional order.
Robert H. Bork
The question of whether “recent rulings by the Supreme Court [have] subverted elements of our constitutional order” has by now acquired a quaint, antique ring. Even to ask the question seems almost a piece of drollery. The Supreme Court has been beavering away at the underpinnings of the Constitution for 50 years; before that, its acts of subversion were less frequent and served a different ideology but were no less real. Prior to 1937, when FDR remade the Court, conservative Justices worked occasional miracles of transubstantiation with the Bill of Rights. The classic examples are, of course, Dred Scott (1857) and Lochner (1905), the one creating a right to hold slaves under the due-process clause of the Fifth Amendment, the other creating a right to make contracts under the due-process clause of the Fourteenth. Neither decision had any support in the actual document.
Judicial invention of new and previously unheard-of rights accelerated over the past half-century and has now reached warp speed. It is not just Grutter’s permission to discriminate against white males and Lawrence’s creation of a right to homosexual sodomy. The Court has created rights to televised sexual acts and computer-simulated child pornography and, in direct contradiction of the historical evidence, has continued its almost frenzied hostility to religion. The list of activist decisions constitutionalizing the Left-liberal cultural agenda is lengthy.
The term “activism”—the reaching of results that cannot plausibly be related to the Constitution—is used to describe this process and to serve as invective by both sides in arguments between liberals and conservatives. Though there have been conservative activists, it seems to me undeniable that the activism of the Court from the Warren era on has been overwhelmingly liberal. It is interesting that the dominant theme of the Warren Court was equality, hence its heavy reliance upon the equal-protection clause of the Fourteenth Amendment, while today’s Court stresses individualism and emphasizes the liberty component of the same Amendment’s due-process clause. Though both are misuses, the shift does correspond to the movement of Left-liberalism from concern with economic inequality to absorption with “lifestyle” freedoms.
One result of rampant activism is the decline in the intellectual quality of the Court’s opinions. Grutter and Gratz accepted the transparent falsehoods of the University of Michigan about the need for racial diversity in the student body to provide a quality education, abandoning the constitutional practice of strict scrutiny of racial classifications and utterly ignoring the flat prohibition of racial discrimination in the 1964 Civil Rights Act. Lawrence said little more than that attitudes toward homosexual sodomy have changed in the past 50 years and, citing a decision of the European Court of Human Rights, that Europe now recognizes a right to engage in it.
There is an increasing tendency for the Court to rely upon such decisions of foreign courts in creating the constitutional law of the United States. That, to put it gently, is flabbergasting. What the decisions of foreign courts have to do with what the framers and ratifiers of the U.S. Constitution understood themselves to be doing is not explained, and cannot be explained. The result of this trend, if it continues, as it seems likely to do, will be a homogenized international constitutional law reflecting the trendy views of liberal elites here and abroad.
How grave is the situation? Though ludicrous, it is extremely serious. In these and other judgments, the Court is steadily shrinking the area of self-government without any legitimate authority to do so, in the Constitution or elsewhere. In the process, it is revising the moral and cultural life of the nation. The constitutional law it is producing might as well be written by the ACLU.
That fact alone should make it clear that conservatives do not bear the responsibility; they spend their energies writing splenetic dissents and dyspeptic comments like this one. True, liberals grow apoplectic over Bush v. Gore, which they see as their one chance to convict conservatives of activism. Unfortunately for that tactic, the concurring opinion by Chief Justice Rehnquist (joined by Justices Scalia and Thomas) was solidly based on the Constitution and a federal statute, and two members of the liberal bloc on the Court agreed with the majority that the judgment of the Florida Supreme Court in Gore’s favor had to be reversed. To repeat, activism in our time is a liberal phenomenon.
If judicial activism, which means ruling contrary to the Constitution, is improper, which both sides concede, at least rhetorically, then there is no justification for any court’s reaching beyond the Constitution. Ours is a democratic polity, and the Constitution provides the sole authority any judge has for nullifying democratic choices. When there is a felt need for new law, the legislature is capable of providing it. Reaching beyond constitutional precedents, however, is another matter altogether. Judicial misinterpretation of a statute can be rectified by the legislature, but only a court can overrule an erroneous constitutional decision. Correcting a constitutional error is not judicial activism.
The framers made a fundamental mistake by creating a body of lawyers with uncheckable power. That mistake was understandable, because they had no reason to know what courts could become, but the result is that we have an untethered power that overrides democratic governance whenever the mood strikes it. There is no obvious cure for the situation. Congress’s power to make exceptions to the Supreme Court’s jurisdiction, given in Article III, provides no solution. Jurisdiction would then lodge in the state courts under Article VI and could not be removed. Many state courts have become as unrestrained and trendy as the federal courts. In short, there appears to be no way to contain the imperial judiciary.
There was a time when it was said that the Court’s improper expansion of its powers would be held in check by informed criticism from the legal profession. To the contrary, much of the profession, seeing the Court as its lever of power, urges it on to further adventures. In any case, the Court is impervious to criticism. Its attitude is that of the Arab saying, “The dogs bark but the caravan moves on.”
Few subjects engender more results-oriented hypocrisy than the role of the judiciary in a democracy and the related issues of “states’ rights” and “original intent.”
Conservatives rail against elitist courts that overrule majoritarian decisions only when these conservatives agree with the political majority and disapprove of the actions of the unelected “legislators in robes.” Liberals condemn the activism of judges who strike down legislation of which they approve, or who substitute their own politics for those of the political branches of the government.
Similarly, critics invoke states’ rights when they approve of what the states are doing and disapprove of federal intervention, but these same critics are often quick to praise the federal trumping of states’ rights when they agree with the federal government’s position on the substantive issue. Judges invoke original intent (or their interpretation of it) when doing so serves their ends, and ignore original intent when it contradicts their agendas.
This is an old story harking back to the earliest days of the Republic. It should come as no surprise that few politicians really care about the structural issues governing the division of power among the branches, and between the federal government and the states. Politicians get elected because of the immediate results demanded by their constituents, not the long-term structural arrangements debated by constitution-makers and political theorists.
But what is surprising is how many scholars and professors—with tenure, and with no need to pander to constituencies—likewise elevate results over structure, or, more subtly, use structural arguments to produce substantive results they prefer. Since legal scholars influence judicial decisions in many ways—by providing intellectual ammunition, by justifying and rationalizing decisions of which they approve, by themselves becoming judges—this latter phenomenon is extremely important, and should be studied more critically than it has been.
In my experience, most—though certainly not all—legal academics do construct and support structural arguments that produce substantive results of which they personally and politically approve. When, in the early 20th century, progressive state and federal legislatures were enacting social-welfare laws that were being struck down by conservative courts, progressive academics praised judicial restraint and the “passive virtues” of such doctrines as “standing,” “case and controversy,” “mootness,” and “political question.” Decades later, when liberal courts were striking down conservative legislation—especially in the areas of abortion, religion, criminal justice, and civil rights—liberal academics praised judicial activism, while conservatives condemned it. The Supreme Court’s activist intervention in the 2000 presidential election was condemned by liberal activists who disapproved of the result, and praised by at least some conservatives who approved of the result. Now conservatives are demanding that the federal government deny to the states the power to define marriage so as to include same-sex unions, despite the historical fact that family law has traditionally been the sole province of the states.
In light of this pervasive hypocrisy on all sides, is it actually possible to devise an approach to judicial review that is not politically or personally results-oriented? John Hart Ely, in his classic book Democracy and Distrust (1980), came closest to providing a justification for politically neutral judicial checks on popular democratic actions. The proper function of the judiciary, according to Ely’s theory, is to protect the integrity of the process by (1) clearing away the obstacles to political change erected by the majority and (2) facilitating the representation of minorities in the face of prejudice in the political marketplace. By assigning to the courts the role of opening the channels of democracy and protecting minority groups that, as a result of a “malfunctioning” in the political process, are incapable of protecting themselves, Ely confined issues of constitutional interpretation to questions of participation—rather than to the “substantive merits” of the political choices under attack.
What about original intent? There is a wonderfully resonant story in the Talmud that casts an interesting light on this contentious issue. Rabbi Eliezer was engaged in an acrimonious dispute with his fellow sages about a point of law. Eliezer “brought forward every imaginable argument, but [the others] did not accept them.” Finally, in desperation, he invoked the original intent of the Author of the Torah. Eliezer implored: “If the law agrees with me, let it be proved from heaven!” Whereupon, a heavenly voice cried: “Why do you dispute with Rabbi Eliezer, seeing that . . . the law agrees with him?” But the other rabbis rebuked God for interfering in this human dispute: “You have already written the Torah [and] we pay no heed to a heavenly voice.” The message was clear. God’s children were telling their Father: it is our job to give meaning to the Torah that You gave us. You gave us a document to interpret, and a methodology for interpreting it. Now leave us alone. And, according to the Talmud, God agreed, laughing with joy: “My . . . children have defeated Me in argument.”
No single person drafted the Constitution. Our contemporary rabbis in robes cannot call for a heavenly voice to confirm the correctness of their constructions of such terms as due process, equal protection, freedom of speech, or cruel and unusual punishment. But I wonder whether Jefferson, Madison, and Hamilton would not respond to a contemporary Eliezer’s call for authoritative interpretation by declining to interfere and by saying: “It is a constitution you must expound. We wrote its phrases long ago in a different era. Pay no attention to those who would invoke voices of certainty from the grave or the heavens.”
True, to ignore the intent of the framers would be as simpleminded as pretending to know with certainty the singular meaning of language that was probably selected, at least in part, for its capacity for redefinition over time. The paradox of the American system of judicial review—with its concomitant debate over original intent—is that you cannot live comfortably with it or without it.
An America without judicial review would be unthinkable. Although most nations, even democratic ones with traditions of liberty, do survive in its absence, the power of our courts to declare unconstitutional the actions of the other branches has become an indispensable aspect of our sovereignty.
Taken to an extreme, the power of judicial review can be transformed into an undemocratic veto by an appointed and unaccountable aristocracy. A judiciary whose interpretations of such broad concepts as due process and equal protection of the laws are unrooted in some broad historical purpose can quickly become a super-legislature, simply voting to overrule inferior legislatures and executives. But, on the other hand, a judiciary confined by the narrow visions of a past generation is a judiciary incapable of adapting to new dangers unforeseen by the framers.
There is no perfect solution to the paradox of judicial review, the conundrum of states’ rights in a federal republic, or the mystery of original intent. No one—no scholar, justice, lawyer, or layman—has discerned or devised the perfect rationale, limiting principle, or methodology for judicial review. Even Ely’s process-oriented approach raises nearly as many close questions as it answers. This imperfection has become an invitation to hypocrisy on all sides.
Lino A. Graglia
Recent rulings by the Supreme Court are instances of judicial activism that do indeed subvert the system of government contemplated by the Constitution. But this subversion has been going on for a long time.
The three basic principles of our system of government are democracy (or republicanism), i.e., popular self-government through elected representatives; federalism, i.e., decentralized power, with most social-policy decisions made at the state rather than the national level; and separation of powers. Judicial activism—which for practical purposes may be defined as rulings of unconstitutionality not required by the Constitution—amounts to rule by judges and ultimately the Supreme Court: a committee of nine lawyers, unelected and holding office for life, making policy for the nation as a whole from Washington, D.C. The resulting system of government, totally undemocratic, totally centralized, and with the judiciary performing the legislative function, is in violation of all three constitutional principles.
What has subverted our constitutional order is not just the Court’s most recent rulings but the unprecedented power of judges to invalidate as unconstitutional the acts of other officials of government. The consequences could hardly be more grave. The first significant exercise of the power of judicial review against a federal statute was the 1857 Dred Scott decision; by invalidating a political resolution of the slavery issue, this decision seemed to make the Civil War inevitable. More recent exercises of the power of judicial review have given us a system of criminal justice in which the guilt or innocence of the accused is often the least relevant consideration; busing of children in an attempt to increase racial “balance” in schools; the conversion of the abortion issue from one that was being dealt with on a state-by-state basis, with abortion generally being liberalized, into an extremely divisive topic of national moment.
The central fact about constitutional law is that it has very little to do with the Constitution, a brief document that wisely precludes very few policy choices. The great bulk of constitutional cases involve state, not federal, law. Nearly all of them purport to be based on a single provision, the Fourteenth Amendment, or rather on four words in that amendment: “due process” and “equal protection.” In fact, however, the Court’s decisions on basic issues of social policy turn not on those words at all but on the policy preferences of a majority of the Justices. Those preferences almost always mirror those of a liberal cultural elite of which the Justices are a part and, most importantly, the views of elite law-school faculties.
Virtually every one of the Court’s rulings of unconstitutionality over the past 50 years—on abortion, capital punishment, criminal procedure, busing for school racial balance, prayer in the schools, government aid to religious schools, public display of religious symbols, pornography, libel, legislative reapportionment, term limits, discrimination on the basis of sex, illegitimacy, alien status, street demonstrations, the employment of Communist-party members in schools and defense plants, vagrancy control, flag burning, and so on—have reflected the views of this same elite. In every case, the Court has invalidated the policy choice made in the ordinary political process, substituting a choice further to the political Left. Appointments to the Supreme Court and even to lower courts are now more contentious than appointments to an administrative agency or even to the Cabinet—matters of political life or death for the cultural elite—because maintaining a liberal activist judiciary is the only means of keeping policymaking out of the control of the American people.
Today’s liberal complaint—that the situation has changed under the current Rehnquist Court—is almost entirely a myth. The Rehnquist Court, constant in membership since 1994, consists of four very reliable liberals: Justices Stevens, Souter, Ginsburg, and Breyer. These four can count on being joined on most issues of basic social policy by either Justice O’Connor or Justice Kennedy or, as in Lawrence, both. Thus, the Rehnquist Court not only reaffirmed the constitutional right to abortion created in Roe v. Wade but extended it to so-called partial-birth abortions; not only refused to overrule the infamous Miranda decision but invalidated a federal statute that would have limited it; not only reaffirmed the constitutional prohibition of state-sponsored prayer in schools but extended it to prohibit a nonsectarian, student-composed invocation of the deity at a high-school graduation ceremony. It found the operation of an all-male military academy to be unconstitutional sex discrimination. In a preview of Lawrence, it overturned a provision of the Colorado constitution adopted by popular referendum that precluded special rights for homosexuals. It has now upheld race preferences and disallowed sodomy laws. If this is a conservative Court, what would a liberal Court do?
The Grutter case, while undoubtedly an important victory for liberals, must be distinguished from Lawrence and Bush v. Gore in that it upheld rather than invalidated the challenged policy choice. Actually, it is unlikely that the Fourteenth Amendment, guaranteeing certain basic civil rights to blacks, was meant to prevent a state from granting preferential treatment to blacks. But the Grutter decision is inconsistent with Brown v. Board of Education, once universally understood as prohibiting all official race discrimination; it is also inconsistent with Title VI of the 1964 Civil Rights Act, which prohibits all racial discrimination by institutions that accept federal funds. The Court’s refusals to apply the Act, first in Bakke (1978) and then in Grutter, cannot be seen as other than acts of judicial bad faith.
Liberal complaints about conservative activism by the Rehnquist Court rest almost entirely on Bush v. Gore, a series of federalism decisions, and the fear, now shown to be unfounded, that the Court would disallow race preferences. The five-to-four decision in Bush v. Gore was undoubtedly activist, but arguably consistent with the Court’s earlier interventions in the political process and justifiable as countering the liberal activism of the Florida Supreme Court. The Court’s federalism decisions, whatever their merits, are not likely to prove successful in limiting federal legislative authority.
If one accepts, along with Churchill, that democracy with all its faults is the best form of government—and even the most ardent defenders of judicial review purport to accept this—one must reject the notion that some issues of basic social policy are better decided by electorally unaccountable officials. In our system, the only rationale for unelected judges to overrule policy choices made by elected representatives is that the judges are effectuating the true will of the people as expressed in the Constitution. To accept that judges may invalidate such policy choices on other grounds is to accept not only that they may act dishonestly but, more significantly, that they are the appropriate policymakers on the issue involved. At the very least, this proposition should be openly defended, not established by a ruse. As for permitting judges to conform domestic law to foreign law, that is to abandon national sovereignty, something almost no political leader would undertake to defend. Finally, the idea of an “emerging democratic consensus” would seem to obviate any need to argue for judicial intervention in the first place.
The claim that the Court’s rulings of unconstitutionality are based on the Constitution has been patently fictional for so long as almost, one might suppose, to achieve a degree of legitimacy. Since no one can actually believe these are the commands of the Constitution, might it not be fair to assume that the Constitution has in effect been amended by popular acquiescence in the Court’s power to assign to itself the final decision on any issue it may choose? That power is, in any event, the present reality, and constitutional scholars have created a cottage industry devising ingenious theories to show that the Court’s decisions result not from indefensible policymaking but from some esoteric form of constitutional interpretation.
The available means of limiting the Court’s power—constitutional amendment, impeachment, limitation of jurisdiction—have for various reasons turned out to be more theoretical than real. Hope once lay in the making of new appointments, but the failure of ten consecutive appointments by four Republican Presidents to change the direction established by the Warren Court has shown this hope, too, to be unreliable. Rule by judges can certainly be solved by abolishing judicial review, but the real problem resides less in judicial review as such than in the Court’s reading of the Fourteenth Amendment as a text without any definite meaning. That problem could be solved either by returning the Fourteenth Amendment to its original meaning or by giving it any definite meaning, thus making it a judicially enforceable rule.
The system of checks and balances set up by the Constitution has broken down where the Supreme Court is concerned; that institution now checks but is not checked by the other branches. President Lincoln dealt with the abuse of judicial power by announcing that although he would not defy the Court’s Dred Scott decision, neither would he accept it as settling the slavery issue. Congress and the President could similarly make clear that contemporary Supreme Court rulings of unconstitutionality without basis in the Constitution deserve not respect but censure. If the political will were there, means could be found to return the country to the experiment in popular self-government in a federalist system with which we began.
Today’s judiciary really is “imperial” and, to a remarkable degree, extra-constitutional. The courts have done serious damage to the American political and social order. They therefore pose a major political problem. But this is a problem that defenders of the constitutional and political order—call us conservatives—have so far failed to deal with successfully.
The failure goes back at least three decades. By the mid-1970’s, the “imperial judiciary” was already understood to be a problem, not just on the far reaches of the political Right, or among the constitutionally fastidious, but in (more or less) mainstream circles like this journal. The federal courts were in the process of imposing a disastrous educational and social policy of forced busing all around the country, based on a claim of an amazingly broad power to make up for alleged past wrongs. Meanwhile, the Supreme Court had in 1972 outlawed the death penalty as it was then administered, and in 1973 had struck down the abortion laws of all 50 states—both startlingly extra-constitutional actions.
But this moment of opportunity to curb the courts was lost. For one thing, Gerald Ford’s appointment of John Paul Stevens in 1975 meant the conservative bloc on the Court would remain a decided minority for the near future. And the Court itself undertook something of a tactical retreat. A narrow majority of Justices refused to extend busing to the suburbs in 1974, and lower courts gradually permitted busing decrees to unwind in subsequent years. The Court backed off on the death penalty in 1976, and even began to backpedal a bit on abortion, allowing for some restrictions. The ambiguous 1978 Bakke decision on racial preferences muddied those particular waters as well. And Jimmy Carter had no opportunity to appoint any new Supreme Court Justices, which perhaps made the whole issue less pressing in the 1980 presidential campaign than would otherwise have been the case.
Still, liberal activist judges were a concern of Ronald Reagan and of his supporters, and it seemed possible in 1981 that a new assault could be launched against the imperial judiciary. But the effort was half-hearted and ineffective. Reagan used his first appointment to add Sandra Day O’Connor to the Supreme Court in 1981. The rhetorical critique of the court by Attorney General Edwin Meese in 1985-86 failed to gain political traction. And then the Supreme Court nomination of Robert H. Bork, the single most prominent critic of the imperial judiciary, went down to defeat in the Senate in 1987.
The Bork defeat remains the most important and decisive setback in the attempt to rein in the imperial judiciary over the last decades. It is not just that Bork failed to get on the Supreme Court, and that Anthony Kennedy went on in his place. It is that critics of the imperial judiciary lost, or were perceived to have lost, the public debate. Anti-Bork Senators—Democrats and Republicans alike—paid no price for opposing him. The Reagan White House showed little interest in dwelling on the defeat, or in drawing public lessons from it. The succeeding Bush administration was even happier to move on.
As was, apparently, everyone else. Attempts to overturn the Court’s flag-burning decision fizzled. For his first vacancy, George H.W. Bush nominated David Souter. Meanwhile, the successful challenges to the Court were coming from the Left. In the late 1980’s, the Court had interpreted certain provisions of the 1964 Civil Rights Act somewhat narrowly. Congress rushed to overrule the Court’s “conservative” statutory interpretations, and, after some resistance, President Bush and the great majority of Republicans acquiesced. This set the pattern for the 1990’s, when, for example, Republican governors and legislators in state after state did little or nothing to restrict the use of racial preferences by state institutions. It thus should have come as no surprise this year that the Court majority was unwilling fundamentally to take on racial preferences, when even conservative elected politicians had shied away.
Meanwhile, although Clarence Thomas (barely) made it on to the Court in 1991, his attackers, too, paid no political price for their assault on his person. Indeed, one of the few Democrats who supported Thomas, Senator Alan Dixon of Illinois, lost in a primary challenge in 1992. Bill Clinton of course won the presidency that fall, and his first Court nominee in 1993 was Ruth Bader Ginsburg, who was every bit as ideological a figure as Bork, and should have been as controversial. The failure of conservatives to raise any appreciable fuss over her nomination spoke volumes about the asymmetry of the larger fight over the courts: the Left has been far more willing to play brutal partisan politics, and since it has also controlled the law schools and the elite media, it can dominate the debate in the circles of “respectable” opinion as well.
In the next few years, the Court struck down state-imposed term limits on members of Congress and the Colorado initiative limiting gay rights. Almost no Republicans succeeded, or even much tried, to make a political issue of these overrulings of democratic choices. Then, in Vermont, the state Supreme Court virtually required the legislature to institute civil unions for homosexuals. The legislature and the governor complied, and any backlash was muted. That governor, Howard Dean, is now the leading Democratic candidate for President, and though he signed the civil-unions legislation in the dark of the night, he now boasts about it before Democratic primary audiences. Meanwhile, the White House visibly heaved a sigh of relief earlier this summer when there was no Supreme Court vacancy, because it was worried about a fight on abortion, race, gay rights—all the issues we have allowed the Court to resolve.
So is a revival of constitutionalism hopeless? Perhaps not. The current, unprecedented Democratic filibusters in the Senate of several certifiably well-qualified Bush appellate-court nominees may prove a key moment of liberal overreaching. Can the Democratic party really sustain the position that no federal judge should be confirmed if he has, as one Democratic Senator complained, “deeply held” (i.e., conservative) moral beliefs—even if the nominee’s legal qualifications are impeccable and his public record distinguished? Furthermore, it is hard to see how the courts, and the Court, can fail to be a big issue in the 2004 presidential election, with the next President likely to be in a position to make at least two appointments, perhaps more. And what if the Massachusetts Supreme Court requires the legalization of gay marriage?
The Bork defeat occurred almost a generation ago. Perhaps the momentum is about to shift. But the obstacles remain great. Since the Left believes law is just politics, it plays judicial politics more ruthlessly than do conservatives. And because the Left is so aware of the judiciary’s key role in enforcing and sustaining left-wing cultural hegemony, and in blocking even minor conservative victories, it will fight bitterly.
Still, if 2002-2003 was, as Michael Greve of AEI has put it, “the term the Constitution died,” maybe now is an opportunity for fresh thinking and bold action to revive constitutionalism. Have the results of ideological and results-oriented liberal jurisprudence—for example, requiring the removal of the Ten Commandments from a state courthouse—finally gotten so out of hand as to make possible, at last, a return to first principles? Could 2004 be the year constitutionalism is reborn? Could a Bush second term transform the judiciary in the way FDR’s second term did?
Not likely. But, perhaps, possible.
I am appalled by a number of recent Supreme Court decisions that adopt an almost antebellum view of “state sovereignty” and limit congressional power to protect vulnerable minorities (such as the disabled). And, of course, there is Bush v. Gore, ably criticized in these pages by Gary Rosen (November 2001). Nonetheless, I am unwilling to accuse the current Court majority of “subverting” the Constitution. I was outraged during the 1980’s by the savagery of some of the criticisms directed at Justice William Brennan. Going well beyond disagreement, some critics implied that he self-consciously subordinated his obligation of constitutional fidelity to his desire to promote his liberal views. This was scandalously unfair. It is equally unfair to level similar accusations at conservative Justices with whom I often disagree. (Bush v. Gore may be an exception.)
There are various legitimate approaches to constitutional interpretation, especially if legitimacy is defined by the actual performance practices of well-trained lawyers and judges, and the current majority’s arguments certainly meet this standard. Honorable people can in good faith disagree, even vehemently, on what the Constitution means. One can oppose a point of view—and, indeed, advocate that the Senate refuse to confirm nominees with objectionable points of view—without suggesting that adherents of such views are “subversive.”
What most angers me about the current Court—and what is most genuinely “innovative”—is its claim to enjoy a monopoly of constitutional wisdom. In my book Constitutional Faith, I distinguish between “catholic” and “protestant” approaches to institutional constitutionalism. A catholic approach emphasizes the hierarchical authority of a given institution. A protestant (or “talmudic”) approach, by contrast, recognizes the existence of multiple interpreters, with constitutional doctrine at any given time being the outcome of a complex conversation among more-or-less equal interpretive partners (and adversaries).
The current Court is “papalist” in the extreme. The late Alexander M. Bickel once spoke of the Court as leading a “vital national seminar,” but our current seminar increasingly consists of a judicial monologue, with grades dependent on agreement with the Court’s own views. Today’s majority displays a barely concealed contempt for the possibility that Congress has some role to play in determining the meaning of the scarcely self-evident words of the Fourteenth Amendment.
Although the claim of exclusive judicial authority has been advanced most insistently in the past decade, it can be traced to a number of important earlier cases, including Cooper v. Aaron (the Little Rock school case) and United States v. Nixon (the tapes case). There is certainly responsibility to be assigned both to liberal and conservative Justices and, just as relevantly, to their respective academic acolytes.
Both conservatives and liberals are devoted to robust, albeit conflicting, notions of constitutional meaning vigorously enforceable by courts against legislative opposition. No Justices on the current Court and relatively few academics adopt the notion of constitutional minimalism (sometimes called “judicial restraint”) identified with Felix Frankfurter (and even Frankfurter, of course, supported the Warren Court’s signature decision in Brown v. Board of Education and signed the opinion in Cooper v. Aaron). A devotee of such restraint might well have upheld both of the Michigan affirmative-action programs and the egregious Texas sodomy law, on the grounds that both policies represent the outcome of relatively democratic political processes and that the Constitution does not speak with sufficient clarity to justify judicial intervention. Not a single Justice adopted such a position; conservative activists would have eliminated affirmative action, even as more liberal activists were unwilling to countenance the legal validity of the Texas statute.
It should go without saying that there are “circumstances in which the Supreme Court is justified in reaching beyond its own precedents.” The most obvious such circumstance is when one believes that the earlier decision was mistaken. Whatever the importance of precedent, no American, liberal or conservative, has ever viewed it as the be-all and end-all of legal argument.
What makes precedent truly interesting as a form of legal argument is that, as Jeremy Bentham noted almost two centuries ago, it seemingly requires adherence to what one believes to be stupid, counterproductive, or otherwise mistaken decisions. If, after all, one admires the earlier decisions, one is not following precedent; one is simply congratulating the earlier Court for having gotten it right. What is truly irrational is to feel tightly bound by precedents, even if one accepts the view that one ought not overrule them lightly.
It is not clear to me what “reaching beyond . . . the Constitution itself” means. If this suggests that legal argument should be conducted entirely within the “four corners” of the text of the Constitution, then there is the obvious embarrassment that well within these corners is language (in both the Ninth and Fourteenth Amendments) directing an interpreter’s attention to unenumerated rights, including the “privileges or immunities” of United States citizenship. If one ignores this language, one is not remaining “within” the Constitution; one is reducing it.
This language is, to be sure, highly indeterminate. The indeterminacy might counsel certain kinds of judicial modesty, but that is an institutionalist argument; it does not signify that the constitutional interpreter as such need not be concerned with the meaning of such terms as “privileges or immunities.” (To return to my first answer, one of the most objectionable things about the current majority is its impatience with the very idea that Congress might have a useful role to play in providing content to such terms, or to the equally mysterious command of “equal protection of the laws.”)
A last word on going “beyond the Constitution”: Jefferson and Lincoln are only the most prominent Presidents who have in effect defended going beyond the Constitution when great national exigencies demanded it (just as the Philadelphia Convention clearly went beyond the Articles of Confederation in simply ignoring the command that any amendment be by unanimous consent of all of the states). Indeed, no question is more central to the very enterprise of “constitutionalism.”
This is seen most dramatically in contemporary debates about the treatment of American citizens (and others) who have been declared, by little more than fiat, “illegal combatants” and, therefore, deprived of basic features of traditional American justice, including access to a lawyer or the elemental ability to gain evidence relevant to one’s defense. “Disappearances” similar to those that occurred in Chile and Argentina could indeed happen here, and this possibility should scare us far more than anything the Supreme Court has done (unless, of course, the Court chooses to emulate Judge Michael Luttig of the Fourth Circuit and declare that the President is basically above constitutional constraint when fighting the war against terrorism).
As for looking to transnational and international norms in trying to discern what such notions as due process or equal protection might mean, I see no harm in this. One might well say that a clearly expressed “local norm” ought to trump an international norm, but if we view the relevant constitutional text as quite indeterminate, then why should judges not look to other courts to see how they confronted similar issues?
After all, state courts within the United States often look to other state courts—not because a California court believes itself obligated to follow New York law but rather because a properly humble judge finds it useful to see what colleagues elsewhere have done. Similarly, no one suggests that the views of the European courts should be controlling, only that they provide potential insight. We might take a leaf here from the emphasis in The Federalist on “experience” as a guide, and from James Madison’s own emphasis in his writings on comparative government. Publius would undoubtedly have found it unbearably parochial to be told that we in the United States can learn nothing from the experience of other countries. So should we.
I am skeptical of any proposals “to limit the power of the Court” if this means, for example, new jurisdictional constraints. What I do strongly support is eliminating life-tenure for federal judges. Whatever might have been its appeal in the late 18th century, there is almost nothing to be said for it today. Judicial independence could be protected by single, nonrenewable, eighteen-year terms (which would allow a new appointment to the Supreme Court every two years), followed by full pensions. Steady turnover on the Court would guarantee the arrival of new Justices likely, for better and worse, to reflect changes in public opinion, including opinion about the legitimate role of the Court itself.
No one, however, should believe that any process of appointment or jurisdictional limitations would still the often bitter debates about constitutional interpretation. The Constitution is full of what Justice Robert Jackson termed “majestic generalities,” and it is quixotic to expect a consensus on their meaning. Practically speaking, the best ways to guard against judicial imperialism are, first, to assure a greater diversity of judicial perspectives through more frequent appointments and, second, to resist the “papalist” claims of judicial authority that, if taken seriously, would identify the Court with the Constitution.
The hue and cry over our “imperial judiciary” tends to be a post-hoc phenomenon. A wide array of unrelated judicial behaviors is condemned as “imperial,” but, strangely, this condemnation comes only in the wake of very particular court decisions, the results of which are unsatisfying to critics. A quick search on the Internet reveals that in recent months “imperial judiciary” has been invoked to describe at least one of the following four phenomena: an unelected judiciary usurping the role of a popularly elected Congress or state legislature or both, thus subverting the will of the majority; a court decision wholly unmoored from an “originalist” or strict-constructionist interpretation of the Constitution; a court whose members go about the business of judging by advancing their personal agendas and ideologies; elitist, arrogant judges not content merely to interpret and clarify the law but instead acting as a “super-legislature,” making up new law from the bench.
While any one of these sins is enough to get the Jeremiahs steamed up, it is rare to see all four at play in any one case. Indeed, Roe v. Wade notwithstanding, the most egregious example of a jurisprudential grand slam—i.e., a court disregarding the will of the majority and the Constitution, voting personal ideologies and inventing new law—was the Supreme Court majority opinion in Bush v. Gore.
And while Justice Antonin Scalia famously wrote, in his furious dissent in the 1992 abortion case of Planned Parenthood v. Casey, that “the imperial judiciary lives,” he was excoriating his brethren not for any of the above evils but for their unpardonable sin of adhering to stare decisis: the doctrine that cautions courts to avoid overturning established legal precedent. In short, in Casey he used the phrase not to denounce judicial activism but, on the contrary, to criticize jurisprudential restraint.
The real truth is that most courts engage in most of the four named behaviors most of the time. Anyone who tells you differently is either an idealist or Scalia. Only when the legal result is unpleasant do the words “imperial judiciary” get thrown around. A good example is the Supreme Court’s reasoning in the two Michigan affirmative-action cases, Gratz and Grutter. Conservative critics promptly complained of judicial activism: the Court, they said, was ignoring the plain meaning of the anti-discrimination amendments to reify a “diversity” justification for affirmative action. But an alternative reading is equally possible: honoring the conservative doctrine of stare decisis, a majority of the Court respectfully deferred to the will of the state of Michigan—which had found a compelling interest in continuing these programs.
Similarly with Lawrence v. Texas, in which our elite imperialist Justices were quickly characterized as having invented a right to privacy from the ether. But the decision could as readily be viewed as one in which a respectful bench enshrined a right recognized by the majority of the American populace for years now. Had the shoe been on the other foot—to use the Alan Dershowitz test—and had the Court disregarded the overwhelming preferences of the people in the name of the power of constitutional review, critics could more legitimately have cried “imperial judiciary,” just as they did when, in Romer v. Evans (1996), the Court struck down a Colorado state constitutional amendment repealing local gay-rights laws.
It is, in any case, easy to criticize only Gratz, Grutter, and Lawrence as examples of Supreme Court imperialism. Doing so, however, ignores much of the rest of the past term’s astonishingly activist jurisprudence. Take United States v. American Library Association, in which the same Court that has always favored First Amendment rights of free speech over congressional concerns about pornography in Internet cases determined that Congress could do a better job than local librarians at keeping smut out of the hands of children. Or take State Farm v. Campbell, in which Justice Kennedy set about “making law” by prescribing a permissible numerical ratio of punitive to compensatory damages. Or Nevada Department of Human Resources v. Hibbs, where the Court put the brakes on its own “federalism revolution” and approved Congress’s authority to enact the Family and Medical Leave Act, thus allowing state employees to sue the states.
Examples abound. Suffice it to say that, in advancing the claim that this is an imperial Supreme Court, one cannot point exclusively to the cases in which liberal results won the day. If we plan to tar and feather the Rehnquist court, let us at least agree to tar all of its civil-liberties decisions with the same brush.
There are two related factors behind the sweeping activism of this Rehnquist Court. Justice O’Connor is the first, in that she has become the single most important member of the Court. This is so not only because she routinely casts the crucial swing vote in the majority of close cases each year—during the most recent term she cast thirteen out of the fifteen deciding votes in 5-4 cases—but also because she is the architect of this Court’s decidedly unrestrained form of judicial restraint.
O’Connor has become famous for her tendency to decide cases narrowly, more in the manner of a biblical judge than of a Justice forging new precedent. Her case-by-case approach means that, by necessity, there is little “law” that flows from her decisions. There is only what O’Connor thinks in that specific case. Her views on abortion doctrine, church-state doctrine, the gerrymandering cases, and affirmative action all come down to whether the given facts move her. Appellate briefs are written to sway her, oral argument exists to persuade her, and, after an O’Connor opinion has been produced, confusion reigns in whole areas of the law.
By definition, this means that the role of the judiciary becomes elevated. Neither Congress nor the states can know in advance whether they will be smiled upon the next time around. O’Connor knows best. If there is indeed an imperial judiciary, she is its empress.
Feeding into this is the second factor—namely, the myth of the Rehnquist Court’s judicial restraint. Simply because a pack of Court decisions have struck a blow for federalism or states’ rights does not mean that states are more powerful than they once were. What were cases like Lopez or Morrison—federalism decisions invalidating democratically enacted acts of Congress—if not examples of the Rehnquist Court overriding the democratic process by judicial fiat? In fact, this supposedly restrained Court has overridden acts of Congress at twice the rate of the “wildly activist” Warren Court. With its new Eleventh Amendment jurisprudence immunizing the states from the reach of federal law, the Court has now set itself up as the arbiter of when Congress really, really means what it says it means.
The simple laws of zero-sum physics tell us that whenever a Court takes away rights with one hand (from the states, from Congress, or from the lower courts), and gives away rights with the other (to the states, or back to Congress, or back to the courts), it is behaving imperially. Judicial imperialism thus has nothing to do with liberal results in specific cases, and everything to do with the heavy-handed way in which the Supreme Court bosses around the other branches of government.
But is this a crisis for democracy? I doubt it. The system of checks and balances exists to protect against a too-powerful judiciary, just as it corrects for an over-powerful legislature (the fate toward which those seeking to end judicial review are barreling rather heedlessly). Nor is there anything new or radical about the way the Supreme Court is behaving right now. Alexander Hamilton, in Federalist 78, urged that limits on the legislature “can be preserved in practice no other way than through the medium of courts of justice.” Granting the courts the power of constitutional review has been with us since Marbury v. Madison was decided in 1803. As for the fact that the courts decide most of the most pressing social and political issues of the day, Alexis de Tocqueville observed in 1831 that “there is hardly a political question in the United States which does not sooner or later turn into a judicial one.”
Indeed, judges have been deciding cases without regard for the law ever since King Solomon ruled that the contending parties should split that baby. What aspect of today’s crisis over the “imperial judiciary” is, therefore, new and terrifying? Nothing, it seems, but the nomenclature.
This magazine has distinguished itself as one of the few conservative journals to denounce judicial activism—by which I mean nothing more than the failure of courts to defer to legislatures—when committed by judges on the Right as well as the Left. From the other side of the political spectrum, my own magazine has tried to be similarly evenhanded in opposing judicial supremacism wherever it occurs. This has a long history. Beginning in the Progressive era, under such editors as Learned Hand and Felix Frankfurter, the New Republic argued consistently for deference to state and federal economic regulation. In the Warren and Burger eras, under Alexander M. Bickel’s guidance, the magazine questioned the excesses of the Court’s interventions in the culture wars—most notably in Roe v. Wade—while remaining politically pro-choice. And for the past decade, during which I have had the privilege of being legal-affairs editor, it has tried to remain true to the same restrained tradition. We have argued for judicial abstinence across the board, regardless of whether we favor the political results, from Planned Parenthood v. Casey and the right-to-die cases to Bush v. Gore and the affirmative-action cases (just how is Grutter an example of judicial activism, anyway?) to the most recent example of judicial imperiousness, Lawrence v. Texas.
What has been striking, and dispiriting, is how few consistent allies have joined the judicial-abstinence crowd in our by now quixotic campaign against the imperial judiciary. As the published reaction in COMMENTARY to Gary Rosen’s article on Bush v. Gore shows (see the March 2002 issue), most conservatives tie themselves in knots to defend judicial activism when they like the results and to denounce it when they do not. And as the reaction to Lawrence and, earlier, to Roe has shown, liberals have been no less selective in their outrage at judicial adventurousness. In short, there does not seem to be a large constituency—on the courts, in the political arena, or among law professors—for principled abstinence, and for this unhappy but unsurprising state of affairs both liberals and conservatives deserve more or less equal blame.
Has the Supreme Court’s bipartisan embrace of judicial supremacism “subverted the constitutional order,” the editors ask? This formulation, I think, is a little too extreme. As a descriptive matter, the Supreme Court throughout its history has tended to follow rather than to challenge the broad currents of national opinion. Even during the Warren era, most of the Court’s interventions—most notably in the school-desegregation, school-prayer, obscenity, and sexual-discrimination cases—were popular with the country as a whole and resisted only briefly by Southern outliers who soon admitted defeat. It was not until Congress committed itself to fighting discrimination that meaningful desegregation and sexual integration occurred. The Courts, in other words, have a limited ability to precipitate social change, even when they presume to lead rather than follow.
The Rehnquist Court has been similarly canny both in picking and in conducting its fights, which is why the majority of its decisions have been popular with the country as a whole. When the Court has rooted its most controversial decisions in relatively solid constitutional arguments—as in the school-prayer, obscenity, and anti-miscegenation cases—the losers could understand and eventually accept the constitutional principles at stake, even if they disagreed with particular applications. As for those rulings—from the decision to uphold Roe to the decision to uphold Miranda—defended on thin constitutional grounds, these failed to inspire a national political backlash because they reached results that national majorities accepted.
The two exceptions have been the original Roe decision and Lawrence, both of which rest on especially shaky constitutional grounds. In both cases, the Court reached ahead of national opinion as a whole and triggered important political backlashes. But both Roe and Lawrence also confirm that the courts have limited ability actually to subvert the constitutional order. When a deeply felt current of popular opinion feels thwarted, it can always try to force a judicial retreat.
The conservative counterreaction to Roe ultimately failed in its attempt to force such a retreat because, by 1992, a majority of the country had come to accept the moderate compromise that Roe represented—namely, that early-term abortions had to be protected and late-term abortions could be restricted. The reaction to Lawrence may turn out to be similarly mixed. Although there is no national constituency today for reviving sodomy laws, the expansiveness of Justice Kennedy’s majority opinion, combined with expected lower-court decisions recognizing a right to gay marriage, may put some wind behind the sails of a constitutional amendment to define marriage as the union of a man and a woman. The fate of that amendment, in the end, will be determined not by the courts but by how strongly the American people feel about the issue.
All of this is to say that, ultimately, national majorities will have their way, and the Court cannot obstruct them for very long, regardless of whether or not it acts in a principled fashion.
Was the Court wrong to appeal in Lawrence to an “emerging democratic consensus” in America and Europe? I certainly think so. There may be some role for recognizing a shift in tradition when the shift is overwhelming and undeniable, as in Griswold (1965), where Connecticut was the only state in the Union to forbid the use of contraceptives. Even Justice Scalia has acknowledged that state constitutional amendments might be legitimate evidence of a changing national consensus about what counts as cruel and unusual punishment. But the Court in Lawrence was remarkably freewheeling in detecting an emerging consensus from the fact that only thirteen states continue to ban sodomy today, as opposed to the twenty-six that had done so in 1986.
This statistic confuses legislative and judicial repeals. In fact, only four state legislatures chose to repeal the bans during the past two decades, while eight states saw their bans struck down by state courts, often under the same expansive “privacy” reasoning that the Court failed to defend in Lawrence—hardly clear and convincing evidence of an emerging democratic consensus. Last year, in Atkins v. Virginia, the Court was similarly freewheeling when it managed to find a national consensus forbidding the execution of the mentally retarded after noting that fewer than half of the 38 states that allow capital punishment had recently passed laws forbidding the practice.
The Court’s glibness in detecting a national consensus before one has actually materialized suggests the special dangers of looking to international authorities for evidence of changing American beliefs. In pluralistic societies, there is so much local variation in opinions on important questions involving the boundaries between the public and private spheres—just compare the American, British, and German attitudes toward surveillance cameras—that abstract appeals to an international consensus threaten to airbrush away the very real differences in people’s values, making democratic evolution through the political process impossible.
As for proposals to limit the power of the Court, these, throughout American history, have tended to come from sore losers. The failed proposal by turn-of-the-20th-century Progressives to require a two-thirds vote when the Court struck down acts of Congress, and to allow supermajorities in Congress to overrule the Court, seemed like a confession of angry despair, much like similar proposals today from the other side of the spectrum. Besides, radical court-stripping reforms are generally unnecessary as well as counterproductive, as FDR learned too late. If the country cares enough about an issue, the Court will retreat on its own.
If there is no constituency for judicial abstinence, what can be done to persuade the courts to restrain themselves? In an ideal world, we could try to reconstruct a principled bipartisan consensus about the virtues of judicial deference to the legislatures. But such a consensus has existed in American history only at times when the Court has actively frustrated the efforts of a majority of the people to pass laws that are important to them. Perhaps the gay-marriage debate will precipitate a similar consensus today, but I fear that at this point, both sides are too tainted by their own opportunistic use of the courts to lay down their arms.
An army of interest groups on the Left and the Right has been mobilized to demand from the courts the victories they are unable to win politically; these groups have distorted the politics of judicial confirmation in a way that makes it highly unlikely that a principled defender of abstinence would be appointed to the bench, let alone confirmed. In other words, both liberals and conservatives still expect too much from the courts to realize that both sides might do better with fair fights in the political arena. In the meantime, enclaves like COMMENTARY and the New Republic can continue our embattled crusade. We need allies. Join us.
Cass R. Sunstein
The Supreme Court was right to uphold the affirmative-action program at the University of Michigan Law School. It was wrong to invalidate the affirmative-action program used by that university at the undergraduate level. The Court overreached in striking down the Texas sodomy law. But it would be hysterical to suggest that the Court has subverted the constitutional order, and there is no reason to take new steps to reduce its authority. Though the Rehnquist Court shows an unmistakable tendency to (mostly) right-wing activism, the situation is hardly grave.
To explain these claims, it is necessary to back up a bit.
Begin with some definitions. A decision reflects judicial restraint if it upholds practices under constitutional attack. A decision reflects judicial activism if it invalidates practices under constitutional attack. By these definitions, a decision that strikes down campaign-finance legislation counts as activist. A decision that upholds restrictions on abortion counts as restrained.
These definitions have a large advantage: neutrality. They are purely descriptive. If we define activism and restraint in these terms, we will not complain that decisions are “activist” only if they depart from our own view about how the Constitution is best interpreted.
Of course, a neutral definition, simply because of its neutrality, will not include any evaluation of what the Court has done. So let us acknowledge the possibility of unjustified activism and unjustified restraint. Unjustified activism occurs when the Court invalidates a practice that should be upheld under the best interpretation of the Constitution. Unjustified restraint occurs when the Court upholds a practice that should be invalidated under that interpretation.
Finally, some decisions of the Court are not merely unjustified; they are illegitimate. A decision is illegitimate if it is not plausibly connected with any of the usual sources of constitutional law: text, structure, history, or precedent. By this definition, some decisions show illegitimate activism, whereas other decisions show illegitimate restraint.
Activism and restraint come in both conservative and liberal forms. In its early-20th-century decisions invalidating maximum-hour and minimum-wage laws, the Court displayed a form of illegitimate conservative activism. The Warren Court displayed liberal activism, sometimes unjustified and sometimes illegitimate. Whether conservative or liberal, illegitimate activism is an extremely serious source of concern.
In Grutter v. Bollinger, the Supreme Court upheld an affirmative-action program for law-school admissions at the University of Michigan. In Gratz v. Bollinger, the Supreme Court invalidated an affirmative-action program for undergraduate admissions at the same university. In Lawrence v. Texas, the Court invalidated a Texas law forbidding sodomy between consenting adults.
The first point to notice here is that of the three opinions, Grutter is the only restrained one. It respects the prerogatives of nonjudicial institutions; it refuses to use judicial power to overturn the decisions of countless educational institutions throughout the nation. The activist decisions are Gratz and Lawrence.
The second point to notice is that, of the nine Justices on the Court, not a single one took the consistent path of restraint, which would have been to uphold the two affirmative-action programs and the Texas sodomy law. The third point to notice is that both activist decisions, Gratz and Lawrence, build on decades of judicial precedent and can be read quite narrowly. Both of them overreach, but they are hardly illegitimate.
The affirmative-action cases present the most obvious ironies. The most conservative members of the Court, Justices Scalia and Thomas, purport to be committed to “originalism” as an approach to constitutional interpretation. They seek to understand the Constitution in accordance with its original meaning. But when it comes to affirmative action, originalism apparently goes down the drain. Neither Scalia nor Thomas bothered to investigate whether the Fourteenth Amendment, as originally understood, forbids affirmative action. Their silence is deafening. In fact, careful historical studies suggest that the drafters and ratifiers of the Fourteenth Amendment were entirely comfortable with race-conscious programs designed to help African-Americans.
Perhaps the historical evidence has been misread. What is stunning is that Scalia and Thomas did not address it. Thomas’s eloquent opinion in Grutter emphasizes the view of only one historical figure: Frederick Douglass. To say the least, Douglass was not a drafter of the Fourteenth Amendment, and his views about racial equality are not a good indicator of the original meaning of the amendment.
Suppose that we reject originalism and adopt some other approach to constitutional interpretation. The problem is that it is hard to identify any approach that would justify a Supreme Court decision forbidding affirmative-action programs as they have been voluntarily adopted by countless institutions, both civilian and military, and at every level of government. The Court was right, in Grutter, to refuse to impose a principle of color-blindness that lacks any constitutional basis. Such a decision would be an extraordinary form of judicial activism, perhaps exceeding that in Roe v. Wade itself. It is ironic indeed that many conservatives have been calling for it.
In principle, there are good reasons to object to affirmative-action programs, some of which fail to promote their intended purposes and do more harm than good. But there is no good argument that courts should invalidate those programs on constitutional grounds. The question of affirmative action is under intense scrutiny at all levels of government. If the elected branches or the people want to do away with such programs, they are entitled to do so.
I do not contend that the Gratz decision was illegitimate. The majority was building, not implausibly, on its own precedents, and Gratz is a narrow ruling that does not foreclose a degree of experimentation at the national, state, and local levels. Nonetheless, the decision was wrong.
In Lawrence, the Court overruled its own 1986 decision in Bowers v. Hardwick and suggested that the Constitution’s due-process clause protects a broad right to engage in consensual sex. The Court referred to evolving social values and to international precedents. As a matter of simple policy, I celebrate the Court’s decision. The criminal prosecution of gays and lesbians has no place in a civilized society. But as a matter of constitutional law, there are serious problems.
Usually the Court is reluctant, and rightly so, to overrule its own precedents. In any case, the Court should be reluctant to use the due-process clause as a basis for invalidating legislation that intrudes on liberty or privacy. There was a narrower and more cautious ground, emphasized by Justice O’Connor, for invalidating the Texas law. The Court could have refused to decide whether sodomy laws are constitutional as such, and ruled on the basis of its recent precedents that, under the equal-protection clause, a state is not permitted to say, as Texas attempted to do, that homosexual sodomy is forbidden while heterosexual sodomy is not.
Alternatively, the Court could have pointed to an especially disturbing feature of sodomy laws. In practice, these laws have been used for rare prosecutions and as instruments for arbitrary and selective harassment by the police. As the legal scholar Alexander M. Bickel urged decades ago, the Court might strike down the use of such laws on purely procedural grounds. It was unnecessary to refer to emerging social values or to judgments in other nations, judgments of unclear relevance to interpretation of our Constitution. In practice, changing values do influence the Court; this has become a part of our constitutional tradition. But the Court should avoid contentious claims of this kind when it is able to do so.
A principled advocate of restraint could conclude that the Court should have upheld the Texas law. But I believe that it would have been best for the Court to strike it down under one of these narrower rationales. In my view, Justice O’Connor was right to say that a criminal law that punishes homosexual sodomy, while permitting heterosexual sodomy, is difficult to justify under the equal-protection clause. Does Lawrence then count as illegitimate? Does it threaten our constitutional order? This seems to me utterly implausible. While the Court overreached, it did not act lawlessly. Aside from Hardwick, the Court’s own precedents provide a great deal of support for the Court’s decision. The Lawrence decision can easily be read narrowly, as a decision to forbid the criminalization of consensual sexual activity, in a way that leaves the most difficult and disputed questions to the democratic process.
The important points are the more general ones. It is crucial to distinguish the Court’s restrained Grutter opinion, upholding the Michigan law school’s affirmative-action program, from its two activist ones in Lawrence and Gratz. In the latter two cases, the Court erred. But we should not overreact. To err is human. What I would like to emphasize here is that it is ironic, and more than a little disappointing, that so few critics of Lawrence are also critics of Gratz—and vice versa.
Perhaps because I expected nothing better, it is hard for me to get terribly exercised over the Supreme Court’s decisions in Grutter and Gratz, the two affirmative-action cases. Yes, these decisions make little constitutional or political-philosophical sense; in fact, they are unhappy reminders that, on these questions, the Constitution now means whatever Justice O’Connor decides it means at any given moment in her perusal of the signs of the times. Yes, Grutter and Gratz further delay the long-awaited and eminently desirable day (a mere 25 years, Justice O’Connor assures us) when Americans will be judged not by race but by the content of their character and the use they make of their native abilities. And yes, to the degree that Grutter and Gratz further embed race-consciousness in American public life, they damage our political culture.
But Grutter and Gratz do not threaten the moral structure of American democracy. Lawrence v. Texas does.
To put the matter bluntly: among the issues starkly posed by the terms of the Court’s decision in Lawrence is whether religiously grounded moral argument will be permitted any place in our common deliberation over public policy. An over-reaction? I think not. Here is why.
The Court is often (and rightly) accused of making things up as it goes along. Lawrence, however, is different, for Lawrence was predictable, and those who predicted an outcome like this some years ago have been shown to be not hysterics of dubious judgment but prescient patriots. For the hard fact of the matter is that Lawrence’s core declaration—that the state’s sole interest in sexual matters among consenting adults is to protect the unfettered expression of personal autonomy—makes eminent sense. That is, it makes eminent sense if you believe, with Justices O’Connor, Kennedy, and Souter in their joint opinion in Planned Parenthood v. Casey (1992), that “at the heart of liberty is the right to define one’s own concept of existence, of meaning, of the universe, and of the mystery of human life.”
Roundly mocked as a clumsy excursion into metaphysics, this bizarre formulation has been a time bomb waiting to explode for over a decade. With Justice Kennedy’s majority opinion in Lawrence, the explosion has now taken place. For this “sweet-mystery passage” (as Justice Scalia refers to it in his Lawrence dissent) has put tens of millions of faithful Christians and Jews on notice. There is a new and jealous god in the land: the imperial autonomous Self. Those who do not swear fealty to this new god are unwelcome in the deliberations of American public life, and their views and their institutions will be under “strict scrutiny”—the assumption being that they are bigots.
As the public debate since Lawrence has already shown, the first test of the new doctrine will involve marriage law. For to judge not only by Lawrence itself but by the standards the Court now invokes for its decision-making—“emerging” democratic consensus (presumably measured by the Justices and their clerks), evolving “human-rights” jurisprudence in Europe, Canada, and elsewhere (even more imperious than the jurisprudence in our own federal courts)—there is no principled ground on which the ancient understanding of marriage as the stable union of a man and a woman can pass constitutional muster. If the supreme liberty right to be secured by the Constitution is the unfettered expression of sexual autonomy, then on what constitutional grounds are we to “limit” marriage to the stable union of a man and a woman? Why not two men? Indeed, why not any reasonably stable configuration of consenting adults, of whatever sex and number?
Lawrence will also have nasty effects in other arenas of public life. The strict-scrutiny standard means that public education will also be under grave pressure. On what constitutional grounds will a local school district now decline to present heterosexuality and homosexuality as anything other than “lifestyle choices,” of no greater consequence than choosing a Springer spaniel over a parakeet as your family’s pet? Canadian clergy already confront the possibility that preaching classical biblical sexual morality could leave them open to legal action on grounds of indulging in “hate speech,” as their refusal to conduct same-sex “marriages” could render them liable to prosecution for denying a couple its “right to be happy” (in the phrase of a Canadian federal cabinet minister). Given Lawrence’s logic and its strict-scrutiny standard, it would be foolish to suggest that something similar is out of the question here.
Then, to go back to where I started, there is the matter of where biblically serious Jews and Christians fit into Justice Kennedy’s novus ordo seclorum. Instructive in this regard is the fate of morally and religiously serious Catholics in public life. Consider the recent experience of Alabama Attorney General Bill Pryor. Nominated to the 11th Circuit Court of Appeals, Pryor was pilloried by Democrats on the Senate Judiciary Committee this past summer because he thinks Roe v. Wade was wrongly decided. Pryor had reached that judgment (a judgment once shared by liberal icons like Archibald Cox and Alexander M. Bickel) as a man of logic, an adult knowledgeable about basic science, and a student of constitutional history. His judgment on Roe also coincided with the moral teaching of the Catholic Church, of which he is a devout member. For that reason, Pryor was subjected to an inquisition about his “deeply held beliefs” by Senator Charles Schumer of New York, who seems to think that only his deeply held beliefs about the abortion license created by Roe are to be countenanced.
And how did Schumer come to his position, which suggests that serious Catholics who bring religiously informed moral arguments to public life have no place on the federal bench—even if, like Bill Pryor, they have articulated those arguments in an accessible moral grammar and vocabulary that any serious person can engage? I suggest that Senator Schumer got there via the “sweet mystery of life” passage in Casey and its toxic effect on our public life.
Which means that Bill Pryor is not the only one at risk here. What was done to Alabama’s Catholic attorney general (elected to that office, incidentally, in a state where Catholics make up just 3.5 percent of the population) could as easily happen to an Orthodox Jew or an evangelical Protestant with the same views on Roe. Could a similar trajectory unfold under Lawrence? What would prevent it, if the Court continues to construe liberty as license and then enshrines license as the constitutional trump?
What, then, to do? On the gay-marriage implications of Lawrence, conservatives rightly respectful of the Constitution will be tempted to argue against the proposed Federal Marriage Amendment, on several grounds: that marriage has traditionally been a state matter (and an amendment would further erode federalism); that such an amendment would trivialize the Constitution; that there must be other remedies.
Perhaps there are other remedies. I doubt it, but the burden of proof, post-Lawrence, now rests with those who believe they exist. As for trivializing the Constitution, yes, it is a sadness that things have come to this; but one way or another, the Constitution is going to be amended with respect to marriage, and which is preferable: a Constitution tacitly amended by judicial fiat, or a Constitution amended through proper constitutional procedures? As for the further erosion of federalism, that, too, is an unhappy prospect; but the homosexual movement has made no secret of its intention to federalize the question of marriage and has now been given a powerful weapon to do so in Lawrence, which seems likely to buttress claims that gay “marriages” or “civil unions” recognized in one state must be recognized in all states under the “full faith and credit” clause. I see no constitutionally serious way to meet this challenge, absent a Federal Marriage Amendment.
Beyond the marriage issue lies the question of revitalizing the democratic process so that issues like marriage and abortion are decided by the people who should decide them and in the forums in which they should be decided: that is, by the people of the United States, acting through their duly elected representatives. If we are not to find ourselves in the ludicrous position of having to meet every act of federal judicial activism with yet another constitutional amendment, then surely it is time for a serious discussion of how to invoke Article III’s provision that Congress can declare “exceptions” to the Supreme Court’s appellate jurisdiction, in this case precisely to roll back the hegemony of the imperial judiciary.
I realize that this would take more courage than is usually found on Capitol Hill. But it is not easy to see another remedy that, over the long haul, will prevent the U.S. Constitution’s becoming analogous to the constitution of a banana republic. Happily, taking Article III seriously might just have the blessed effect of restoring a measure of robustness to the noble American experiment in democratic self-governance.
James Q. Wilson
When the Supreme Court renders an important decision, the reasons it gives are often more important than the choice it has made. Nowhere is this clearer than in the recent case of Lawrence v. Texas, which struck down that state’s anti-sodomy law. Though I was delighted to see the law disappear, I was disturbed by the reasons the majority supplied for effecting that disappearance. The Court greatly widened the right of privacy, loosely based on the due-process clause of the Constitution, and justified this right with scarcely anything more than rhetorical flourishes.
The majority repeated in 2003 a view it had first expressed in 1992 in Planned Parenthood v. Casey, where it emphasized its commitment to Roe v. Wade in these words: “At the heart of liberty is the right to define one’s own concept of existence, of meaning, of the universe, and of the mystery of human life.” It reinforced this view by referring to the decision of the European Court of Human Rights, representing 21 European nations, to strike down an Irish anti-sodomy law. It chose to ignore the hostility to sodomy in nations with far larger populations like China, India, Korea, and most African countries.
That some nations oppose sodomy and others ignore it is, of course, irrelevant to the issue. The central question for an American court is whether liberty means the right to define one’s own concept of existence and of the mystery of life. This issue, contrary to what some may suppose when they read such loose language, is very large. At stake is whether the Court’s majority is now inclined to support the libertarian view that no state has the authority to restrict conduct affecting only one’s self and one’s consenting partner. If so, this would mean that there could be no state law against prostitution, bestiality, heroin consumption, physician-assisted suicide, or gay marriage, because all of these behaviors involve private, intimate actions that harm no human outsider. I find it hard to believe that the Court would apply its new-found policy so broadly; after all, it is not indifferent to political realities. (It did, for example, unanimously uphold a state law banning physician-assisted suicide. But would it repeat that decision today?) But then, in 1973 I found it even harder to believe that a right to privacy could support a constitutional right to abortion; nevertheless, it did.
Obviously, the behaviors I have named differ in their moral significance. But the Court seems to be more interested in privacy than in morality. The police power of the states is all about morality; the Court’s view is that this concern, along with, presumably, the police power that can give legal effect to it, is unimportant. I think that morality is quite important, though, confronted with a constitutional ban, not all-important. Banning or restricting prostitution makes good sense (one wonders what the Court would have done in the Texas case had one gay man paid the other for his services). Banning bestiality and heroin consumption makes even more sense (I will not make the arguments here, though I wonder at any reader who needs arguments). And allowing states to decide on abortion policy is eminently wise (most large and many smaller states would authorize abortions, albeit with the kinds of restrictions now in place in many European countries).
The Texas sodomy case could well have been decided on grounds that would not have broadened the notion of privacy or weakened the concept of state authority. In her concurring opinion, Justice O’Connor pointed out that the Texas law, since it applied only to homosexuals and not to heterosexuals, could have been struck down for denying to people the equal protection of the laws. Writing in the New Republic, Jeffrey Rosen endorsed her view, and I join him. This would have left the states free to ban sodomy for heterosexuals and homosexuals equally—something that the vast majority of states would, of course, be most reluctant to do—or otherwise use their police powers with respect to many matters of supposedly private conduct, provided only that they did not infringe an explicit provision of the Constitution.
Justice Scalia, in his dissent, rejected not only the majority opinion but O’Connor’s point. Apparently he thinks that the use of the equal-protection clause would open the door to homosexual marriages: if a law tried to prevent such marriages, the Court might later say that the law denied equal protection to gays and lesbians. O’Connor’s (and my) response to Scalia would be this: a law designed to maintain the traditional and near-universal concept of heterosexual marriage, with all that this implies about the complementary relation between the two sexes and the prospect of effective child-rearing, provides a rational basis for making a distinction between heterosexuals and homosexuals with respect to how they live together. And if there is a rational basis for distinguishing between two kinds of activities, then a law making that distinction is not vulnerable to an equal-protection argument.
Scalia was right, however, to observe that a majority of the Court had in this case joined the culture war, “departing from its role of assuring, as a neutral observer, that the democratic rules of engagement are observed.” What now remains is to discover whether, given the Court’s position on some matters, a democratic engagement is even possible.
I have no wish to punish gays and lesbians; it makes great sense to abolish laws that hinder their private conduct. But surely this could have been accomplished on more economical grounds, ones that would not have opened the door to broader appeals asserting that anything that seems private must be legitimate. This argument, redolent of what Christopher Jencks has called a laissez-faire culture, is eating at what ties Americans together—eating, that is, at our view that we can reconcile individual liberty and social mores in ways that keep the country great.
Even when the right of privacy was being invented by the Supreme Court, it reserved special consideration for heterosexual marriage. The case that authorized the sale of contraceptive materials (Griswold v. Connecticut, decided in 1965) was about marital privacy, not human privacy. Even after the next case, Eisenstadt v. Baird (1972), broadened access to contraceptives for unmarried persons, the Court held that the state could still treat extramarital sex and nonmarital sex as evils. After Lawrence, all of those genuflections toward marriage have become moot.
The implication of the Lawrence case is that we have to find ways of defending heterosexual marriage against the claim, sure to arise, that anybody, under any circumstances, can be “married,” provided only that this is the act of mature, consenting adults. One can make a decent, though to me ultimately unpersuasive, case for homosexual marriages. But one cannot, I think, make that case so effectively as to require it to be imposed by judicial order.
We imposed abortion by judicial order, and at a time when some states, like New York, had already legalized it. What we got was endless acrimony, because the judicial order was not the result of balancing competing political interests or adapting laws to the needs and preferences of different states. European nations allowed abortion, but because this was done by parliaments rather than by courts, the laws that were put in place called for various forms of medical consent and parental approval and in some cases imposed strict time limits.
We may commit the same kind of judicial suicide if the Court chooses (as, given the Lawrence case, it logically must choose) in favor of homosexual marriages. Endless public debate will be stirred up even though the number of gay or lesbian marriages is likely to be rather small. (In the Netherlands, which legalized same-sex marriages in 2001, fewer than 10 percent of an estimated 50,000 same-sex couples have chosen marriage. In 2002 there were 1,900 such marriages, compared with 85,500 male-female ones.) When Vermont legalized same-sex unions, Vermonters probably adjusted with rather little fussing. But if the Court says that the Vermont practice—or, worse, one that uses the word “marriage”—must become a national norm, the country will be divided deeply and fight about it for years to come.
The defense-of-marriage acts passed by many states will provide no barrier to a Court decision; such acts will become unconstitutional. So now there will be a fight over a Federal Marriage Amendment to the U.S. Constitution. The odds are very great against its passing; although a majority of the country favors reserving marriage for heterosexual couples, it is not an overwhelming majority. The Court is marching us toward the Netherlands—only there, at least, politicians and not six robed jurists made the decisions.
Has the Supreme Court Gone Too Far?
Must-Reads from Magazine
Terror is a choice.
Ari Fuld described himself on Twitter as a marketer and social media consultant “when not defending Israel by exposing the lies and strengthening the truth.” On Sunday, a Palestinian terrorist stabbed Fuld at a shopping mall in Gush Etzion, a settlement south of Jerusalem. The Queens-born father of four died from his wounds, but not before he chased down his assailant and neutralized the threat to other civilians. Fuld thus gave the full measure of devotion to the Jewish people he loved. He was 45.
The episode is a grim reminder of the wisdom and essential justice of the Trump administration’s tough stance on the Palestinians.
Start with the Taylor Force Act. The act, named for another U.S. citizen felled by Palestinian terror, stanched the flow of American taxpayer fund to the Palestinian Authority’s civilian programs. Though it is small consolation to Fuld’s family, Americans can breathe a sigh of relief that they are no longer underwriting the PA slush fund used to pay stipends to the family members of dead, imprisoned, or injured terrorists, like the one who murdered Ari Fuld.
No principle of justice or sound statesmanship requires Washington to spend $200 million—the amount of PA aid funding slashed by the Trump administration last month—on an agency that financially induces the Palestinian people to commit acts of terror. The PA’s terrorism-incentive budget—“pay-to-slay,” as Douglas Feith called it—ranges from $50 million to $350 million annually. Footing even a fraction of that bill is tantamount to the American government subsidizing terrorism against its citizens.
If we don’t pay the Palestinians, the main line of reasoning runs, frustration will lead them to commit still more and bloodier acts of terror. But U.S. assistance to the PA dates to the PA’s founding in the Oslo Accords, and Palestinian terrorists have shed American and Israeli blood through all the years since then. What does it say about Palestinian leaders that they would unleash more terror unless we cross their palms with silver?
President Trump likewise deserves praise for booting Palestinian diplomats from U.S. soil. This past weekend, the State Department revoked a visa for Husam Zomlot, the highest-ranking Palestinian official in Washington. The State Department cited the Palestinians’ years-long refusal to sit down for peace talks with Israel. The better reason for expelling them is that the label “envoy” sits uneasily next to the names of Palestinian officials, given the links between the Palestine Liberation Organization, President Mahmoud Abbas’s Fatah faction, and various armed terrorist groups.
Fatah, for example, praised the Fuld murder. As the Jerusalem Post reported, the “al-Aqsa Martyrs Brigades, the military wing of Fatah . . . welcomed the attack, stressing the necessity of resistance ‘against settlements, Judaization of the land, and occupation crimes.’” It is up to Palestinian leaders to decide whether they want to be terrorists or statesmen. Pretending that they can be both at once was the height of Western folly, as Ari Fuld no doubt recognized.
May his memory be a blessing.
Choose your plan and pay nothing for six Weeks!
The end of the water's edge.
It was the blatant subversion of the president’s sole authority to conduct American foreign policy, and the political class received it with fury. It was called “mutinous,” and the conspirators were deemed “traitors” to the Republic. Those who thought “sedition” went too far were still incensed over the breach of protocol and the reckless way in which the president’s mandate was undermined. Yes, times have certainly changed since 2015, when a series of Republican senators signed a letter warning Iran’s theocratic government that the Joint Comprehensive Plan of Action (aka, the Iran nuclear deal) was built on a foundation of sand.
The outrage that was heaped upon Senate Republicans for freelancing on foreign policy in the final years of Barack Obama’s administration has not been visited upon former Secretary of State John Kerry, though he arguably deserves it. In the publicity tour for his recently published memoir, Kerry confessed to conducting meetings with Iranian Foreign Minister Javad Zarif “three or four times” as a private citizen. When asked by Fox News Channel’s Dana Perino if Kerry had advised his Iranian interlocutor to “wait out” the Trump administration to get a better set of terms from the president’s successor, Kerry did not deny the charge. “I think everybody in the world is sitting around talking about waiting out President Trump,” he said.
Think about that. This is a former secretary of state who all but confirmed that he is actively conducting what the Boston Globe described in May as “shadow diplomacy” designed to preserve not just the Iran deal but all the associated economic relief and security guarantees it provided Tehran. The abrogation of that deal has put new pressure on the Iranians to liberalize domestically, withdraw their support for terrorism, and abandon their provocative weapons development programs—pressures that the deal’s proponents once supported.
“We’ve got Iran on the ropes now,” said former Democratic Sen. Joe Lieberman, “and a meeting between John Kerry and the Iranian foreign minister really sends a message to them that somebody in America who’s important may be trying to revive them and let them wait and be stronger against what the administration is trying to do.” This is absolutely correct because the threat Iran poses to American national security and geopolitical stability is not limited to its nuclear program. The Iranian threat will not be neutralized until it abandons its support for terror and the repression of its people, and that will not end until the Iranian regime is no more.
While Kerry’s decision to hold a variety of meetings with a representative of a nation hostile to U.S. interests is surely careless and unhelpful, it is not uncommon. During his 1984 campaign for the presidency, Jesse Jackson visited the Soviet Union and Cuba to raise his own public profile and lend credence to Democratic claims that Ronald Reagan’s confrontational foreign policy was unproductive. House Speaker Jim Wright’s trip to Nicaragua to meet with the Sandinista government was a direct repudiation of the Reagan administration’s support for the country’s anti-Communist rebels. In 2007, as Bashar al-Assad’s government was providing material support for the insurgency in Iraq, House Speaker Nancy Pelosi sojourned to Damascus to shower the genocidal dictator in good publicity. “The road to Damascus is a road to peace,” Pelosi insisted. “Unfortunately,” replied George W. Bush’s national security council spokesman, “that road is lined with the victims of Hamas and Hezbollah, the victims of terrorists who cross from Syria into Iraq.”
Honest observers must reluctantly conclude that the adage is wrong. American politics does not, in fact, stop at the water’s edge. It never has, and maybe it shouldn’t. Though it may be commonplace, American political actors who contradict the president in the conduct of their own foreign policy should be judged on the policies they are advocating. In the case of Iran, those who seek to convince the mullahs and their representatives that repressive theocracy and a terroristic foreign policy are dead-ends are advancing the interests not just of the United States but all mankind. Those who provide this hopelessly backward autocracy with the hope that America’s resolve is fleeting are, as John Kerry might say, on “the wrong side of history.”
Choose your plan and pay nothing for six Weeks!
Michael Wolff is its Marquis de Sade. Released on January 5, 2018, Wolff’s Fire and Fury became a template for authors eager to satiate the growing demand for unverified stories of Trump at his worst. Wolff filled his pages with tales of the president’s ignorant rants, his raging emotions, his television addiction, his fast-food diet, his unfamiliarity with and contempt for Beltway conventions and manners. Wolff made shocking insinuations about Trump’s mental state, not to mention his relationship with UN ambassador Nikki Haley. Wolff’s Trump is nothing more than a knave, dunce, and commedia dell’arte villain. The hero of his saga is, bizarrely, Steve Bannon, who in Wolff’s telling recognized Trump’s inadequacies, manipulated him to advance a nationalist-populist agenda, and tried to block his worst impulses.
Wolff’s sources are anonymous. That did not slow down the press from calling his accusations “mind-blowing” (Mashable.com), “wild” (Variety), and “bizarre” (Entertainment Weekly). Unlike most pornographers, he had a lesson in mind. He wanted to demonstrate Trump’s unfitness for office. “The story that I’ve told seems to present this presidency in such a way that it says that he can’t do this job, the emperor has no clothes,” Wolff told the BBC. “And suddenly everywhere people are going, ‘Oh, my God, it’s true—he has no clothes.’ That’s the background to the perception and the understanding that will finally end this, that will end this presidency.”
Nothing excites the Resistance more than the prospect of Trump leaving office before the end of his term. Hence the most stirring examples of Resistance Porn take the president’s all-too-real weaknesses and eccentricities and imbue them with apocalyptic significance. In what would become the standard response to accusations of Trumpian perfidy, reviewers of Fire and Fury were less interested in the truth of Wolff’s assertions than in the fact that his argument confirmed their preexisting biases.
Saying he agreed with President Trump that the book is “fiction,” the Guardian’s critic didn’t “doubt its overall veracity.” It was, he said, “what Mailer and Capote once called a nonfiction novel.” Writing in the Atlantic, Adam Kirsch asked: “No wonder, then, Wolff has written a self-conscious, untrustworthy, postmodern White House book. How else, he might argue, can you write about a group as self-conscious, untrustworthy, and postmodern as this crew?” Complaining in the New Yorker, Masha Gessen said Wolff broke no new ground: “Everybody” knew that the “president of the United States is a deranged liar who surrounded himself with sycophants. He is also functionally illiterate and intellectually unsound.” Remind me never to get on Gessen’s bad side.
What Fire and Fury lacked in journalistic ethics, it made up in receipts. By the third week of its release, Wolff’s book had sold more than 1.7 million copies. His talent for spinning second- and third-hand accounts of the president’s oddity and depravity into bestselling prose was unmistakable. Imitators were sure to follow, especially after Wolff alienated himself from the mainstream media by defending his innuendos about Haley.
It was during the first week of September that Resistance Porn became a competitive industry. On the afternoon of September 4, the first tidbits from Bob Woodward’s Fear appeared in the Washington Post, along with a recording of an 11-minute phone call between Trump and the white knight of Watergate. The opposition began panting soon after. Woodward, who like Wolff relies on anonymous sources, “paints a harrowing portrait” of the Trump White House, reported the Post.
No one looks good in Woodward’s telling other than former economics adviser Gary Cohn and—again bizarrely—the former White House staff secretary who was forced to resign after his two ex-wives accused him of domestic violence. The depiction of chaos, backstabbing, and mutual contempt between the president and high-level advisers who don’t much care for either his agenda or his personality was not so different from Wolff’s. What gave it added heft was Woodward’s status, his inviolable reputation.
“Nothing in Bob Woodward’s sober and grainy new book…is especially surprising,” wrote Dwight Garner at the New York Times. That was the point. The audience for Wolff and Woodward does not want to be surprised. Fear is not a book that will change minds. Nor is it intended to be. “Bob Woodward’s peek behind the Trump curtain is 100 percent as terrifying as we feared,” read a CNN headline. “President Trump is unfit for office. Bob Woodward’s ‘Fear’ confirms it,” read an op-ed headline in the Post. “There’s Always a New Low for the Trump White House,” said the Atlantic. “Amazingly,” wrote Susan Glasser in the New Yorker, “it is no longer big news when the occupant of the Oval Office is shown to be callous, ignorant, nasty, and untruthful.” How could it be, when the press has emphasized nothing but these aspects of Trump for the last three years?
The popular fixation with Trump the man, and with the turbulence, mania, frenzy, confusion, silliness, and unpredictability that have surrounded him for decades, serves two functions. It inoculates the press from having to engage in serious research into the causes of Trump’s success in business, entertainment, and politics, and into the crises of borders, opioids, stagnation, and conformity of opinion that occasioned his rise. Resistance Porn also endows Trump’s critics, both external and internal, with world-historical importance. No longer are they merely journalists, wonks, pundits, and activists sniping at a most unlikely president. They are politically correct versions of Charles Martel, the last line of defense preventing Trump the barbarian from enacting the policies on which he campaigned and was elected.
How closely their sensational claims and inflated self-conceptions track with reality is largely beside the point. When the New York Times published the op-ed “I am Part of the Resistance Inside the Trump Administration,” by an anonymous “senior official” on September 5, few readers seemed to care that the piece contained no original material. The author turned policy disagreements over trade and national security into a psychiatric diagnosis. In what can only be described as a journalistic innovation, the author dispensed with middlemen such as Wolff and Woodward, providing the Times the longest background quote in American history. That the author’s identity remains a secret only adds to its prurient appeal.
“The bigger concern,” the author wrote, “is not what Mr. Trump has done to the presidency but what we as a nation have allowed him to do to us.” Speak for yourself, bud. What President Trump has done to the Resistance is driven it batty. He’s made an untold number of people willing to entertain conspiracy theories, and to believe rumor is fact, hyperbole is truth, self-interested portrayals are incontrovertible evidence, credulity is virtue, and betrayal is fidelity—so long as all of this is done to stop that man in the White House.
Review of 'Stanley Kubrick' By Nathan Abrams
Except for Stanley Donen, every director I have worked with has been prone to the idea, first propounded in the 1950s by François Truffaut and his tendentious chums in Cahiers du Cinéma, that directors alone are authors, screenwriters merely contingent. In singular cases—Orson Welles, Michelangelo Antonioni, Woody Allen, Kubrick himself—the claim can be valid, though all of them had recourse, regular or occasional, to helping hands to spice their confections.
Kubrick’s variety of topics, themes, and periods testifies both to his curiosity and to his determination to “make it new.” Because his grades were not high enough (except in physics), this son of a Bronx doctor could not get into colleges crammed with returning GIs. The nearest he came to higher education was when he slipped into accessible lectures at Columbia. He told me, when discussing the possibility of a movie about Julius Caesar, that the great classicist Moses Hadas made a particularly strong impression.
While others were studying for degrees, solitary Stanley was out shooting photographs (sometimes with a hidden camera) for Look magazine. As a movie director, he often insisted on take after take. This gave him choices of the kind available on the still photographer’s contact sheets. Only Peter Sellers and Jack Nicholson had the nerve, and irreplaceable talent, to tell him, ahead of shooting, that they could not do a particular scene more than two or three times. The energy to electrify “Mein Führer, I can walk” and “Here’s Johnny!” could not recur indefinitely. For everyone else, “Can you do it again?” was the exhausting demand, and it could come close to being sadistic.
The same method could be applied to writers. Kubrick might recognize what he wanted when it was served up to him, but he could never articulate, ahead of time, even roughly what it was. Picking and choosing was very much his style. Cogitation and opportunism went together: The story goes that he attached Strauss’s Blue Danube to the opening sequence of 2001 because it happened to be playing in the sound studio when he came to dub the music. Genius puts chance to work.
Until academics intruded lofty criteria into cinema/film, the better to dignify their speciality, Alfred Hitchcock’s attitude covered most cases: When Ingrid Bergman asked for her motivation in walking to the window, Hitch replied, fatly, “Your salary.” On another occasion, told that some scene was not plausible, Hitch said, “It’s only a movie.” He did not take himself seriously until the Cahiers du Cinéma crowd elected to make him iconic. At dinner, I once asked Marcello Mastroianni why he was so willing to play losers or clowns. Marcello said, “Beh, cinema non e gran’ cosa” (cinema is no big deal). Orson Welles called movie-making the ultimate model-train set.
That was then; now we have “film studies.” After they moved in, academics were determined that their subject be a very big deal indeed. Comedy became no laughing matter. In his monotonous new book, the film scholar Nathan Abrams would have it that Stanley Kubrick was, in essence, a “New York Jewish intellectual.” Abrams affects to unlock what Stanley was “really” dealing with, in all his movies, never mind their apparent diversity. It is declared to be, yes, Yiddishkeit, and in particular, the Holocaust. This ground has been tilled before by Geoffrey Cocks, when he argued that the room numbers in the empty Overlook Hotel in The Shining encrypted references to the Final Solution. Abrams would have it that even Barry Lyndon is really all about the outsider seeking, and failing, to make his awkward way in (Gentile) Society. On this reading, Ryan O’Neal is seen as Hannah Arendt’s pariah in 18th-century drag. The movie’s other characters are all engaged in the enjoyment of “goyim-naches,” an expression—like menschlichkayit—he repeats ad nauseam, lest we fail to get the stretched point.
Theory is all when it comes to the apotheosis of our Jew-ridden Übermensch. So what if, in order to make a topic his own, Kubrick found it useful to translate its logic into terms familiar to him from his New York youth? In Abrams’s scheme, other mundane biographical facts count for little. No mention is made of Stanley’s displeasure when his 14-year-old daughter took a fancy to O’Neal. The latter was punished, some sources say, by having Barry’s voiceover converted from first person so that Michael Hordern would displace the star as narrator. By lending dispassionate irony to the narrative, it proved a pettish fluke of genius.
While conning Abrams’s volume, I discovered, not greatly to my chagrin, that I am the sole villain of the piece. Abrams calls me “self-serving” and “unreliable” in my accounts of my working and personal relationship with Stanley. He insinuates that I had less to do with Eyes Wide Shut than I pretend and that Stanley regretted my involvement. It is hard for him to deny (but convenient to omit) that, after trying for some 30 years to get a succession of writers to “crack” how to do Schnitzler’s Traumnovelle, Kubrick greeted my first draft with “I’m absolutely thrilled.” A source whose anonymity I respect told me that he had never seen Stanley so happy since the day he received his first royalty check (for $5 million) for 2001. No matter.
Were Abrams (the author also of a book as hostile to Commentary as this one is to me) able to put aside his waxed wrath, he might have quoted what I reported in my memoir Eyes Wide Open to support his Jewish-intellectual thesis. One day, Stanley asked me what a couple of hospital doctors, walking away with their backs to the camera, would be talking about. We were never going to hear or care what it was, but Stanley—at that early stage of development—said he wanted to know everything. I said, “Women, golf, the stock market, you know…”
“Couple of Gentiles, right?”
“That’s what you said you wanted them to be.”
“Those people, how do we ever know what they’re talking about when they’re alone together?”
“Come on, Stanley, haven’t you overheard them in trains and planes and places?”
Kubrick said, “Sure, but…they always know you’re there.”
If he was even halfway serious, Abrams’s banal thesis (that, despite decades of living in England, Stanley never escaped the Old Country) might have been given some ballast.
Now, as for Stanley Kubrick’s being an “intellectual.” If this implies membership in some literary or quasi-philosophical elite, there’s a Jewish joke to dispense with it. It’s the one about the man who makes a fortune, buys himself a fancy yacht, and invites his mother to come and see it. He greets her on the gangway in full nautical rig. She says, “What’s with the gold braid already?”
“Mama, you have to realize, I’m a captain now.”
She says, “By you, you’re a captain, by me, you’re a captain, but by a captain, are you a captain?”
As New York intellectuals all used to know, Karl Popper’s definition of bad science, and bad faith, involves positing a theory and then selecting only whatever data help to furnish its validity. The honest scholar makes it a matter of principle to seek out elements that might render his thesis questionable.
Abrams seeks to enroll Lolita in his obsessive Jewish-intellectual scheme by referring to Peter Arno, a New Yorker cartoonist whom Kubrick photographed in 1949. The caption attached to Kubrick’s photograph in Look asserted that Arno liked to date “fresh, unspoiled girls,” and Abrams says this “hint[s] at Humbert Humbert in Lolita.” Ah, but Lolita was published, in Paris, in 1955, six years later. And how likely is it, in any case, that Kubrick wrote the caption?
The film of Lolita is unusual for its garrulity. Abrams’s insistence on the sinister Semitic aspect of both Clare Quilty and Humbert Humbert supposedly drawing Kubrick like moth to flame is a ridiculous camouflage of the commercial opportunism that led Stanley to seek to film the most notorious novel of the day, while fudging its scandalous eroticism.
That said, in my view, The Killing, Paths of Glory, Barry Lyndon, and A Clockwork Orange were and are sans pareil. The great French poet Paul Valéry wrote of “the profundity of the surface” of a work of art. Add D.H. Lawrence’s “never trust the teller, trust the tale,” and you have two authoritative reasons for looking at or reading original works of art yourself and not relying on academic exegetes—especially when they write in the solemn, sometimes ungrammatical style of Professor Abrams, who takes time out to tell those of us at the back of his class that padre “is derived from the Latin pater.”
Abrams writes that I “claim” that I was told to exclude all overt reference to Jews in my Eyes Wide Shut screenplay, with the fatuous implication that I am lying. I am again accused of “claiming” to have given the name Ziegler to the character played by Sydney Pollack, because I once had a (quite famous) Hollywood agent called Evarts Ziegler. So I did. The principal reason for Abrams to doubt my veracity is that my having chosen the name renders irrelevant his subsequent fanciful digression on the deep, deep meanings of the name Ziegler in Jewish lore; hence he wishes to assign the naming to Kubrick. Pop goes another wished-for proof of Stanley’s deep and scholarly obsession with Yiddishkeit.
Abrams would be a more formidable enemy if he could turn a single witty phrase or even abstain from what Karl Kraus called mauscheln, the giveaway jargon of Jewish journalists straining to pass for sophisticates at home in Gentile circles. If you choose, you can apply, online, for screenwriting lessons from Nathan Abrams, who does not have a single cinematic credit to his name. It would be cheaper, and wiser, to look again, and then again, at Kubrick’s masterpieces.
Is American opera in terminal condition?
At the Met, distinguished singers and conductors, mostly born and trained in Europe, appeared in theatrically conservative big-budget productions of the popular operas of the 19th century, with a sprinkling of pre-romantic and modern works thrown in to leaven the loaf. City Opera, by contrast, presented younger artists—many, like Beverly Sills, born in this country—in a wider-ranging, more adventurously staged repertoire that often included new operas, some of them written by American composers, to which the public was admitted at what were then called “popular prices.”
Between them, the companies represented a feast for culture-consuming New Yorkers, though complaints were already being heard that their new theaters were too big. Moreover, neither the Met nor City Opera was having any luck at commissioning memorable new operas and thereby expanding and refreshing the operatic repertoire, to which only a handful of significant new works—none of them, then or since, premiered by either company—had been added since World War I.
A half-century later, the feast has turned to famine. In 2011, New York City Opera left Lincoln Center, declaring bankruptcy. It closed its doors forever two years later. The Met has weathered a nearly uninterrupted string of crises that climaxed earlier this year with the firing of James Levine, the company’s once-celebrated music director emeritus. He was accused in 2017 of molesting teenage musicians and was dismissed from all of his conducting posts in New York and elsewhere. Today the Met is in dire financial straits that threaten its long-term survival.
And while newer opera companies in such other American cities as Chicago, Houston, San Francisco, Santa Fe, and Seattle now offer alternative models of leadership, none has established itself as a potential successor either to the Met or the now-defunct NYCO.1
Is American opera as a whole in a terminal condition? Or are the collapse of the New York City Opera and the Met’s ongoing struggle to survive purely local matters of no relevance elsewhere? Heidi Waleson addresses these questions in Mad Scenes and Exit Arias: The Death of the New York City Opera and the Future of Opera in America.2 Waleson draws on her experience as the opera critic of the Wall Street Journal to speculate on the prospects for an art form that has never quite managed to set down firm roots in American culture.
In this richly informative chronicle of NYCO’s decline and fall, Waleson persuasively argues that what happened to City Opera (and, by extension, the Met) could happen to other opera companies as well. The days in which an ambitious community sought successfully to elevate itself into the first rank of world cities by building and manning an opera house are long past, and Mad Scenes and Exit Arias helps us understand why.

As Waleson reminds us, it was Fiorello LaGuardia, the New York mayor who played a central role in the creation of the NYCO, who dubbed the company “the people’s opera” when it was founded in 1943. According to LaGuardia, NYCO existed to perform popular operas at popular prices for a mass audience. In later years, it moved away from that goal, but the slogan stuck. Indeed, no opera company has ever formulated a clearer statement of its institutional mission.
Even after it moved to Lincoln Center in 1966, NYCO had an equally coherent and similarly appealing purpose: It was where you went to see the opera stars of tomorrow, foremost among them Sills and Plácido Domingo, in inexpensively but imaginatively staged productions of the classics. The company went out of its way to present modern operas, too, but it never did so at the expense of its central repertoire—and tickets to its performances cost half of what the Met charged. Well into the 21st century, City Opera stuck more or less closely to its redefined mission. Under Paul Kellogg, the general and artistic director from 1996 to 2007, it did so with consistent artistic success. But revenues declined throughout the latter part of Kellogg’s tenure, in part because younger New Yorkers were unwilling to become subscribers.
In those days, the Metropolitan Opera, NYCO’s next-door neighbor, was still one of the world’s most conservative opera houses. That changed when Peter Gelb became its general manager in 2006. Gelb was resolved to modernize the Met’s productions and, to a lesser extent, its repertoire, and he simultaneously sought to heighten its national profile by digitally simulcasting live performances into movie theaters throughout America.
Kellogg was frustrated by the chronic acoustic inadequacies of the New York State Theater and sought in vain to move City Opera to a three-theater complex that was to be built (but never was) on the World Trade Center site. He retired soon after Gelb came to the Met. Kellogg was succeeded by Gérard Mortier, a European impresario who was accustomed to working in state-subsidized theaters. Mortier made a pair of fateful decisions. First, he canceled City Opera’s entire 2008–2009 season while the interior of the State Theater underwent much-needed renovations. Then he announced a follow-up season of 20th-century operas that lacked audience appeal.
That follow-up season never happened, because Mortier resigned in 2008 and fled New York. He was replaced by George Steel, who had previously served for just three months as general manager of the Dallas Opera. Under Steel, NYCO slashed its schedule to ribbons in a futile attempt to get back on its financial feet after Mortier’s financially ruinous year-long hiatus. Then he mounted a series of productions of nonstandard repertory that received mixed reviews and flopped at the box office.
The combined effect of Gelb’s innovations and the inept leadership of Mortier and Steel all but obliterated City Opera’s reason for existing. Under Gelb, the Met’s repertory ranged from such warhorses as Rigoletto and Tosca to 20th-century masterpieces like Benjamin Britten’s A Midsummer Night’s Dream and Alban Berg’s Wozzeck, and tickets could be bought for as little as $20. With the Met performing a more interesting repertoire under a wider range of directors, and in part at “people’s prices,” City Opera no longer did anything that the Met wasn’t already doing on a far larger and better-financed scale. What, then, was its mission now? The truth was that it had none, and when the company went under in 2013, few mourned its passing.
As it happened, Gelb’s own innovations were a mere artistic Band-aid, for he was unwilling or unable to trim the Met’s bloated budget to any meaningful extent. He made no serious attempt to cut the company’s labor costs until a budget crisis in 2014 forced him to confront its unions, which he did with limited success. In addition, his new productions of the standard-repertory operas on which the Met relied to draw and hold older subscribers were felt by many to be trashily trendy.
The Met has had particular difficulty adjusting to the reduced circumstances of 21st-century opera. Its 3,800-seat theater has an 80-foot-deep stage with a proscenium opening that measures 54 feet on each side. (Bayreuth, by contrast, seats 1,925, La Scala 2,030, and the Vienna State Opera 2,200.) As a result, it is all but impossible to mount low-to-medium-budget shows in the Metropolitan Opera House, even as the company finds it is no longer able to fill its increasingly empty house. Two decades ago, the Met earned 90 percent of its potential box-office revenue. That figure plummeted to 66 percent by 2015, forcing Gelb to raise ticket prices to an average of $158.50 per head. On Broadway, the average price of a ticket that season was $103.86.
Above all, Gelb was swimming against the cultural tide. Asked about the effects on audience development of the Met simulcasts, he admitted that three-quarters of the people who attended them were “over 65, and 30 percent of them are over 75.” As he explained: “Grand opera is in itself a kind of a dinosaur of an art form…. The question is not whether I think I’m doing a good job or not in trying to keep the [Metropolitan Opera] alive. It’s whether I’m doing a good job or not in the face of a cultural and social rejection of opera as an art form. And what I’m doing is fighting an uphill battle to try and maintain an audience in a very difficult time.”
Was that statement buck-passing defeatism, or a fair appraisal of the state of American opera? Other opera executives distanced themselves from Gelb’s remarks, and it was true—and still is—that smaller American companies have done a somewhat better job of attracting younger audiences than the top-heavy Met. But according to the National Endowment for the Arts, the percentage of U.S. adults who attend at least one operatic performance each year declined from 3.2 percent in 2002 to 2.1 percent in 2012. This problem, of course, is not limited to opera. As I wrote in these pages in 2010, the disappearance of secondary-school arts education and the rise of digital media may well be leading to “not merely a decline in public interest in the fine arts but the death of the live audience as a cultural phenomenon.”3

Does American opera have a future in an era of what Heidi Waleson succinctly describes as “flat ticket income and rising expenses”? In the last chapter of Mad Scenes and Exit Arias, she chronicles the activities of a group of innovative smaller troupes that are “rethinking what an opera company is, what it does, and who it serves.” Yet in the same breath, she acknowledges the possibility that “filling a giant theater for multiple productions of grand operas [is] no longer an achievable goal.”
If that is so, then it may be worth asking a different question: Did American opera ever have a past? It is true that opera in America has had a great and glorious history, but virtually the whole of that history consisted of American productions of 18th- and 19th-century European operas. By contrast, no opera by an American classical composer has ever entered the international major-house repertoire. Indeed, while new American operas are still commissioned and premiered at an impressive rate, few things are so rare as a second production of any of these works.
While a handful continue to be performed—John Adams’s Nixon in China (1987), André Previn’s A Streetcar Named Desire (1995), Mark Adamo’s Little Women (1998), and Jake Heggie’s Dead Man Walking (2000)—their success is a tribute to the familiarity of their subject matter and source material, not their musico-theatrical quality. As for the rest, the hard but inescapable truth is that with the exception of George Gershwin’s Porgy and Bess (1935), virtually all large-scale American operas have been purpose-written novelties that were shelved and forgotten immediately after their premieres.
The success of Porgy and Bess, which received its premiere not in an opera house but on Broadway, reminds us that American musical comedy, unlike American opera, is deeply rooted in our national culture, in much the same way that grand opera is no less deeply rooted in the national cultures of Germany and Italy, where it is still genuinely popular (if less so today than a half-century ago). By comparison with Porgy, Carousel, Guys and Dolls, or My Fair Lady, American opera as a homegrown form simply does not exist: It is merely an obscure offshoot of its European counterpart. Aaron Copland, America’s greatest composer, was not really joking when he wittily described opera as “la forme fatale,” and his own failed attempts to compose an audience-friendly opera that would be as successful as his folk-flavored ballet scores say much about the difficulties facing any composer who seeks to follow in his footsteps.
It is not that grand opera is incapable of appealing to American theatergoers. Even now, there are many Americans who love it passionately, just as there are regional companies such as Chicago’s Lyric Opera and San Francisco Opera that have avoided making the mistakes that closed City Opera’s doors. Yet the crises from which the Metropolitan Opera has so far failed to extricate itself suggest that in the absence of the generous state subsidies that keep European opera houses in business, large-house grand opera in America may simply be too expensive to thrive—or, ultimately, to survive. At its best, no art form is more thrilling or seductive. But none is at greater risk of following the dinosaurs down the cold road to extinction.
1 The “New York City Opera” founded in 2016 that now mounts operas in various New York theaters on an ad hoc basis is a brand-new enterprise that has no connection with its predecessor.
2 Metropolitan Books, 304 pages