Every once in a while, we come upon an event of seemingly minor import which, on reflection, turns out to betoken deep and problematic truths about our culture. The “Patenting of Life” decision is such a significant event.
On June 16, 1980, the Supreme Court of the United States ruled that a living microorganism was patentable matter, under the provision of patent laws enacted by Congress in 1952. In 1972, Ananda Chakrabarty, a microbiologist at the University of Illinois, had filed a patent application, assigned to the General Electric Company, asserting multiple claims related to a novel bacterial strain that he had obtained with the aid of techniques of genetic engineering, a strain capable of degrading many components of crude oil and thus potentially useful in the biological control of oil spills. In addition to readily granted process claims for the method of producing the bacterium, and claims relating to the mode of carrying such bacteria to water-borne oil spills, Chakrabarty claimed patent rights to the bacteria themselves. This last claim, at first rejected by the patent examiner and then by the Patent Office Board of Appeals, was finally granted on appeal by the United States Court of Customs and Patent Appeals in 1979, in a decision affirmed by a narrow five-four vote of the Supreme Court a year later (Diamond v. Chakrabarty, 447 U.S. 303).
The case attracted considerable attention, but the Court’s decision fell short of the momentous ruling some had anticipated. For one thing, the Court was divided. For another, both sides agreed that the question before them was simply “a narrow one of statutory interpretation,” requiring the Court to construe the language of that section of the patent law which defined patentable matter. The Court’s opinion, and the dissent, were largely technical. Thus, readers of the opinion who looked for large philosophical dicta about man’s art and living nature or about genetic engineering came away disappointed. Alas, it looked as if the Court was, for a change, being simply judicious, doing no more than its proper work.
Yet the decision was not inconsequential. Indeed, it has already contributed to numerous recent developments. Patent claims are now pending for other living microorganisms, as well as for animal cell lines propagated in tissue culture, allegedly valuable for uses ranging from a cheaper means of making penicillin to novel treatments for specific cancers. Genetic-engineering firms are springing up all around. Academic molecular biologists are being courted by industry, with astounding financial incentives. Major grants for genetic-engineering research to universities have been given by industries in exchange for patent rights to any resulting useful and profitable discoveries. Under such an agreement, Hoechst, the German chemical company, has just given $50 million for a new genetic-engineering institute to Harvard University, which, after considerable faculty opposition, had only recently abandoned plans to form its own genetic-engineering company. Many industries are tooling up in anticipation of the flood of new organisms and cell lines to be brought into being with the aid of human ingenuity, spurred on by our ingenious stroke to encourage genius, the patent laws. True, the art of genetic engineering was born and would grow without the Chakrabarty decision. But there is no question that it will now grow much, much faster.
But the Chakrabarty decision is useful in yet another, perhaps more fundamental, respect. It is useful for thought, for reflection on the relation between modern science and politics, and between science and the American polity in particular, especially as that relation is embodied in and exemplified by the patent laws. Indeed, the Chakrabarty case provides a wonderful mirror in which we can see fundamental features of the American polity, and therewith of modernity itself, and discern some of its deeper tensions: the relation of private interests or rights and the common good; the purposes of science and thought and their relation to practice and to the public interest; and, finally, the prevailing view of man’s place in and attitude toward the natural world. Before looking into that mirror, we need to describe some contours of the broader background.
Science in the public interest is a guiding intention of modern science and has been since its origins in the 17th century. Though we hear much about the distinction between pure and applied science—and I too shall distinguish them later—we must begin by emphasizing the essentially practical and social intention of modern science as such. Unlike ancient science, which sought knowledge of what things are, to be contemplated as an end-in-itself satisfying to the knower, modern science seeks knowledge of how things work, to be used as a means for the relief and comfort of all humanity, knowers and non-knowers alike. Standing on the threshold of the new science of mathematical physics, Descartes appeals for popular support of his researches by announcing the good news of knowledge “very useful in life”:
[S]o soon as I had acquired some general notions concerning Physics . . . I believed that I could not keep them concealed without greatly sinning against the law which obliges us to procure, as much as in us lies, the general good of all mankind. For they caused me to see that it is possible to attain knowledge which is very useful in life, and that, instead of that speculative philosophy which is taught in the Schools, we may find a practical philosophy by means of which, knowing the force and the action of fire, water, air, the stars, heaven, and all the other bodies that environ us, as distinctly as we know the different crafts of our artisans, we can in the same way employ them in all those uses to which they are adapted, and thus render ourselves the masters and possessors of nature. This is not merely to be desired with a view to the invention of an infinity of arts and crafts which enable us to enjoy without any trouble the fruits of the earth and all the good things which are to be found there, but also principally because it brings about the preservation of health, which is without doubt the chief blessing and the foundation of all other blessings in this life. For the mind depends so much on the temperament and disposition of the bodily organs that, if it is possible to find a means of rendering men wiser and cleverer than they have hitherto been, I believe that it is in medicine that it must be sought. (Discourse on Method, Part VI. Emphasis added.)
The announced goal of the new science is the mastery and possession of nature, and the purposes of mastery are humanitarian: the conquest of external necessity, the promotion of bodily health and longevity, the provision of psychic peace or a new kind of wisdom.
Even the notions and ways of science manifest a conception of knowledge for the sake of power: nature is conceived mechanistically, and explanation is in terms of efficient or moving causes; hidden truths are gained by acting on nature, i.e., through experiment; inquiry is made “methodical,” through the imposition of order and schemes of measurement “made” by the intellect; knowledge, embodied in laws rather than theorems, becomes “systematic” under the rules of a new mathematics expressly invented for this purpose. Modern science rejects, as meaningless or useless, questions that cannot be answered by application of the method. In all these fundamental ways, modern science has a practical cast. This remains true of the science practiced even by those great scientists who are driven by curiosity and the desire for truth and who have themselves no interest in that mastery and possession of nature for which science is largely esteemed by the rest of us.
Though essentially linked to practice, modern science is, in certain important respects, morally neutral. It does not itself seek knowledge of the good. Indeed, it looks upon nature, its object, as neutral and indifferent to the good or the beautiful. Moreover, the technical power it yields can be used for good or ill. Nevertheless, modern science is guided overall by an ethical—if prideful—intention: a lifting up of downtrodden humanity, a reversal of the curses laid upon Adam and Eve, and, ultimately, a restoration of the tree of life, by means of the tree of knowledge. Never mind the question how a science invincibly ignorant of and in principle skeptical about standards of better and worse can know how to do good for mankind. The new humanitarians simply point to the seemingly self-evident truth that life becomes better as it becomes less poor, nasty, brutish, and short.
Gradually, and increasingly as it began to make good its promise of technological fruit from the tree of useful knowledge, science was welcomed into partnership with the political community. Yet thoughtful men disagreed sharply about how science and the useful arts would and should relate to morals and politics. Close to one extreme was the view that popular enlightenment—and particularly the teachings of modern science—undermined ruling opinions and beliefs, especially religious beliefs, necessary for a good regime, and that unbridled progress would lead to luxury, the liberation and inflation of vain and foolish desires, and the debasement of morals and taste. Though they welcomed science’s contributions to health and plenty, these thinkers argued the need for settled laws, customs, and mores to restrain the turbulent and licentious souls of men. In the absence of such restraints, the conquest of nature without could enslave us to unruly nature within.
Even Francis Bacon, perhaps the greatest proponent of the marriage of science and politics, understood that the novelty sought by the former was not always congenial to the stability required by the latter. Bacon’s image of the best community, presented in his New Atlantis, does indeed award a central place to Baconian science: the jewel and lantern of the kingdom is a prodigious, state-supported, scientific research foundation, called Solomon’s House or the College of the Six Days Works (which, by the way, artfully creates new species through genetic manipulation). But the community is not enlightened. The populace has little access to the scientific goings-on, and the scientists practice self-censorship to avoid publicizing dangerous knowledge. A benevolent state, with the help of or perhaps under the direction of the scientists, apparently closely regulates the lives of its inhabitants by means of austere rituals, state-supported (albeit tolerant) religion, and—one suspects—perhaps even some scientifically-based means of behavior modification. According to Bacon, the mixture of science and politics, though desirable and even urgent, was potentially explosive and needed delicate handling.
In contrast, some of the Enlightenment thinkers of the 18th century and their descendants were much more sanguine about the easy compatibility of science and society. The most optimistic ones prophesied an unlimited and coupled progress of science and morality: the progress of science and technology would conquer necessity and alleviate human misery, and man thus emancipated from nature’s harsh and cruel necessities would flower morally into the good creature only his neediness prevents him from being. Man once liberated and enlightened, the external restraints imposed on him by law, mores, and religion would eventually become unnecessary. In the end, the state would wither away; politics, the rule over men, would be replaced by administration, the management of things. Our various species of Marxism are the lineal descendants of this messianic view of human perfectibility based on progress in the arts and sciences.
To summarize, whereas pre-modern political thinkers and statesmen placed their trust in law and morals, and doubted the ethical and social benefits of inquiry, modern science, devoted to the public good, found a political home and able defenders in modern, liberal regimes. Nevertheless, the proper balance and relation between science-technology and law or morals, between change and stability, remained an open question.
The founders of the American republic, though influenced by optimistic Enlightenment thought, were hardly utopians; they pursued a middle course. They knew human nature well enough not to underestimate the crucial importance of good laws, education, and also religion for the preservation of decency and public-spiritedness. But they also appreciated fully the promise of science. The American republic is, to my knowledge, the first regime explicitly to embrace scientific and technical progress and officially to claim its importance for the public good. The United States Constitution, which is silent on education and morality, speaks up about scientific progress. It does so in the course of defining the powers of Congress (Article I, Sec. 8):
The Congress shall have Power . . . To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
“To promote the Progress of Science and useful Arts.” It is curious that this provision has come to be known as the Copyright and Patent Provision rather than the Provision on Progress in the Sciences and Useful Arts; for such progress is the explicit goal and purpose of the congressional power to enact the copyright and patent laws. These statutes, which we think of largely as protecting so-called intellectual property, were in the first instance thought of as useful to scientific and technical progress.
But progress was not itself the final end. Congress was given power to promote the useful arts, not the useless ones (e.g., the liberal arts or the fine arts). In the Federalist (No. 43), Madison speaks of the unquestionable utility of this congressional power to promote progress, and the context suggests that by its utility he means its usefulness to the public good.1 From this we infer that the useful arts and sciences were meant to be subordinated to, and in the service of, the well-being of the nation. Not progress for progress’s sake, but progress that might serve the enduring and unchanging goals set forth in the Preamble to the Constitution, among them, to provide for the common defense and promote the general welfare, and thus, indirectly, to establish justice, to insure domestic tranquility, and to secure the blessings of liberty to ourselves and our posterity. The American republic embraces change, but in the service of duration; science, but in the service of liberty and justice, defined by law. In this respect the Copyright and Patent Provision is perhaps only the most obvious example of the American way. For the entire Constitution is a deliberate embodiment of balanced tensions between science and law and between stability and novelty, inasmuch as the Founders self-consciously sought to institutionalize the improvements of the “science of politics,” and in such a way that would stably perpetuate openness to future change.
How best to promote the arts and sciences? How to induce talented men to behave for the common good? The Constitution once again makes a clear and measured choice: private enterprise, governed and protected by law. Other possibilities were considered by the Convention. Madison had proposed, among the powers of Congress, “To establish a university” and “To encourage by premiums and provisions, the advancement of useful knowledge and discoveries.” Yet the Convention rejected the establishment of a national university and the federal support of science through prizes and provisions, and adopted, apparently without debate, the provision which encourages progress by adding the fuel of interest to the fire of genius.
This reliance on self-interest and the motive of gain might be attributable to the Founders’ hard-headed appraisal of the selfish tendencies of most human beings; and cynics have sometimes attributed such motives to the Founders themselves. But a careful look at the constitutional text indicates that the patent provision is a matter not only of calculation but also of justice. Congress is empowered to secure, that is, to make safe and protect, a right of authors and inventors to the fruits of their genius and energy, a right which, by implication, antedates the Constitution. Indeed, this “Right of Authors and Inventors to their respective Writings and Discoveries” is the first and only right mentioned in the body of the original Constitution of 1787 (that is, before the Amendments and Bill of Rights). To quote Madison: “The copyright of authors has been solemnly adjudged in Great Britain to be a right of common law. The right to useful inventions seems with equal reason to belong to the inventors.”
There is justice, then, in the claims of copyright and patent. To be sure, doing justice will be complicated if the patent prize is awarded only for finishing first in a race in which the winner ran only the last leg of a long relay, tens or hundreds having assisted him. Nevertheless, everyone sees the at least prima facie claim that justice requires protecting the labors of the imaginative and industrious against theft by the sly and lazy. If theft of property is wrong, the right of patent is right, at least in some sense. The foundation of the patent law is not only utilitarian, but also ethical.
Indeed, it is ethical also in its consequences for character. The law not only protects individual rights and prevents injustice; it also rewards and encourages the energetic cultivation of the mind and the intellectual virtues of inventiveness, order, and precision, and promotes in publicly beneficial ways the moral virtues of ambition and industry. These likely consequences were in fact very important to many of the Founders, and their decision to fuel private enterprise was partly based on these hoped-for improvements in character and mind. To be sure, the mind has other and higher objects than inventions, and ambition and industry do not exhaust the moral virtues. Still, a respect for the human mind and an appreciation of efforts to realize its potential are built into our constituting law. One errs to see here only greed and base calculation.
Patent laws serve the public interest at the same time as they protect private rights. The community gains publication, likely development of inventions, a share in the resulting prosperity, and, should it desire it, some legislative hand on the throttle of progress. The patent laws of 1790, enacted by the first Congress, thus established what can rightly be called an ethical-social contract of science in the public interest. In order to secure their rights, authors and inventors had to disclose, that is, make gift to the public of their findings: no protection without publication. (In choosing to promote the widest possible publication, the Founders showed less concern than Bacon for the problem of dangerous knowledge, a matter to which we shall return.) Moreover, the exclusive right was obtained for only a limited period, to encourage prompt development and production of new inventions; thus, society might reap the benefits of innovation more quickly than if the right were of unlimited duration. All in all, the Copyright and Patent Provision and the patent law are most ingenious, public-spirited, and just inventions—themselves worthy of patent protection. Madison praised the former, saying, “The public good fully coincides with the claims of individuals.” Abraham Lincoln (in his lecture on “Discoveries and Inventions,” February 11, 1859) listed the latter, along with the arts of writing and printing and the discovery of America, among those few inventions and discoveries in the history of the world, most valuable “on account of their great efficiency in facilitating all other inventions and discoveries.”
Time has vindicated these judgments. We are showered on all sides by countless benefits of this far-sighted invention of the American mind, which harnessed science and artful intelligence to the carriage of state and which kept it moving by means of the carrot of self-interest. It would seem hypocritical and, what is worse, ungrateful, to question this arrangement, all the more so in the light of the marvelous contributions to our health and prosperity that we can now obviously expect from the industrial exploitation of learning how to get microorganisms to do our manufacturing.
And yet, honesty compels us to point out certain peculiarities of this arrangement, peculiarities that might eventually give rise to serious difficulties, not only for the union of science and the American polity, but also for each of the partners taken separately. First, it should be observed that the contract formed by the patent law brings together, in stressful if fertile union, certain contradictory, or at least inhospitable, partners and principles: self-interest and common good; monopoly and liberty; the ownership of ideas and the sharability or publicity of speech and thought. The patent law seeks to promote the common good by licensing private interest, thus running the risk of fostering a crass selfishness that in any particular instance might sacrifice public interest to private gain and that eventually renders men generally indifferent, or even hostile, to the common good. It seeks indirectly, by means of progress and prosperity, to safeguard political liberty, but it does so by legitimating monopoly—albeit of limited duration—which is the antithesis of liberty. It rewards publication and, therefore, presupposes the sharability of thoughts and ideas, yet it does so by licensing the private ownership of these works of the mind.
Second, there is the already noted built-in tension between progress and stability. Indeed, the very idea of a patent law is something of an oxymoron: it is a hybrid of two opposing principles, change and order, that live always in tension with each other. Law as law stands for order and stability. It not only sets limits and restrains undesirable conduct. It also embodies our opinions, albeit our variable opinions, about what is just and good. Though subject to change, law as such points to what is permanent. A law to encourage progress is thus, at bottom, a paradoxical law. In a way, though it promotes change, as an expression of legitimacy the patent law still, at least formally, accords primacy to order. Absent such a law, innovation would lack legal protection and even legitimacy. Thus, the supremely ingenious invention of the patent law could not itself have been patented, there being as yet no law to protect it.
In principle, the Constitution goes further than this formal subordination. The constitutional Patent Provision, we have suggested, maintains a balance by subordinating progress to the unchanging, substantive goals of justice and liberty. But in practice, the patent law threatens to tip the scale in favor of runaway change. Increasingly encouraged, the horses of technological progress break into full gallop, seemingly out of anyone’s control, and the community is left with the difficult task of adjusting after the fact to the paths traveled and the changes wrought. Sometimes, when progress comes before the bar—as in the present case—even learned men judicially charged with upholding the law choose instead injudiciously to redefine it, in order to keep pace with novelty.
Finally, there are potential strains in the American polity’s contract with science, insofar as the polity accepts without reservations the methods, principles, and purposes of modern natural science. For example, the practice of experimentation, when extended to human subjects, often places science on a collision course with the rights of individuals. Worse yet, our fundamental political principles, the natural rights enunciated in the Declaration of Independence, acquire no support from the “nature” described by the laws of physics and chemistry. The “nature” of the physicists, to say the least, offers no ground for rights, let alone for the belief that we have these rights as endowments from our Creator. Further, in biology, the teachings of evolution seem to deny to human beings any special place in the whole. And when, encouraged by these teachings, the project to relieve man’s estate through mastery and possession of nature approaches making fundamental alterations in human nature itself, Americans—everyone—must begin to wonder whether the goals and presuppositions of the entire venture are sound and even whether modern science’s notions of knowledge and nature are simply and unqualifiedly true.
Curiously, the recent Supreme Court decision in the Chakrabarty case points up all these difficulties, notwithstanding the narrow question it decided and the limited character of its holding. Various commentators have raised broader questions about the meaning of the Chakrabarty decision and its consequences: questions about the desirability of genetic engineering, about the dangers of the further commercialization of science, and about the propriety of owning an entire living species. By examining each of these questions, we shall be led to discover some of the limitations of the contract between modern science and the American polity, as it is embodied in our patent law.
First: does the protection of private rights and interests in new discoveries and inventions always serve the public good? Is the awarding of patents always in the public interest? The answer to these questions necessarily turns, in any given case, on the nature of the particular discovery and invention. More generally, it turns on the question, Is progress or technical innovation always in the public interest? If the innovations are simply or largely beneficial, their encouragement through award of patents would still reflect harmony between private interest and common good. But what about dangerous discoveries and inventions? Does the community serve its best interests when it stimulates their development through patent grants? Might not even the publication of the existence of the dangerous invention prove harmful to the public interest? What is the American polity’s remedy for this problem of dangerous innovation?
Genetic engineering is regarded by many as just such a dangerous technology, and one posing no ordinary dangers. For in human genetic engineering, the previous beneficiary of the power to alter nature becomes himself subject to that power and those alterations. The power to engineer the engineer sharply raises questions about the meaning and limits of progress.
It was, I am sure, concerns about the dangers of genetic engineering, especially human genetic engineering, that gave the Chakrabarty case such wide interest. In argument before the Supreme Court, grave risks allegedly associated with genetic manipulation were cited as a reason why the patent should be denied. The majority opinion states:
We are told that genetic research and related technological developments may spread pollution and disease, that it may result in a loss of genetic diversity, and that its practice may tend to depreciate the value of human life.
These opinions were advanced and are held by reputable scientists, among others, whose concerns range from fears about new biohazards to doubts about our possessing the wisdom requisite to redesign human genes or to interfere designedly in the course of evolution. There is, to be sure, much disagreement about the degree to which these fears and doubts are warranted, but there is no doubt that the matters at stake are serious. The Court indeed acknowledged the seriousness of such considerations, but held them nonetheless irrelevant to its decision, partly on the ground that its negative decision would not prevent such research, partly on the ground that it lacked the competence or the constitutional authority to decide how much and what kind of genetic research our society should foster.
The Court’s judgment seems to me to be sound. Under our Constitution, it is for the legislature to decide such questions, and the Courts ought not to rewrite the rules. Further, denial of individual patent applications seems a poor way for society to decide questions about allegedly dangerous research and technology. Yet this very fact calls attention to a defect in the relation between science and society, insofar as that relation is largely defined by or exemplified in the contract of the patent laws. The patent laws assume that innovations proposed by inventors are, because innovative and useful to some, simply good for the community at large. Instituted well before many people recognized the communal price everyone pays for certain kinds of technological change, they reflect a once little-questioned faith in progress. Thus, as they are instruments for encouraging innovation, they are poorly designed for regulating or controlling it. It is no surprise that the mechanism for making the individual horses run turns out to be incapable of slowing them down, should one later discover that, as a team, they are in danger of running away with the rider.
And yet one wonders. The Court says, “Whether respondent’s claims are patentable may determine whether research efforts are accelerated by the hope of reward or slowed by the want of incentives, but that is all” (emphasis added). But that “all” is not nothing. True, something unpatentable could still be legal and profitable; one cannot assume that lack of a patent will prevent development. Nevertheless, the awarding of patents is a communal hand on the throttle, a gentle hand to be sure, but by no means ineffective. Further, it is, as it happens, a hand less threatening to science than the legislative power to prohibit and make illegal. Moreover, one might argue that, in the statutory criterion of utility, the Patent Office has been given the power, indeed the duty, to judge the social merits of a given invention in deciding whether to encourage its development. According to the patent laws, only useful inventions may be patented, and rightly so, if some usefulness to the public good is society’s share of the patenting contract. Though it is generally sound to believe that fueling private incentives serves the public good, allowing the market to decide “usefulness,” this is notoriously not always the case (especially if by “public good” we mean more than economic growth).
How does the Patent Office understand “the useful”? In general, its presumption being to favor development, any definable “use” is sufficient. But is this always sound? How should it judge the usefulness of a manufacture that has obvious and likely misuses and abuses, along with some clear and well-defined use? For example, how should it judge the usefulness of a perfected pleasure drug, admittedly beneficial in the treatment of depression, but almost certainly subject to widespread social or political abuse? What about improved devices for subliminal advertising? Or new and improved miniature recording and photographic devices that would no doubt increase snooping and invasions of privacy? Should the inventor of selective spermicides and his financial backers be able to decide by patenting that our society should be able to practice sex-selection of offspring?
One would think that a well-developed and nuanced doctrine of “utility” might already be embodied in court decisions involving patent claims. But a brief survey of the legal literature shows otherwise. True, precedent denies patentable “utility” to inventions whose contemplated use is for purposes deemed illegal or immoral (bogus coin detectors for slot machines, for example) or which always cause bodily harm to the user when used in the intended manner (for example, a drug effective against depression but toxic to the point of lethality). “A composition unsafe for use by reason of extreme toxicity to point of immediate death under all conditions of its sole contemplated use in treating disease of human organisms would not be ‘useful’ within the meaning of patent laws” (emphasis added) is the limited, almost grudging, concession to such considerations made by the United States Court of Customs and Patent Appeals, in a well-known case, Application of Anthony. In that case (1969), the Court in fact argued that, short of such uniform catastrophe, safety as an ingredient of utility is a relative matter, and overruled the U.S. Patent Office which had denied patent for an anti-depressant drug, Monase, a drug voluntarily taken off the commercial market by its manufacturers because of a dozen fatalities reported among its many users.
Commenting in a footnote on the more general question of social harm from inventions capable of affecting public morals, health, and order, the Court in Anthony endorses a turn-of-the-century U.S. Circuit Court opinion (Fuller v. Berger et al.): an invention is “useful within the meaning of the law, if it is used (or is designed and adapted to be used) to accomplish a good result, though in fact it is oftener used (or is as well or even better adapted to be used) to accomplish a bad one.”2
During the 1960’s this doctrine—that likely abuse does not negate use—caused the Patent Office some embarrassment, precisely as a consequence of its function as publicist. A patent had been earlier awarded for LSD, shortly before its hallucinogenic properties were known. When the drug found its way into street use, the Patent Office helped a whole generation learn how to manufacture it, being obliged to divulge the details of its chemical synthesis to anyone who requested them. The Patent Office did so until the supply of printed matter about LSD was exhausted.
Perhaps such precedents reflect our long-standing and naive belief in the beneficence, or at least the innocence, of all innovation. Would a similar court today allow a patent for Monase, for the Colt revolver, or for LSD? Perhaps the future will bring us a more complex and refined doctrine of utility, one willing to make balancing judgments in protecting the public’s side of the contract. But, at least for now, it seems that any licit and non-lethal “use” suffices for the statutory test of “utility,” all likely abuses notwithstanding. Under these circumstances, our second thoughts confirm our first: the Court in Chakrabarty was right in not allowing concerns about the possible dangers of genetic engineering to influence its decision.
If patent decisions do not and cannot consider these broad questions of use and possible abuse, if restriction of patents is an inappropriate mechanism for setting the pace in the realm of potentially dangerous technologies, the contract between science and society needs additional clauses. To be sure, many already exist—e.g., Regulations of the Food and Drug Administration, Guidelines for the Use of Human Subjects in Research, etc. Yet most of these regulations deal only with questions of health and safety. We have few means of assessing and regulating the massive consequences of new technologies for our mores, institutions, and ways of life. With the vast powers, now being accumulated, that would bring the mastery of nature to bear on human nature itself, some have begun to wonder whether the simply permissive contract between innovators and society needs to be renegotiated.
Such a response seems to me excessive. We have, and will continue to have, a commitment to scientific and technological progress. We have reason to expect that the social and political results of such progress will continue to be largely beneficial, and that the union of science and politics cemented by the patent laws will continue to serve us well. It would be foolish to dismantle our instruments of progress just because they require some additional devices and mechanisms. It would be foolish to shackle our accelerator just because it does not function as a brake. The difficult question—one which we have only begun to face—is what kinds of political arrangements and institutions are best suited to reviewing the direction and pace of certain “dangerous” developments and to applying the brakes, if necessary. One thing seems clear: the responsibility lies with the legislature. Courts may raise questions about the need for brakes, but it must be Congress that applies them. How to do so is, of course, the difficult question. The task of inventing suitable braking mechanisms will require even more ingenuity than the invention of the patent laws. We are all aware of the serious risks and costs of governmental regulation. Yet unless some means of control are found for those technologies reasonably regarded as potentially dangerous to the public interest— and, for the long run, who can be certain about genetic engineering?—the motives of gain, when added to ingenuity and stimulated by patent protection, are likely to subvert the common good. With big money fanning the flames—consider the difficulties in regulating the tobacco or automobile industries—the fire of innovation could be out of control before anyone gets warm enough to worry.
We have argued that the job of brakeman does not belong to the Courts. But it does not follow that the Courts should be free to remove or revise brakes applied by the legislature. The Court should be neither the partisan nor the opponent of progress; it is, instead, the guardian of law and, implicitly, a teacher of law-abidingness. The Court in Chakrabarty rightly resisted encroaching upon the legislative domain in refusing to become society’s arbiter regarding genetic engineering; but how well did it discharge its own task of guarding the law? An examination of the decision reveals that the Court showed itself partial to progress, with the so-called conservative members leading the way.
The Court was asked to decide not whether living organisms ought to be, but only whether they are patentable matter, as this is defined by statute. The relevant portion of the patent law (35 United States Code §101) provides:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
To decide affirmatively, the Court majority had to construe both the novel microorganism and the operative clause in the statute, which defines patentable matter as “any new and useful process, machine, manufacture, or composition of matter,” such that a living bacterium could be understood to be either a “manufacture” or a “composition of matter” or both; the Court minority argued that neither “manufacture” nor “composition of matter” was intended by Congress to encompass living organisms. Though the majority opinion does not directly argue that the microorganism in question is, say, a composition—i.e., a putting together—of matter, it treats its aliveness as irrelevant to its patentability. It ignores altogether the nature of the object, arguing: “In choosing such expansive terms as ‘manufacture’ and ‘composition of matter,’ modified by the comprehensive ‘any,’ Congress plainly contemplated that the patent laws would be given wide scope.” Finally, the Court argues that novelty, utility, and the fact that Chakrabarty’s discovery “is not nature’s handiwork but his own” render the bacterium patentable subject matter and Chakrabarty and General Electric the proud owners of “his” new species.
I happen to think the Court opinion mistaken in its reading of the statute. The terms “manufacture” and “composition of matter” go back to Jefferson’s 1793 patent law, and Congress has retained them without change in all subsequent revisions. (In another category of patentable matter, when Congress became dissatisfied with Jefferson’s concepts, it replaced his term: the present “process” is a replacement for Jefferson’s “art.”) Did Jefferson regard a living organism as a mere “composition of matter”? Certainly, in the ordinary sense of these terms, no one should. The majority goes too far in extrapolating from its correct belief that Congress “contemplated that the patent laws would be given wide scope.” It sustains the opinion that Congress intends statutory subject matter to “include anything under the sun made by man.” But if so, why did Congress in fact make and preserve categorical distinctions among the kinds of patentable man-made things—“processes, machines, manufactures, and compositions of matter”—distinctions that would be unnecessary if “anything under the sun,” so long as of artificial origin, were the sufficient mark of patentable matter—of course, along with novelty, utility, and non-obviousness? And why, the minority rightly asks, would Congress enact separate plant patent laws (in 1930 and again in 1970) to permit patenting of new plant varieties, if Congress understood “manufacture” and “composition of matter” as broadly as the Court majority now claims? Indeed, as the minority again points out, in the 1970 Plant Variety Protection Act Congress had specifically excluded bacteria from patentability under the Act: “Congress has included bacteria within the focus of its legislative concern, but not within the scope of patent protection. . . . Congress, assuming that animate objects as to which it had not specifically legislated could not be patented, excluded bacteria from the set of patentable organisms.”
The Court majority ignored these specific facts about the written statutes. It took its stand instead on what it calls the “broad general language” of the patent laws and on its own construction of the legislative intent: “The subject-matter provisions of the patent law have been cast in broad terms to fulfill the constitutional and statutory goal of promoting ‘the Progress of Science and useful Arts’ with all that means for the social and economic benefits envisioned by Jefferson.” It is an insult to Jefferson to suggest that his friendship for progress made him imprecise and vague as a legislator. He said what he meant and he meant what he said, always careful about his choice of words. Courts would do less mischief if they treated all law and legislatures as if they meant what they in fact explicitly said. The present Court’s love of innovation extends to its reading of law. We must wonder whether such “progressive” jurisprudence is not too high a price for progress.
If patenting and the patent laws do not always serve the public or political good, are they simply good for science? As we shall see, this is best understood as a special case of the question, Is practice always good for theory?, and it ultimately invites us to reconsider the purposes of science and of thought more generally. But nearer at hand are questions about science and money.
The Chakrabarty decision has prompted discussion about possible corrupting tendencies of the profit motive, not so much for the society at large, but, curiously, for the present practice of biological science, especially in universities. For roughly a quarter-century, biomedical research has flourished, largely funded by the federal government and philanthropic foundations, much of it done in universities. Though ultimately interested in the practical benefits, the government, albeit not without frequent prodding by basic scientists, has wisely and patiently supported many outstanding minds in so-called basic research, largely without regard to its immediate utility. Progress over the past three decades is simply staggering. Though competition is keen, and there are well-known cases of secretive and even unscrupulous behavior, on the whole the field has thrived on free and cooperative exchange of information and materials, including strains of microorganisms.
Now that new discoveries and techniques in cell biology and molecular genetics have brought these fields fully into the industrial arena, many are worried that the profit motive will distort, not to say corrupt, scientific practices. Concerns are expressed for the effects on the behavior of scientists, on the balance in fields of research, and on universities. Warnings are heard about an impending restriction of the free flow of information and a rise of secretiveness, deception, and other unsavory conduct, not excluding espionage. Others are concerned that profits will dictate the direction of scientific research, deflecting the scientific mind from going where it will or should. With several universities, under threat of rising costs and dwindling financial support, already established entrepreneurs in genetic technology, and many fine scientists entering industry in a variety of capacities, often retaining their academic tenure, it is argued that such goings-on in principle violate the spirit and will in practice threaten the purpose of the university.
These are serious and complicated questions which cannot be addressed adequately here. On some matters the concerns seem exaggerated. The rise of industrial chemistry and applied physics has not in itself, it seems to me, corrupted basic research in universities, nor led to undue secrecy or unsavory practices. And in any case, such problems as appear are due more to the large amounts of money involved than to patenting (though the two are not unrelated). For the need to protect profitable discoveries through patent should lead not to secrecy, but to publication (though the anticipation of future patent application does lead many into temporary secrecy, and reports are now increasing of biologists who, looking to protect future patents, have become silent and stingy with new information and materials). Once a patent is granted, for a payment of royalties new information, materials, and techniques are potentially more widely sharable. Moreover, the disdain of many academic biologists for the practical applications of their work can only be regarded as hypocritical, especially considering the hopes of, and their promises to, their public patrons. Academic scientists have for years played upon the public’s utilitarian concerns and always promised and even emphasized the probable long-run practical benefits when seeking congressional support to satisfy their own private curiosity. Science, even university science, is, to some extent, a kept woman, and the question sometimes seems to be only who shall keep her and what is her price. Her virtue and her fruitfulness may not suffer further from wedding herself to industry.
But this is no matter for levity; the stakes are very high. There is reason to be concerned about the growth of the academic-industrial complex, but not because industry is corrupt or corrupting or because there is something reprehensible about utility or even money-making. Rather, one is concerned because one knows that universities exist not only to generate useful discoveries and because one suspects that knowledge for the sake of power and utility is not the whole truth about knowledge, that thought at its best—including scientific thought—seeks truth for its own sake. For these reasons, we can ill afford to be indifferent to the fate and character of university science and to the climate for free and fundamental thought. The remarkable record of American scientists in basic discovery in biology is a credit to public support and especially to the university setting, with its great freedom of inquiry and its relative immunity to demands for prompt success or useful results. Here, fundamental thought is frequently stimulated by the collegiality of scholars in diverse areas of inquiry, scholars who are also teachers, somehow still heirs to a great tradition that often gave more than lip-service to the disinterested pursuit of the truth. Professors are often pushed to fundamentals also by their undergraduate students, who are not yet sufficiently “educated” to know that there are some questions one should avoid asking. One wonders how theory will fare if the universities are increasingly drawn to practice. One wonders whether the search for the truth will flourish, should the universities and their scientists try to be increasingly relevant and useful.
Though largely unanticipated by the American Founders, the rise of universities and of science within them has added a new dimension to the original relation between science and politics, a dimension that acquisitive, democratic, and egalitarian regimes very much require. The point was made brilliantly by Tocqueville, in his Democracy in America, in the chapter, “Why the Americans are More Concerned with the Applications than with the Theory of Science”:
The higher sciences or the higher parts of all sciences require meditation above everything else. But nothing is less conducive to meditation than the setup of democratic society. . . . Everyone is on the move, some in quest of power, others of gain. In the midst of this universal tumult, this incessant conflict of jarring interests, this endless chase for wealth, where is one to find the calm for the profound researches of the intellect? How can the mind dwell on any single subject when all around is on the move and when one is himself swept and buffeted along by the whirling current which carries all before it?
Not only is meditation difficult for men in democracies, but they naturally attach little importance to it. . . . In democratic countries when almost everyone is engaged in active life, the darting speed of a quick, superficial mind is at a premium, while slow deep thought is excessively undervalued. . . .
Most of the people in these [democratic] nations are extremely eager in the pursuit of immediate material pleasures and are always discontented with the position they occupy and always free to leave it. They think about nothing but ways of changing their lot and bettering it. For people in this frame of mind every new way of getting wealth more quickly, every machine which lessens work, every means of diminishing the costs of production, every invention which makes pleasures easier or greater, seems the most magnificent accomplishment of the human mind. It is chiefly from this line of approach that democratic peoples come to study sciences, to understand them, and to value them. In aristocratic ages the chief function of science is to give pleasure to the mind, but in democratic ages to the body.
. . . It is easy to see how, in a society organized on these lines, men’s minds are unconsciously led to neglect theory and devote an unparalleled amount of energy to the applications of science. . . .
On the strength of this analysis, Tocqueville gives this advice:
If those who are called on to direct the affairs of nations in our time can clearly and in good time understand these new tendencies which will soon be irresistible, they will see that, granted enlightenment and liberty, people living in a democratic age are quite certain to bring the industrial side of science to perfection anyhow and that henceforth the whole energy of organized society should be directed to the support of higher studies and the fostering of a passion for pure science.
Nowadays the need is to keep men interested in theory. They will look after the practical side of things for themselves. So, instead of perpetually concentrating attention on the minute examination of secondary effects, it is good to distract it therefrom sometimes and lift it to the contemplation of first causes. . . .
We therefore should not console ourselves by thinking that the barbarians are still a long way off. Some people may let the torch be snatched from their hands, but others stamp it out themselves.
I do not wish to exaggerate the dangers to pure science or to universities from the new privilege to patent microorganisms, hybridomas, and products of genetic engineering. Nor, unfortunately, are universities or academic scientists today the embodiment of thoughtfulness and disinterested inquiry that Tocqueville rightly argues we so urgently need. But the climate is not being helped by the eruption among scientists and administrators of what must frankly be called greed, nor is it likely to be improved by the continuing growth of the academic-industrial complex. When the president of Harvard University devotes his entire annual address to his Board of Overseers to the theme of “technology transfer”—the translation of scientific knowledge into useful products and processes—and argues that it must become a central task of the university, one has reason to believe that big winds may soon blow the academy off its present course.
American universities are, for all their faults, precious and precarious institutions. In fact, the present balance within them between the busy and the deliberate, the clever and the wise, the useful and the true, is already tipped so far toward the former that we must be cautious about all further changes that tend to diminish the latter. It should now be evident that my concern for universities and for theory and fundamental thought goes beyond my concern for so-called pure science. The earlier discussion should have made clear the importance of careful and thorough thinking about the relation between science and the American polity and about the implications of our new forays into genetic engineering. Indeed, especially now, when the goal and direction of the scientific project for the mastery of nature seem less clear than ever, and when, despite this confusion about the end, the means are being amassed to affect directly and deliberately all forms of life on the planet, we stand in urgent need of the far-seeing and high-minded reflection about science, ethics, and society which the patent laws, industry, and even such fine institutions as the National Institutes of Health cannot encourage or foster.
But theory is urgent not only because basic research pays dividends in applications, nor even because we need theory to think about whither we are tending. Theory is urgent also because it is in itself elevating and liberating. Thoughtfulness, speculation, genuine inquiry beyond mere problem-solving, philosophical reflection on our condition and our place in the world, in short, liberal learning and liberal education—and not only the advancement of Baconian learning—are necessary for a truly free people. Liberty, secured by the progress of science and useful arts, would be little blessed if our minds become enslaved in and to the process of serving our bodies.
Once again, the task is to restore the balance, to give weight to the weaker side. And once again, it is difficult to see how and by whom the countervailing forces for liberal learning and philosophic reflection are to be generated and supported, especially now when economic troubles aggravate the natural tendency of modern thought to serve utility.
The task is beyond the competence both of our science, not least because of its anti-speculative self-definition, and of our law. No one would say that the practice or encouragement of philosophical reflection is the business of our courts. But, at the same time, it is sad when the Supreme Court, the closest approximation in the American polity to the rule of thoughtful reason, promulgates ill-considered opinions about weighty matters. For in justifying its decisions, the Court functions also as a teacher, helping to form what become our ruling opinions. Indeed, the opinions of the Court are often more important for what they teach than for what they decide. We take one last look at the Chakrabarty case, with a view to the Court as teacher.
What has the Chakrabarty decision accomplished? A rather modest gain for Chakrabarty, a rather sizable boost for the burgeoning hybridoma and genetic-technology industry, but—by means of negative example—a most important lesson, if only we can learn it, about how close we have come in our thinking, if not yet in our practice, to overstepping the sensible limits of the project for mastery and possession of nature. This project makes sense only if we fully understand and accept the limited meanings of “mastery” and “possession” and only if we appreciate the nature of living nature and our place within it. On these deep matters, the Court was here a teacher of shallowness.
Consider first the implicit teaching of our wise men, that a living organism is no more than a composition of matter, no different from the latest perfume or insecticide. What about other living organisms—goldfish, bald eagles, horses? What about human beings? Just compositions of matter? Here arise deep philosophical questions to which the Court has given little thought; but in its eagerness to serve innovation, it has, perhaps unwittingly, become the teacher of philosophical materialism—the view that all forms are but accidents of underlying matter, that matter is what truly is— and therewith, the teacher also of the homogeneity of the given world, and, at least in principle, of the absence of any special dignity in all of living nature, our own included.
A similar teaching is also implicit in the enlargement of the sphere of what may be owned and possessed. By the arguments of the Court, it now seems that anything under the sun made of tangible stuff falls under “composition of matter,” and is therefore patentable, so long as its origin is in human art. Nothing in the Court’s opinion would permit one to argue that the “inventor” of the mule, were the mule to be a new invention, could not claim patentability. If the Chinese succeed in their present attempts with artificial insemination to cross-breed a human being with a chimpanzee, producing the novel and useful “humanzee,” it would be arguably patentable matter—if the Court sticks to its interpretation and Congress does not act. These examples may be farfetched but they serve to illustrate the point: there is something obviously and immediately disquieting about the human ownership of an entire living species, even one brought into being with the partial aid of art.
This bizarre new prospect, that one man could own—albeit for a limited time—an entire species, does indeed invite us to rethink the reasons why we permit ownership of any animals. There is a sense in which the former is but the logical extension of the latter, both instances of the possession and exploitation of living nature for human needs and wants, and this logical extension as limiting case might in fact illuminate problematic aspects of our age-old and familiar practice of domesticating plants and animals. Still, there are significant differences which, though they do not fully explain our repugnance at the notion of owning a species, suggest that our disquiet is not due just to the novelty and audacity of the idea.
If usefulness justifies ownership, it also defines its justifiable limits. Ownership of animals, even of large herds, presupposes the usefulness of each animal to the owner. Even when animals are kept for their beauty or companionship, possession is reasonable only on a human scale, that is, on a scale that permits individual appreciation or relation. We do not endorse possession for the sake of possession: the thought of a man buying up and collecting all the world’s camels or giraffes or horses is repulsive, though nothing in the law prevents it. To own more of living nature than what one needs for one’s own life and livelihood is hard to justify. Even harder is it to justify such monopoly when the sole purpose is to exclude others from similar benefits.
Ownership also carries with it responsibility, not only for the living beings but also to other human beings for what the animals inadvertently do. Indeed, living things, unlike true artifacts, have a life of their own and ways that we cannot simply predict or control:
And if one man’s ox hurt another’s, so that it dieth; then they shall sell the live ox, and divide the price of it; and the dead also they shall divide. Or if it be known that the ox was wont to gore in time past, and its owner hath not kept it in; he shall surely pay ox for ox, and the dead beast shall be his own. (Exodus 21:35-36)
Can one exercise responsibility for an entire species, especially one that reproduces prodigiously and is hard to confine? If one of Chakrabarty’s bacteria escapes from his laboratory, can he be held responsible for the mischief it causes? If Chakrabarty’s bacteria find their way into an oil well or an oil-storage tank, shall he pay drop for drop? For they were wont to gore in time past and the owner hath not kept them in. And (while thinking about fugitive bacteria) if one of Chakrabarty’s technicians going on vacation inadvertently carries—on his skin or clothing or in his digestive tract—one of the microbes from its laboratory confinement in Illinois to freedom in Missouri, where it becomes fruitful and multiplies, must all the billions of progeny be returned to Illinois? Will the Supreme Court, in upholding Chakrabarty’s patent claims of ownership, write a new Dred Scott decision?
Be this as it may, the implicit teaching about ownership of life in the present Supreme Court decision is indeed problematic. It is one thing to own a mule; it is another to own mule.3 Admittedly, bacteria are far away from mules. But the principles invoked, the reasoning, and the stance toward nature go all the way to mules, and beyond.
What is the principled limit to this beginning extension of the domain of private ownership and dominion over living nature? Is it not clear, if life is a continuum, that there are no visible or clear limits, once we admit living species under the principle of ownership? The principle used in Chakrabarty says that there is nothing in the nature of a being, no, not even in the human patenter himself, that makes him immune to being patented: not what he is, but only the “accident” of his non-man-made origin renders man himself a non-patentable organism. If a genetically engineered organism may be owned because it was genetically engineered, what would we conclude about a genetically altered or engineered human being? To be sure, in general it makes sense to allow people to own what they have made, because they have artfully made it. But to respect art without respect for life is finally self-contradictory. For human art depends on the human artificer, whose inventive mind depends on his living body, not only to sustain it that he might practice its cleverness, but also because the ends of his artfulness emerge from the inner needs and aspirations of his embodied life.
Finally, the exalted and mastering status of human art claims too much and too little for itself. It claims too much because it ignores that art can only put together or alter what natural powers beyond human control will allow. In the present case, our inventor even had nature’s active assistance; for it is not strictly true, as the Court claims, that “his discovery is not nature’s handiwork, but his own.” Chakrabarty did not himself create the new bacterium. Rather, he played the matchmaker for a shotgun wedding and the selector of its progeny, while the living organisms did the work. He mixed together plasmids (carrying genes for metabolizing hydrocarbons) produced by and isolated from certain oil-degrading bacterial species and incubated them with the hardier Pseudomonas species, which bacteria all by themselves incorporated the plasmids. By selecting conditions that would support growth only of the plasmid-containing Pseudomonas hybrid, Chakrabarty obtained “his” novel strain. Though the process was—in many senses—“creative” and “his own,” the novel organism was not his creature.
Even in true compositions of matter, that is, when chemicals are placed together to produce a new mixture or compound, nature is commanded only as she is obeyed. The potentialities of given matter may be exploited, but they cannot be artfully created. The laws of nature permit prediction and control of phenomena, but they too are not of our making and cannot be transgressed. One might say, what Nature’s God keeps asunder, no man can put together. Man’s ability to change nature is, in principle and in practice, always consistent with and limited by nature’s unchanging ground.
Ironically, in its pride, human innovativeness also respects itself too little, because it lacks self-understanding. It fails to appreciate its source in the permanent power of mind, given to human beings but not of their own making. Our inventiveness is not our invention; neither are the truths it discovers.
The Court acknowledges that “Einstein could not patent his celebrated law that E=mc²; nor could Newton have patented the law of gravity.” The reason given is curious: “Such discoveries are manifestations of . . . nature, free to all men and reserved exclusively to none.” The Court fails to appreciate the deeper reason why a truth cannot be patented. Once it is published, it is sharable. To know it is to make it “your own.” But truth is “your own” in a very special way, unlike your other “possessions.” The greatest thinkers have understood that truths are neither private nor property, that they come unbidden to mind, mysteriously, and that insight is neither at one’s disposal nor of one’s own making. Homer, the greatest of the makers, assigns credit to the Muse. Finally, the claim of “intellectual property” is unfounded, even for “inventions.”
In the ever-changing being that is given to living organisms, the two poles of natural permanence—mobile matter and sensitive awareness, culminating in mind—are bound together. In human beings, living nature at last becomes conscious of itself. If we are sober in our practice and mindful in our thought, it is given to us human beings to learn our place in the natural whole and to discover something of its distinctive beauty and mysterious ground. Without such self-knowledge, the project for mastery and possession of nature is a Faustian bargain. Reacquiring a respect for our relatives, the ever-changing living forms, could regain for us a much needed recognition and appreciation of the natural and unchanging source of all change.
1 The Federalist’s explanation and defense of this provision, to which we shall refer again, comprises the following brief paragraph:
The utility of this power will scarcely be questioned. The copyright of authors has been solemnly adjudged in Great Britain to be a right of common law. The right to useful inventions seems with equal reason to belong to the inventors. The public good fully coincides in both cases with the claims of individuals. The States cannot separately make effectual provision for either of the cases, and most of them have anticipated the decision of this point by laws passed at the instance of Congress.
The fourth sentence seems to be a conclusion from the first three. Since the second and third deal with “claims of individuals,” we infer that the first considers “the public good.” It is worth noting that the extension of the common-law teaching on copyright to cover “the right to useful inventions” is treated here as an American innovation, albeit one that can be adjudged “with equal reason.”
2 The court opinion quotes at length from the still-authoritative doctrine formulated by Albert Henry Walker in his 1880s textbook of patent law; the court’s own additions to his text are noted in parentheses:
An important question, relevant to utility in this aspect, may hereafter arise and call for judicial decision. It is perhaps true, for example, that the invention of Colt’s revolver was injurious to the morals, and injurious to the health, and injurious to the good order of society. That instrument of death may have been injurious to morals, in tending to tempt and to promote the gratification of private revenge. It may have been injurious to health, in that it is very liable to accidental discharge, and thereby to cause wounds, and even homicide. It may also have been injurious to good order, especially in the newer parts of the country, because it facilitates and increases private warfare among frontiersmen. On the other hand, the revolver, by furnishing a ready means of self-defense, may sometimes have promoted morals and health and good order. By what test is utility to be determined in such cases? Is it to be done by balancing the good functions with the evil functions? Or is everything useful within the meaning of the law, if it is used (or is designed and adapted to be used) to accomplish a good result, though in fact it is oftener used (or is as well, or even better, adapted to be used) to accomplish a bad one? Or is utility negatived by the mere fact that the thing in question is sometimes injurious to morals, or to health, or to good order? The third hypothesis cannot stand, because if it could, it would be fatal to patents for steam engines, dynamos, electric railroads, and indeed many of the noblest inventions of the nineteenth century. The first hypothesis cannot stand, because if it could, it would make the validity of the patents to depend on a question of fact to which it would often be impossible to give a reliable answer. The second hypothesis is the only one which is consistent with the reason of the case, and with the practical construction which the courts have given to the statutory requirement of utility. (Emphasis added.)
Does the doctrine of utility enunciated in the emphasized passage truly serve the public interest? Are Walker's three options exhaustive? And is not even the first hypothesis, as stated, a plausible principle for at least those cases in which it would not be impossible to give a reliable answer to the balance between benefits and harms to the general welfare?
3 The argument should cause us to reconsider the wisdom of permitting ownership even of plant species, made possible by the plant patent laws of 1930 and 1970.
Must-Reads from Magazine
Can it be reversed?
Writing in these pages last year (“Illiberalism: The Worldwide Crisis,” July/August 2016), I described this surge of intemperate politics as a global phenomenon, a crisis of illiberalism stretching from France to the Philippines and from South Africa to Greece. Donald Trump and Bernie Sanders, I argued, were articulating American versions of this growing challenge to liberalism. By “liberalism,” I was referring not to the left or center-left but to the philosophy of individual rights, free enterprise, checks and balances, and cultural pluralism that forms the common ground of politics across the West.
Less a systematic ideology than a posture or sensibility, the new illiberalism nevertheless has certain core planks. Chief among these are a conspiratorial account of world events; hostility to free trade and finance capital; opposition to immigration that goes beyond reasonable restrictions and bleeds into virulent nativism; impatience with norms and procedural niceties; a tendency toward populist leader-worship; and skepticism toward international treaties and institutions, such as NATO, that provide the scaffolding for the U.S.-led postwar order.
The new illiberals, I pointed out, all tend to admire established authoritarians to varying degrees. Trump, along with France’s Marine Le Pen and many others, looks to Vladimir Putin. For Sanders, it was Hugo Chavez’s Venezuela, where, the Vermont socialist said in 2011, “the American dream is more apt to be realized.” Even so, I argued, the crisis of illiberalism traces mainly to discontents internal to liberal democracies.
Trump’s election and his first eight months in office have confirmed the thrust of my predictions, if not all of the policy details. On the policy front, the new president has proved too undisciplined, his efforts too wild and haphazard, to reorient the U.S. government away from postwar liberal order.
The courts blunted the “Muslim ban.” The Trump administration has reaffirmed Washington’s commitment to defend treaty partners in Europe and East Asia. Trumpian grumbling about allies not paying their fair share—a fair point in Europe’s case, by the way—has amounted to just that. The president did pull the U.S. out of the Trans-Pacific Partnership, but even the ultra-establishmentarian Hillary Clinton went from supporting to opposing the pact once she figured out which way the Democratic winds were blowing. The North American Free Trade Agreement, which came into being nearly a quarter-century ago, does look shaky at the moment, but there is no reason to think that it won’t survive in some modified form.
Yet on the cultural front, the crisis of illiberalism continues to rage. If anything, it has intensified, as attested by the events surrounding the protest over a Robert E. Lee statue in Charlottesville, Virginia. The president refused to condemn unequivocally white nationalists who marched with swastikas and chanted “Jews will not replace us.” Trump even suggested there were “very fine people” among them, thus winking at the so-called alt-right as he had during the campaign. In the days that followed, much of the left rallied behind so-called antifa (“anti-fascist”) militants who make no secret of their allegiance to violent totalitarian ideologies at the other end of the political spectrum.
Disorder is the new American normal, then. Questions that appeared to have been settled—about the connection between economic and political liberty, the perils of conspiracism and romantic politics, America’s unique role on the world stage, and so on—are unsettled once more. Serious people wonder out loud whether liberal democracy is worth maintaining at all, with many of them concluding that it is not. The return of ideas that for good reason were buried in the last century threatens the decent political order that has made the U.S. an exceptionally free and prosperous civilization.F or many leftists, America’s commitment to liberty and equality before the law has always masked despotism and exploitation. This view long predated Trump’s rise, and if they didn’t subscribe to it themselves, too often mainstream Democrats and progressives treated its proponents—the likes of Noam Chomsky and Howard Zinn—as beloved and respectable, if slightly eccentric, relatives.
This cynical vision of the free society (as a conspiracy against the dispossessed) was a mainstay of Cold War–era debates about the relative merits of Western democracy and Communism. Soviet apologists insisted that Communist states couldn’t be expected to uphold “merely” formal rights when they had set out to shape a whole new kind of man. That required “breaking a few eggs,” in the words of the Stalinist interrogators in Arthur Koestler’s Darkness at Noon. Anyway, what good were free speech and due process to the coal miner, when under capitalism the whole social structure was rigged against him?
That line worked for a time, until the scale of Soviet tyranny became impossible to justify by anyone but its most abject apologists. It became obvious that “bourgeois justice,” however imperfect, was infinitely preferable to the Marxist alternative. With the Communist experiment discredited, and Western workers uninterested in staging world revolution, the illiberal left began shifting instead to questions of identity. In race-gender-sexuality theory and the identitarian “subaltern,” it found potent substitutes for dialectical materialism and the proletariat. We are still living with the consequences of this shift.
Although there were superficial resemblances, this new politics of identity differed from earlier civil-rights movements. Those earlier movements had sought a place at the American table for hitherto entirely or somewhat excluded groups: blacks, women, gays, the disabled, and so on. In doing so, they didn’t seek to overturn or radically reorganize the table. Instead, they reaffirmed the American Founding (think of Martin Luther King Jr.’s constant references to the Declaration of Independence). And these movements succeeded, owing to America’s tremendous capacity for absorbing social change.
Yet for the new identitarians, as for the Marxists before them, liberal-democratic order was systematically rigged against the downtrodden—now redefined along lines of race, gender, and sexuality, with social class quietly swept under the rug. America’s strides toward racial progress, not least the election and re-election of an African-American president, were dismissed. The U.S. still deserved condemnation because it fell short of perfect inclusion, limitless autonomy, and complete equality—conditions that no free society can achieve given the root fact of human nature. The accidentals had changed from the Marxist days, in other words, but the essentials remained the same.
In one sense, though, the identitarians went further. The old Marxists still claimed to stand on objectively accessible truth. Not so their successors. Following intellectual lodestars such as the gender theorist Judith Butler, the identity left came to reject objective truth—and with it, biological sex differences, aesthetic standards in art, the possibility of universal moral precepts, and much else of the kind. All of these things, the left identitarians said, were products of repressive institutions, hierarchies, and power.
Today’s “social-justice warriors” are heirs to this sordid intellectual legacy. They claim to seek justice. But, unmoored from any moral foundations, SJW justice operates like mob justice and revolutionary terror, usually carried out online. SJWs claim to protect individual autonomy, but the obsession with group identity and power dynamics means that SJW autonomy claims must destroy the autonomy of others. Self-righteousness married to total relativism is a terrifying thing.
It isn’t enough to have legalized same-sex marriage in the U.S. via judicial fiat; the evangelical baker must be forced to bake cakes for gay weddings. It isn’t enough to have won legal protection and social acceptance for the transgendered; the Orthodox rabbi must use preferred trans pronouns on pain of criminal prosecution. Likewise, since there is no objective truth to be gained from the open exchange of ideas, any speech that causes subjective discomfort among members of marginalized groups must be suppressed, if necessary through physical violence. Campus censorship that began with speech codes and mobs that prevented conservative and pro-Israel figures from speaking has now evolved into a general right to beat anyone designated as a “fascist,” on- or off-campus.
For the illiberal left, the election of Donald Trump was indisputable proof that behind America’s liberal pieties lurks, forever, the beast of bigotry. Trump, in this view, wasn’t just an unqualified vulgarian who nevertheless won the decisive backing of voters dissatisfied with the alternative or alienated from mainstream politics. Rather, a vote for Trump constituted a declaration of war against women, immigrants, and other victims of American “structures of oppression.” There would be no attempt to persuade Trump supporters; war would be answered by war.
This isn’t liberalism. Since it can sometimes appear as an extension of traditional civil-rights activism, however, identity leftism has glommed itself onto liberalism. It is frequently impossible to tell where traditional autonomy- and equality-seeking liberalism ends and repressive identity leftism begins. Whether based on faulty thinking or out of a sense of weakness before an angry and energetic movement, liberals have too often embraced the identity left as their own. They haven’t noticed how the identitarians seek to undermine, not rectify, liberal order.
Some on the left, notably Columbia University’s Mark Lilla, are sounding the alarm and calling on Democrats to stress the common good over tribalism. Yet these are a few voices in the wilderness. Identitarians of various stripes still lord over the broad left, where it is fashionable to believe that the U.S. project is predatory and oppressive by design. If there is a viable left alternative to identity on the horizon, it is the one offered by Sanders and his “Bernie Bros”—which is to say, a reversion to the socialism and class struggle of the previous century.
Americans, it seems, will have to wait a while for reason and responsibility to return to the left.T
hen there is the illiberal fever gripping American conservatives. Liberal democracy has always had its critics on the right, particularly in Continental Europe, where statist, authoritarian, and blood-and-soil accounts of conservatism predominate. Mainstream Anglo-American conservatism took a different course. It has championed individual rights, free enterprise, and pluralism while insisting that liberty depends on public virtue and moral order, and that sometimes the claims of liberty and autonomy must give way to those of tradition, state authority, and the common good.
The whole beauty of American order lies in keeping in tension these rival forces that are nevertheless fundamentally at peace. The Founders didn’t adopt wholesale Enlightenment liberalism; rather, they tempered its precepts about universal rights with the teachings of biblical religion as well as Roman political theory. The Constitution drew from all three wellsprings. The product was a whole, and it is a pointless and ahistorical exercise to elevate any one source above the others.
American conservatism and liberalism, then, are in fact branches of each other, the one (conservatism) invoking tradition and virtue to defend and, when necessary, discipline the regime of liberty; the other (liberalism) guaranteeing the open space in which churches, volunteer organizations, philanthropic activity, and other sources of tradition and civic virtue flourish, in freedom, rather than through state establishment or patronage.
One result has been long-term political stability, a blessing that Americans take for granted. Another has been the transformation of liberalism into the lingua franca of all politics, not just at home but across a world that, since 1945, has increasingly reflected U.S. preferences. The great French classical liberal Raymond Aron noted in 1955 that the “essentials of liberalism—the respect for individual liberty and moderate government—are no longer the property of a single party: they have become the property of all.” As Aron archly pointed out, even liberalism’s enemies tend to frame their objections using the rights-based talk associated with liberalism.
Under Trump, however, some in the party of the right have abdicated their responsibility to liberal democracy as a whole. They have reduced themselves to the lowest sophistry in defense of the New Yorker’s inanities and daily assaults on presidential norms. Beginning when Trump clinched the GOP nomination last year, a great deal of conservative “thinking” has amounted to: You did X to us, now enjoy it as we dish it back to you and then some. Entire websites and some of the biggest stars in right-wing punditry are singularly devoted to making this rather base point. If Trump is undermining this or that aspect of liberal order that was once cherished by conservatives, so be it; that 63 million Americans supported him and that the president “drives the left crazy”—these are good enough reasons to go along.
Some of this is partisan jousting that occurs with every administration. But when it comes to Trump’s most egregious statements and conduct—such as his repeated assertions that the U.S. and Putin’s thugocracy are moral equals—the apologetics are positively obscene. Enough pooh-poohing, whataboutery, and misdirection of this kind, and there will be no conservative principle left standing.
More perniciously, as once-defeated illiberal philosophies have returned with a vengeance to the left, so have their reactionary analogues to the right. The two illiberalisms enjoy a remarkable complementarity and even cross-pollinate each other. This has developed to the point where it is sometimes hard to distinguish Tucker Carlson from Chomsky, Laura Ingraham from Julian Assange, the Claremont Review from New Left Review, and so on.
Two slanders against liberalism in particular seem to be gathering strength on the thinking right. The first is the tendency to frame elements of liberal democracy, especially free trade, as a conspiracy hatched by capitalists, the managerial class, and others with soft hands against American workers. One needn’t renounce liberal democracy as a whole to believe this, though believers often go the whole hog. The second idea is that liberalism itself was another form of totalitarianism all along and, therefore, that no amount of conservative course correction can set right what is wrong with the system.
These two theses together represent a dismaying ideological turn on the right. The first—the account of global capitalism as an imposition of power over the powerless—has gained currency in the pages of American Affairs, the new journal of Trumpian thought, where class struggle is a constant theme. Other conservatives, who were always skeptical of free enterprise and U.S.-led world order, such as the Weekly Standard’s Christopher Caldwell, are also publishing similar ideas to a wider reception than perhaps greeted them in the past.
In a March 2017 essay in the Claremont Review of Books, for example, Caldwell flatly described globalization as a “con game.” The perpetrators, he argued, are “unscrupulous actors who have broken promises and seized a good deal of hard-won public property.” These included administrations of both parties that pursued trade liberalization over decades, people who live in cities and therefore benefit from the knowledge-based economy, American firms, and really anyone who has ever thought to capitalize on global supply chains to boost competitiveness—globalists, in a word.
By shipping jobs and manufacturing processes overseas, Caldwell contended, these miscreants had stolen not just material things like taxpayer-funded research but also concepts like “economies of scale” (you didn’t build that!). Thus, globalization in the West differed “in degree but not in kind from the contemporaneous Eastern Bloc looting of state assets.”
That comparison with predatory post-Communist privatization is a sure sign of ideological overheating. It is somewhat like saying that a consumer bank’s lending to home buyers differs in degree but not in kind from a loan shark’s racket in a housing project. Well, yes, in the sense that the underlying activity—moneylending, the purchase of assets—is the same in both cases. But the context makes all the difference: The globalization that began after World War II and accelerated in the ’90s took place within a rules-based system, which duly elected or appointed policymakers in Western democracies designed in good faith and for a whole host of legitimate strategic and economic reasons.
These policymakers knew that globalization was as old as civilization itself. It would take place anyway, and the only question was whether it would be rules-based and efficient or the kind of globalization that would be driven by great-power rivalry and therefore prone to protectionist trade wars. And they were right. What today’s anti-trade types won’t admit is that defeating the Trans-Pacific Partnership and a proposed U.S.-European trade pact known as TTIP won’t end globalization as such; instead, it will cede the game to other powers that are less concerned about rules and fair play.
The postwar globalizers may have gone too far (or not far enough!). They certainly didn’t give sufficient thought to the losers in the system, or how to deal with the de-industrialization that would follow when information became supremely mobile and wages in the West remained too high relative to skills and productivity gains in the developing world. They muddled and compromised their way through these questions, as all policymakers in the real world do.
The point is that these leaders—the likes of FDR, Churchill, JFK, Ronald Reagan, Margaret Thatcher, and, yes, Bill Clinton—acted neither with malice aforethought nor anti-democratically. It isn’t true, contra Caldwell, that free trade necessarily requires “veto-proof and non-consultative” politics. The U.S., Britain, and other members of what used to be called the Free World have respected popular sovereignty (as understood at the time) for as long as they have been trading nations. Put another way, you were far more likely to enjoy political freedom if you were a citizen of one of these states than of countries that opposed economic liberalism in the 20th century. That remains true today. These distinctions matter.
Caldwell and like-minded writers of the right, who tend to dwell on liberal democracies’ crimes, are prepared to tolerate far worse if it is committed in the name of defeating “globalism.” Hence the speech on Putin that Caldwell delivered this spring at a Hillsdale College gathering in Phoenix. Promising not to “talk about what to think about Putin,” he proceeded to praise the Russian strongman as the “preeminent statesman of our time” (alongside Turkish strongman Recep Tayyip Erdogan). Putin, Caldwell said, “has become a symbol of national self-determination.”
Then Caldwell made a remark that illuminates the link between the illiberalisms of yesterday and today. Putin is to “populist conservatives,” he declared, what Castro once was to progressives. “You didn’t have to be a Communist to appreciate the way Castro, whatever his excesses, was carving out a space of autonomy for his country.”
Whatever his excesses, indeed.T
he other big idea is that today’s liberal crises aren’t a bug but a core feature of liberalism. This line of thinking is particularly prevalent among some Catholic traditionalists and other orthodox Christians (both small- and capital-“o”). The common denominator, it seems to me, is having grown up as a serious believer at a time when many liberals—to their shame—have declared war on faith generally and social conservatism in particular.
The argument essentially is this:
We (social conservatives, traditionalists) saw the threat from liberalism coming. With its claims about abstract rights and universal reason, classical liberalism had always posed a danger to the Church and to people of God. We remembered what those fired up by the new ideas did to our nuns and altars in France. Still we made peace with American liberal order, because we were told that the Founders had “built on low but solid ground,” to borrow Leo Strauss’s famous formulation, or that they had “built better than they knew,” as American Catholic hierarchs in the 19th century put it.
Maybe these promises held good for a couple of centuries, the argument continues, but they no longer do. Witness the second sexual revolution under way today. The revolutionaries are plainly telling us that we must either conform our beliefs to Herod’s ways or be driven from the democratic public square. Can it still be said that the Founding rested on solid ground? Did the Founders really build better than they knew? Or is what is passing now precisely what they intended, the rotten fruit of the Enlightenment universalism that they planted in the Constitution? We don’t love Trump (or Putin, Hungary’s Viktor Orbán, etc.), but perhaps he can counter the pincer movement of sexual and economic liberalism, and restore a measure of solidarity and commitment to the Western project.
The most pessimistic of these illiberal critics go so far as to argue that liberalism isn’t all that different from Communism, that both are totalitarian children of the Enlightenment. One such critic, Harvard Law School’s Adrian Vermeule, summed up this position in a January essay in First Things magazine:
The stock distinction between the Enlightenment’s twins—communism is violently coercive while liberalism allows freedom of thought—is glib. Illiberal citizens, trapped [under liberalism] without exit papers, suffer a narrowing sphere of permitted action and speech, shrinking prospects, and increasing pressure from regulators, employers, and acquaintances, and even from friends and family. Liberal society celebrates toleration, diversity, and free inquiry, but in practice it features a spreading social, cultural, and ideological conformism.1
I share Vermeule’s despair and that of many other conservative-Christian friends, because there have been genuinely alarming encroachments against conscience, religious freedom, and the dignity of life in Western liberal democracies in recent years. Even so, despair is an unhelpful companion to sober political thought, and the case for plunging into political illiberalism is weak, even on social-conservative grounds.
Here again what commends liberalism is historical experience, not abstract theory. Simply put, in the real-world experience of the 20th century, the Church, tradition, and religious minorities fared far better under liberal-democratic regimes than they did under illiberal alternatives. Are coercion and conformity targeting people of faith under liberalism? To be sure. But these don’t take the form of the gulag or the concentration camp or the soccer stadium–cum-killing field. Catholic political practice knows well how to draw such moral distinctions between regimes: Pope John Paul II befriended Reagan. If liberal democracy and Communism were indeed “twins” whose distinctions are “glib,” why did he do so?
And as Pascal Bruckner wrote in his essay “The Tyranny of Guilt,” if liberal democracy does trap or jail you (politically speaking), it also invariably slips the key under your cell door. The Swedish midwives driven out of the profession over their pro-life views can take their story to the media. The Down syndrome advocacy outfit whose anti-eugenic advertising was censored in France can sue in national and then international courts. The Little Sisters of the Poor can appeal to the Supreme Court for a conscience exemption to Obamacare’s contraceptives mandate. And so on.
Conversely, once you go illiberal, you don’t just rid yourself of the NGOs and doctrinaire bureaucrats bent on forcing priests to perform gay marriages; you also lose the legal guarantees that protect the Church, however imperfectly, against capricious rulers and popular majorities. And if public opinion in the West is turning increasingly secular, indeed anti-Christian, as social conservatives complain and surveys seem to confirm, is it really a good idea to militate in favor of a more illiberal order rather than defend tooth and nail liberal principles of freedom of conscience? For tomorrow, the state might fall into Elizabeth Warren’s hands.
Nor, finally, is political liberalism alone to blame for the Church’s retreating on various fronts. There have been plenty of wounds inflicted by churchmen and laypeople, who believed that they could best serve the faith by conforming its liturgy, moral teaching, and public presence to liberal order. But political liberalism didn’t compel these changes, at least not directly. In the space opened up by liberalism, and amid the kaleidoscopic lifestyles that left millions of people feeling empty and confused, it was perfectly possible to propose tradition as an alternative. It is still possible to do so.N one of this is to excuse the failures of liberals. Liberals and mainstream conservatives must go back to the drawing board, to figure out why it is that thoughtful people have come to conclude that their system is incompatible with democracy, nationalism, and religious faith. Traditionalists and others who see Russia’s mafia state as a defender of Christian civilization and national sovereignty have been duped, but liberals bear some blame for driving large numbers of people in the West to that conclusion.
This is a generational challenge for the liberal project. So be it. Liberal societies like America’s by nature invite such questioning. But before we abandon the 200-and-some-year-old liberal adventure, it is worth examining the ways in which today’s left-wing and right-wing critiques of it mirror bad ideas that were overcome in the previous century. The ideological ferment of the moment, after all, doesn’t relieve the illiberals of the responsibility to reckon with the lessons of the past.
1 Vermeule was reviewing The Demon in Democracy, a 2015 book by the Polish political theorist and parliamentarian Ryszard Legutko that makes the same case. Fred Siegel’s review of the English edition appeared in our June 2016 issue.
How the courts are intervening to block some of the most unjust punishments of our time
Barrett’s decision marked the 59th judicial setback for a college or university since 2013 in a due-process lawsuit brought by a student accused of sexual assault. (In four additional cases, the school settled a lawsuit before any judicial decision occurred.) This body of law serves as a towering rebuke to the Obama administration’s reinterpretation of Title IX, the 1972 law barring sex discrimination in schools that receive federal funding.
Beginning in 2011, the Education Department’s Office for Civil Rights (OCR) issued a series of “guidance” documents pressuring colleges and universities to change how they adjudicated sexual-assault cases in ways that increased the likelihood of guilty findings. Amid pressure from student and faculty activists, virtually all elite colleges and universities have gone far beyond federal mandates and have even further weakened the rights of students accused of sexual assault.
Like all extreme victims’-rights approaches, the new policies had the greatest impact on the wrongly accused. A 2016 study from UCLA public-policy professor John Villasenor used just one of the changes—schools employing the lowest standard of proof, a preponderance of the evidence—to predict that as often as 33 percent of the time, campus Title IX tribunals would return guilty findings in cases involving innocent students. Villasenor’s study could not measure the impact of other Obama-era policy demands—such as allowing accusers to appeal not-guilty findings, discouraging cross-examination of accusers, and urging schools to adjudicate claims even when a criminal inquiry found no wrongdoing.
In a September 7 address at George Mason University, Education Secretary Betsy DeVos stated that “no student should be forced to sue their way to due process.” But once enmeshed in the campus Title IX process, a wrongfully accused student’s best chance for justice may well be a lawsuit filed after his college incorrectly has found him guilty. (According to data from United Educators, a higher-education insurance firm, 99 percent of students accused of campus sexual assault are male.) The Foundation for Individual Rights has identified more than 180 such lawsuits filed since the 2011 policy changes. That figure, obviously, excludes students with equally strong claims whose families cannot afford to go to court. These students face life-altering consequences. As Judge T.S. Ellis III noted in a 2016 decision, it is “so clear as to be almost a truism” that a student will lose future educational and employment opportunities if his college wrongly brands him a rapist.
“It is not the role of the federal courts to set aside decisions of school administrators which the court may view as lacking in wisdom or compassion.” So wrote the Supreme Court in a 1975 case, Wood v. Strickland. While the Supreme Court has made clear that colleges must provide accused students with some rights, especially when dealing with nonacademic disciplinary questions, courts generally have not been eager to intervene in such matters.
This is what makes the developments of the last four years all the more remarkable. The process began in May 2013, in a ruling against St. Joseph’s University, and has lately accelerated (15 rulings in 2016 and 21 thus far in 2017). Of the 40 setbacks for colleges in federal court, 14 came from judges nominated by Barack Obama, 11 from Clinton nominees, and nine from selections of George W. Bush. Brown University has been on the losing side of three decisions; Duke, Cornell, and Penn State, two each.
Court decisions since the expansion of Title IX activism have not all gone in one direction. In 36 of the due-process lawsuits, courts have permitted the university to maintain its guilty finding. (In four other cases, the university settled despite prevailing at a preliminary stage.) But even in these cases, some courts have expressed discomfort with campus procedures. One federal judge was “greatly troubled” that Georgia Tech veered “very far from an ideal representation of due process” when its investigator “did not pursue any line of investigation that may have cast doubt on [the accuser’s] account of the incident.” Another went out of his way to say that he considered it plausible that a former Case Western Reserve University student was actually “innocent of the charges levied against him.” And one state appellate judge opened oral argument by bluntly informing the University of California’s lawyer, “When I . . . finished reading all the briefs in this case, my comment was, ‘Where’s the kangaroo?’”
Judges have, obviously, raised more questions in cases where the college has found itself on the losing side. Those lawsuits have featured three common areas of concern: bias in the investigation, resulting in a college decision based on incomplete evidence; procedures that prevented the accused student from challenging his accuser’s credibility, chiefly through cross-examination; and schools utilizing a process that seemed designed to produce a predetermined result, in response to real or perceived pressure from the federal government.C olleges and universities have proven remarkably willing to act on incomplete information when adjudicating sexual-assault cases. In December 2013, for example, Amherst College expelled a student for sexual assault despite text messages (which the college investigator failed to discover) indicating that the accuser had consented to sexual contact. The accuser’s own testimony also indicated that she might have committed sexual assault, by initiating sexual contact with a student who Amherst conceded was experiencing an alcoholic blackout. When the accused student sued Amherst, the college said its failure to uncover the text messages had been irrelevant because its investigator had only sought texts that portrayed the incident as nonconsensual. In February, Judge Mark Mastroianni allowed the accused student’s lawsuit to proceed, commenting that the texts could raise “additional questions about the credibility of the version of events [the accuser] gave during the disciplinary proceeding.” The two sides settled in late July.
Amherst was hardly alone in its eagerness to avoid evidence that might undermine the accuser’s version of events; the same happened at Penn State, St. Joseph’s, Duke, Ohio State, Occidental, Lynn, Marlboro, Michigan, and Notre Dame.
Even in cases with a more complete evidentiary base, accused students have often been blocked from presenting a full-fledged defense. As part of its reinterpretation of Title IX, the Obama administration sought to shield campus accusers from cross-examination. OCR’s 2011 guidance “strongly” discouraged direct cross-examination of accusers by the accused student—a critical restriction, since most university procedures require the accused student, rather than his lawyer, to defend himself in the hearing. OCR’s 2014 guidance suggested that this type of cross-examination in and of itself could create a hostile environment. The Obama administration even spoke favorably about the growing trend among schools to abolish hearings altogether and allow a single official to serve as investigator, prosecutor, judge, and jury in sexual-assault cases.
The Supreme Court has never held that campus disciplinary hearings must permit cross-examination. Nonetheless, the recent attack on the practice has left schools struggling to explain why they would not want to utilize what the Court has described as the “greatest legal engine ever invented for the discovery of truth.” In June 2016, the University of Cincinnati found a student guilty of sexual assault after a hearing at which neither his accuser nor the university’s Title IX investigator appeared. In an unintentionally comical line, the hearing chair noted the absent witnesses before asking the accused student if he had “any questions of the Title IX report.” The student, befuddled, replied, “Well, since she’s not here, I can’t really ask anything of the report.” (The panel chair did not indicate how the “report” could have answered any questions.) Cincinnati found the student guilty anyway.1
Limitations on full cross-examination also played a role in judicial setbacks for Middlebury, George Mason, James Madison, Ohio State, Occidental, Penn State, Brandeis, Amherst, Notre Dame, and Skidmore.
Finally, since 2011, more than 300 students have filed Title IX complaints with the Office for Civil Rights, alleging mishandling of their sexual-assault allegation by their college. OCR’s leadership seemed to welcome the complaints, which allowed Obama officials not only to inspect the individual case but all sexual-assault claims at the school in question over a three-year period. Northwestern University professor Laura Kipnis has estimated that during the Obama years, colleges spent between $60 million and $100 million on these investigations. If OCR finds a Title IX violation, that might lead to a loss of federal funding. This has led Harvard Law professors Jeannie Suk Gersen, Janet Halley, Elizabeth Bartholet, and Nancy Gertner to observe in a white paper submitted to OCR that universities have “strong incentives to ensure the school stays in OCR’s good graces.”
One of the earliest lawsuits after the Obama administration’s policy shift, involving former Xavier University basketball player Dez Wells, demonstrated how an OCR investigation can affect the fairness of a university inquiry. The accuser’s complaint had been referred both to Xavier’s Title IX office and the Cincinnati police. The police concluded that the allegation was meritless; Hamilton County Prosecuting Attorney Joseph Deters later said he considered charging the accuser with filing a false police report.
Deters asked Xavier to delay its proceedings until his office completed its investigation. School officials refused. Instead, three weeks after the initial allegation, the university expelled Wells. He sued and speculated that Xavier’s haste came not from a quest for justice but instead from a desire to avoid difficulties in finalizing an agreement with OCR to resolve an unrelated complaint filed by two female Xavier students. (In recent years, OCR has entered into dozens of similar resolution agreements, which bind universities to policy changes in exchange for removing the threat of losing federal funds.) In a July 2014 ruling, Judge Arthur Spiegel observed that Xavier’s disciplinary tribunal, however “well-equipped to adjudicate questions of cheating, may have been in over its head with relation to an alleged false accusation of sexual assault.” Soon thereafter, the two sides settled; Wells transferred to the University of Maryland.
Ohio State, Occidental, Cornell, Middlebury, Appalachian State, USC, and Columbia have all found themselves on the losing side of court decisions arising from cases that originated during a time in which OCR was investigating or threatening to investigate the school. (In the Ohio State case, one university staffer testified that she didn’t know whether she had an obligation to correct a false statement by an accuser to a disciplinary panel.) Pressure from OCR can be indirect, as well. The Obama administration interpreted federal law as requiring all universities to have at least one Title IX coordinator; larger universities now employ dozens of Title IX personnel who, as the Harvard Law professors explained, “have reason to fear for their jobs if they hold a student not responsible or if they assign a rehabilitative or restorative rather than a harshly punitive sanction.”A mid the wave of judicial setbacks for universities, two decisions in particular stand out. Easily the most powerful opinion in a campus due-process case came in March 2016 from Judge F. Dennis Saylor. While the stereotypical campus sexual-assault allegation results from an alcohol-filled, one-night encounter between a male and a female student, a case at Brandeis University involved a long-term monogamous relationship between two male students. A bad breakup led to the accusing student’s filing the following complaint, against which his former boyfriend was expected to provide a defense: “Starting in the month of September, 2011, the Alleged violator of Policy had numerous inappropriate, nonconsensual sexual interactions with me. These interactions continued to occur until around May 2013.”
To adjudicate, Brandeis hired a former OCR staffer, who interviewed the two students and a few of their friends. Since the university did not hold a hearing, the investigator decided guilt or innocence on her own. She treated each incident as if the two men were strangers to each other, which allowed her to determine that sexual “violence” had occurred in the relationship. The accused student, she found, sometimes looked at his boyfriend in the nude without permission and sometimes awakened his boyfriend with kisses when the boyfriend wanted to stay asleep. The university’s procedures prevented the student from seeing the investigator’s report, with its absurdly broad definition of sexual misconduct, in preparing his appeal. “In the context of American legal culture,” Boston Globe columnist Dante Ramos later argued, denying this type of information “is crazy.” “Standard rules of evidence and other protections for the accused keep things like false accusations or mistakes by authorities from hurting innocent people.” When the university appeal was denied, the student sued.
At an October 2015 hearing to consider the university’s motion to dismiss, Saylor seemed flabbergasted at the unfairness of the school’s approach. “I don’t understand,” he observed, “how a university, much less one named after Louis Brandeis, could possibly think that that was a fair procedure to not allow the accused to see the accusation.” Brandeis’s lawyer cited pressure to conform to OCR guidance, but the judge deemed the university’s procedures “closer to Salem 1692 than Boston, 2015.”
The following March, Saylor issued an 89-page opinion that has been cited in virtually every lawsuit subsequently filed by an accused student. “Whether someone is a ‘victim’ is a conclusion to be reached at the end of a fair process, not an assumption to be made at the beginning,” Saylor wrote. “If a college student is to be marked for life as a sexual predator, it is reasonable to require that he be provided a fair opportunity to defend himself and an impartial arbiter to make that decision.” Saylor concluded that Brandeis forced the accused student “to defend himself in what was essentially an inquisitorial proceeding that plausibly failed to provide him with a fair and reasonable opportunity to be informed of the charges and to present an adequate defense.”
The student, vindicated by the ruling’s sweeping nature, then withdrew his lawsuit. He currently is pursuing a Title IX complaint against Brandeis with OCR.
Four months later, a three-judge panel of the Second Circuit Court of Appeals produced an opinion that lacked Saylor’s rhetorical flourish or his understanding of the basic unfairness of the campus Title IX process. But by creating a more relaxed standard for accused students to make federal Title IX claims, the Second Circuit’s decision in Doe v. Columbia carried considerable weight.
Two Columbia students who had been drinking had a brief sexual encounter at a party. More than four months later, the accuser claimed she was too intoxicated to have consented. Her allegation came in an atmosphere of campus outrage about the university’s allegedly insufficient toughness on sexual assault. In this setting, the accused student found Columbia’s Title IX investigator uninterested in hearing his side of the story. He cited witnesses who would corroborate his belief that the accuser wasn’t intoxicated; the investigator declined to speak with them. The student was found guilty, although for reasons differing from the initial claim; the Columbia panel ruled that he had “directed unreasonable pressure for sexual activity toward the [accuser] over a period of weeks,” leaving her unable to consent on the night in question. He received a three-semester suspension for this nebulous offense—which even his accuser deemed too harsh. He sued, and the case was assigned to Judge Jesse Furman.
Furman’s opinion provided a ringing victory for Columbia and the Obama-backed policies it used. As Title IX litigator Patricia Hamill later observed, Furman’s “almost impossible standard” required accused students to have inside information about the institution’s handling of other sexual-assault claims—information they could plausibly obtain only through the legal process known as discovery, which happens at a later stage of litigation—in order to survive a university’s initial motion to dismiss. Furman suggested that, to prevail, an accused student would need to show that his school treated a female student accused of sexual assault more favorably, or at least provide details about how cases against other accused students showed a pattern of bias. But federal privacy law keeps campus disciplinary hearings private, leaving most accused students with little opportunity to uncover the information before their case is dismissed.
At the same time, the opinion excused virtually any degree of unfairness by the institution. Furman reasoned that taking “allegations of rape on campus seriously and . . . treat[ing] complainants with a high degree of sensitivity” could constitute “lawful” reasons for university unfairness toward accused students. Samantha Harris of the Foundation for Individual Rights in Education detected the decision’s “immediate and nationwide impact” in several rulings against accused students. It also played the same role in university briefs that Saylor’s Brandeis opinion did in filings by accused students.
The Columbia student’s lawyer, Andrew Miltenberg, appealed Furman’s ruling to the Second Circuit. The stakes were high, since a ruling affirming the lower court’s reasoning would have all but foreclosed Title IX lawsuits by accused students in New York, Connecticut, and Vermont. But a panel of three judges, all nominated by Democratic presidents, overturned Furman’s decision. In the opinion’s crucial passage, Judge Pierre Leval held that a university “is not excused from liability for discrimination because the discriminatory motivation does not result from a discriminatory heart, but rather from a desire to avoid practical disadvantages that might result from unbiased action. A covered university that adopts, even temporarily, a policy of bias favoring one sex over the other in a disciplinary dispute, doing so in order to avoid liability or bad publicity, has practiced sex discrimination, notwithstanding that the motive for the discrimination did not come from ingrained or permanent bias against that particular sex.” Before the Columbia decision, courts almost always had rebuffed Title IX pleadings from accused students. More recently, judges have allowed Title IX claims to proceed against Amherst, Cornell, California–Santa Barbara, Drake, and Rollins.
After the Second Circuit’s decision, Columbia settled with the accused student, sparing its Title IX decision-makers from having to testify at a trial. James Madison was one of the few universities to take a different course, with disastrous results. A lawsuit from an accused student survived a motion to dismiss, but the university refused to settle, allowing the student’s lawyer to depose the three school employees who had decided his client’s fate. One unintentionally revealed that he had misapplied the university’s own definition of consent. Another cited the importance of the accuser’s slurring words on a voicemail as proof of her extreme intoxication on the night of the alleged assault. It was left to the accused student’s lawyer, at a deposition months after the decision had been made, to note that the voicemail in question actually was received on a different night. In December 2016, Judge Elizabeth Dillon, an Obama nominee, granted summary judgment to the accused student, concluding that “significant anomalies in the appeal process” violated his due-process rights under the Constitution.

Universities were on the losing side of 36 due-process rulings when Obama appointee Catherine Lhamon was presiding over the Office for Civil Rights between 2013 and 2016; no record exists of her publicly acknowledging any of them. In June 2017, however, Lhamon suddenly rejoiced that “yet another federal court” had found that students disciplined for sexual misconduct “were not denied due process.” That Fifth Circuit decision, involving two former students at the University of Houston, was an odd case for her to celebrate. The majority cabined its findings to the “unique facts” of the case—that the accused students likely would have been found guilty even under the fairest possible process. And the dissent, from Judge Edith Jones, denounced the procedures championed by Lhamon and other Obama officials as “heavily weighted in favor of finding guilt,” predicting “worse to come if appellate courts do not step in to protect students’ procedural due process right where allegations of quasi-criminal sexual misconduct arise.”
At this stage, Lhamon, who now chairs the U.S. Commission on Civil Rights, cannot be taken seriously when it comes to questions of campus due process. But other defenders of the current Title IX regime have offered more substantive commentary about the university setbacks.
Legal scholar Michelle Anderson was one of the few to even discuss the due-process decisions. “Colleges and universities do not always adjudicate allegations of sexual assault well,” she noted in a 2016 law review article defending the Obama-era policies. Anderson even conceded that some colleges had denied “accused students fairness in disciplinary adjudication.” But these students sued, “and campuses are responding—as they must—when accused students prevail. So campuses face powerful legal incentives on both sides to address campus sexual assault, and to do so fairly and impartially.”
This may be true, but Anderson does not explain why wrongly accused students should bear the financial and emotional burden of inducing their colleges to implement fair procedures. More important, scant evidence exists that colleges have responded to the court victories of wrongly accused students by creating fairer procedures. Some have even made it more difficult for wrongly accused students to sue. After losing a lawsuit in December 2014, Brown eliminated the right of students accused of sexual assault to have “every opportunity” to present evidence. That same year, an accused student showed how Swarthmore had deviated from its own procedures in his case. The college quickly settled the lawsuit—and then added a clause to its procedures immunizing it from similar claims in the future. Swarthmore currently informs accused students that “rules of evidence ordinarily found in legal proceedings shall not be applied, nor shall any deviations from any of these prescribed procedures alone invalidate a decision.”
Many lawsuits are still working their way through the judicial system; three cases are pending at federal appellate courts. In the two that address substantive matters, oral arguments seemed to reveal skepticism of the universities’ positions. On July 26, a three-judge panel of the First Circuit considered a case at Boston College, where the accused student plausibly argued that someone else had committed the sexual assault (which occurred on a poorly lit dance floor). Judges Bruce Selya and William Kayatta seemed troubled that a Boston College dean had improperly intruded on the hearing board’s deliberations. At the Sixth Circuit a few days later, Judges Richard Griffin and Amul Thapar both expressed concerns about the University of Cincinnati’s downplaying the importance of cross-examination in campus-sex adjudications. Judge Eric Clay was quieter, but he wondered about the tension between the university’s Title IX and truth-seeking obligations.
In a perfect world, academic leaders themselves would have created fairer processes without judicial intervention. But in the current campus environment, such an approach is impossible. So, at least for the short term, the courts remain the best, albeit imperfect, option for students wrongly accused of sexual assault. Meanwhile, every year, young men entrust themselves and their family’s money to institutions of higher learning that are indifferent to their rights and unconcerned with the injustices to which these students might be subjected.
1 After a district court placed that finding on hold, the university appealed to the Sixth Circuit.
Review of 'Terror in France' By Gilles Kepel
Kepel is particularly knowledgeable about the history and process of radicalization that takes place in his nation’s heavily Muslim banlieues (the depressed housing projects ringing Paris and other major cities), and Terror in France is informed by decades of fieldwork in these volatile locales. What we have been witnessing for more than a decade, Kepel argues, is the “third wave” of global jihadism, which is not so much a top-down, doctrinally inspired campaign (as were the 9/11 attacks, directed from afar by the oracular figure of Osama bin Laden) as a bottom-up insurgency with an “enclave-based ethnic-racial logic of violence” to it. Kepel traces the phenomenon back to 2005, a convulsive year that saw the second-generation descendants of France’s postcolonial Muslim immigrants confront a changing socio-political landscape.
That was the year of the greatest riots in modern French history, involving mostly young Muslim men. It was also the year that Abu Musab al-Suri, the Syrian-born Islamist then serving as al-Qaeda’s operations chief in Europe, published The Global Islamic Resistance Call. This 1,600-page manifesto combined pious imprecations against the West with do-it-yourself ingenuity, an Anarchist’s Cookbook for the Islamist set. In Kepel’s words, the manifesto preached a “jihadism of proximity,” the brand of civil war later adopted by the Islamic State. It called for ceaseless, mass-casualty attacks in Western cities—attacks which increase suspicion and regulation of Muslims and, in turn, drive those Muslims into the arms of violent extremists.
The third-generation jihad has been assisted by two phenomena: social-networking sites that easily and widely disseminate Islamist propaganda (thus increasing the rate of self-radicalization) and the so-called Arab Spring, which led to state collapse in Syria and Libya, providing “an exceptional site for military training and propaganda only a few hours’ flight from Europe, and at a very low cost.”
Kepel’s book is not just a study of the ideology and tactics of Islamists but a sociopolitical overview of how this disturbing phenomenon fits within a country on the brink. For example, Kepel finds that jihadism is emerging in conjunction with developments such as the “end of industrial society.” A downturn in work has led to an ominous situation in which a “right-wing ethnic nationalism” preying on the economically anxious has risen alongside Islamism as “parallel conduits for expressing grievances.” Filling a space left by the French Communist Party (which once brought the ethnic French working class and Arab immigrants together), these two extremes leer at each other from opposite sides of a societal chasm, signaling the potentially cataclysmic future that awaits France if both mass unemployment and Islamist terror continue undiminished.
The French economy has also had a more direct inciting effect on jihadism. Overregulated labor markets make it difficult for young Muslims to get jobs, thus exacerbating the conditions of social deprivation and exclusion that make individuals susceptible to radicalization. The inability to tackle chronic unemployment has led to widespread Muslim disillusionment with the left (a disillusionment aggravated by another, often glossed over, factor: widespread Muslim opposition to the Socialist Party’s championing of same-sex marriage). Essentially, one left-wing constituency (unions) has made the unemployment of another constituency (Muslim youth) the mechanism for maintaining its privileges.
Kepel does not, however, cite deprivation as the sole or even main contributing factor to Islamist radicalization. One Parisian banlieue that has sent more than 80 residents to fight in Syria, he notes, has “attractive new apartment buildings” built by the state and features a mosque “constructed with the backing of the Socialist mayor.” It is also the birthplace of well-known French movie stars of Arab descent, and thus hardly a place where ambition goes to die. “The Islamophobia mantra and the victim mentality it reinforces makes it possible to rationalize a total rejection of France and a commitment to jihad by making a connection between unemployment, discrimination, and French republican values,” Kepel writes. Indeed, Kepel is refreshingly derisive of the term “Islamophobia” throughout the book, excoriating Islamists and their fellow travelers for “substituting it for anti-Semitism as the West’s cardinal sin.” These are meaningful words coming from Kepel, a deeply learned scholar of Islam who harbors great respect for the faith and its adherents.
Kepel also weaves the saga of jihadism into the ongoing “kulturkampf within the French left.” Arguments about Islamist terrorism demonstrate a “divorce between a secular progressive tradition” and the children of the Muslim immigrants this tradition fought to defend. The most ironically perverse manifestation of this divorce was ISIS’s kidnapping of Didier François, co-founder of the civil-rights organization SOS Racisme. Kepel recognizes the origins of this divorce in the “red-green” alliance formed decades ago between Islamists and elements of the French intellectual left, such as Michel Foucault, a cheerleader of the Iranian revolution.
Though he offers a rigorous history and analysis of the jihadist problem, Kepel is generally at a loss for solutions. He decries a complacent French elite, with its disregard for genuine expertise (evidenced by the decline in institutional academic support for Islamicists and Arabists) and the narrow, relatively impenetrable way in which it perpetuates itself, chiefly with a single school (the École normale supérieure) that practically every French politician must attend. Despite France’s admirable republican values, this elite insularity has made the process of assimilation rather difficult. But other than wishing that the public education system become more effective and inclusive at instilling republican values, Kepel provides little in the way of suggestions as to how France might emerge from this mess. That a scholar of such erudition and humanity can do little but throw up his hands and issue a sigh of despair cannot bode well. The third-generation jihad owes as much to the political breakdown in France as it does to the meltdown in the Middle East. Defeating this two-headed beast requires a new and comprehensive playbook: the West’s answer to The Global Islamic Resistance Call. That book has yet to be written.
President Trump, in case you haven’t noticed, has a tendency to exaggerate. Nothing is “just right” or “meh” for him. Buildings, crowds, election results, and military campaigns are always outsized, gargantuan, larger, and more significant than you might otherwise assume. “People want to believe that something is the biggest and the greatest and the most spectacular,” he wrote 30 years ago in The Art of the Deal. “I call it truthful hyperbole. It’s an innocent form of exaggeration—and a very effective form of promotion.”
So effective, in fact, that the press has picked up the habit. Reporters and editors agree with the president that nothing he does is ordinary. After covering Trump for more than two years, they still can’t accept him as a run-of-the-mill politician. And while there are aspects of Donald Trump and his presidency that are, to say the least, unusual, the media seem unable to distinguish between the abnormal and significant—firing the FBI director in the midst of an investigation into one’s presidential campaign, for example—and the commonplace.
Consider the fiscal deal President Trump struck with Democratic leaders in early September.
On September 6, the president held an Oval Office meeting with Vice President Pence, Treasury Secretary Mnuchin, and congressional leaders of both parties. He had to find a way to (a) raise the debt ceiling, (b) fund the federal government, and (c) spend money on hurricane relief. The problem is that a bloc of House Republicans won’t vote for (a) unless the increase is accompanied by significant budget cuts, which interferes with (b) and (c). To raise the debt ceiling, then, requires Democratic votes. And the debt ceiling must be raised. “There is zero chance—no chance—we will not raise the debt ceiling,” Senate Majority Leader Mitch McConnell said in August.
The meeting went like this. First, House Speaker Paul Ryan asked for an 18-month increase in the debt ceiling so Republicans wouldn’t have to vote again on the matter until after the midterm elections. Democrats refused. The bargaining continued until Ryan asked for a six-month increase. The Democrats remained stubborn. So Trump, always willing to kick a can down the road, interrupted Mnuchin to offer a three-month increase, a continuing resolution that will keep the government open through December, and about $8 billion in hurricane money. The Democrats said yes.
That, anyway, is what happened. But the media are not satisfied to report what happened. They want—they need—to tell you what it means. And what does it mean? Well, they aren’t really sure. But it’s something big. It’s something spectacular. For example:
1. “Trump Bypasses Republicans to Strike Deal on Debt Limit and Harvey Aid” was the headline of a story for the New York Times by Peter Baker, Thomas Kaplan, and Michael D. Shear. “The deal to keep the government open and paying its debts until Dec. 15 represented an extraordinary public turn for the president, who has for much of his term set himself up on the right flank of the Republican Party,” their article began. Fair enough. But look at how they import speculation and opinion into the following sentence: “But it remained unclear whether Mr. Trump’s collaboration with Democrats foreshadowed a more sustained shift in strategy by a president who has presented himself as a master dealmaker or amounted to just a one-time instinctual reaction of a mercurial leader momentarily eager to poke his estranged allies.”
2. “The decision was one of the most fascinating and mysterious moves he’s made with Congress during eight months in office,” reported Jeff Zeleny, Dana Bash, Deirdre Walsh, and Jeremy Diamond for CNN. Thanks for sharing!
3. “Trump budget deal gives GOP full-blown Stockholm Syndrome,” read the headline of Tina Nguyen’s piece for Vanity Fair. “Donald Trump’s unexpected capitulation to new best buds ‘Chuck and Nancy’ has thrown the Grand Old Party into a frenzy as Republicans search for explanations—and scapegoats.”
4. “For Conservatives, Trump’s Deal with Democrats Is Nightmare Come True,” read the headline for a New York Times article by Jeremy W. Peters and Maggie Haberman. “It is the scenario that President Trump’s most conservative followers considered their worst nightmare, and on Wednesday it seemed to come true: The deal-making political novice, whose ideology and loyalty were always fungible, cut a deal with Democrats.”
5. “Trump sides with Democrats on fiscal issues, throwing Republican plans into chaos,” read the Washington Post headline the day after the deal was announced. “The president’s surprise stance upended sensitive negotiations over the debt ceiling and other crucial policy issues this fall and further imperiled his already tenuous relationships with Senate Majority Leader Mitch McConnell and House Speaker Paul Ryan.” Yes, the negotiations were upended. Then they made a deal.
6. “Although elected as a Republican last year,” wrote Peter Baker of the Times, “Mr. Trump has shown in the nearly eight months in office that he is, in many ways, the first independent to hold the presidency since the advent of the two-party system around the time of the Civil War.” The title of Baker’s news analysis: “Bound to No Party, Trump Upends 150 Years of Two-Party Rule.” One hundred and fifty years? Why not 200?
The journalistic rule of thumb used to be that an article describing a political, social, or cultural trend requires at least three examples. Not while covering Trump. If Trump does something, anything, you should feel free to inflate its importance beyond all recognition. And stuff your “reporting” with all sorts of dramatic adjectives and frightening nouns: fascinating, mysterious, unexpected, extraordinary, nightmare, chaos, frenzy, and scapegoats. It’s like a Vince Flynn thriller come to life.
The case for the significance of the budget deal would be stronger if there were a consensus about whom it helped. There isn’t one. At first the press assumed Democrats had won. “Republicans left the Oval Office Wednesday stunned,” reported Rachael Bade, Burgess Everett, and Josh Dawsey of Politico. Another trio of Politico reporters wrote, “In the aftermath, Republicans seethed privately and distanced themselves publicly from the deal.” Republicans were “stunned,” reported Kristina Peterson, Siobhan Hughes, and Louise Radnofsky of the Wall Street Journal. “Meet the swamp: Donald Trump punts September agenda to December after meeting with Congress,” read the headline of Charlie Spiering’s Breitbart story.
By the following week, though, these very outlets had decided the GOP was looking pretty good. “Trump’s deal with Democrats bolsters Ryan—for now,” read the Politico headline on September 11. “McConnell: No New Debt Ceiling Vote until ‘Well into 2018,’” reported the Washington Post. “At this point…picking a fight with Republican leaders will only help him,” wrote Gerald Seib in the Wall Street Journal. “Trump has long warned that he would work with Democrats, if necessary, to fulfill his campaign promises. And Wednesday’s deal is a sign that he intends to follow through on that threat,” wrote Breitbart’s Joel Pollak.
The sensationalism, the conflicting interpretations, and the visceral language are dizzying. We have so many reporters chasing the same story that each feels compelled to gussy up a quotidian budget negotiation until it resembles the Ribbentrop–Molotov pact, and none feel it necessary to apply to their own reporting the scrutiny and incredulity they apply to Trump. The truth is that no one knows what this agreement portends. Nor is it the job of a reporter to divine the meaning of current events like an augur of Rome. Sometimes a cigar is just a cigar. And a deal is just a deal.
Remembering something wonderful
Not surprisingly, many well-established performers were left in the lurch by the rise of the new media. Moreover, some vaudevillians who, like Fred Allen, had successfully reinvented themselves for radio were unable to make the transition to TV. But a handful of exceptionally talented performers managed to move from vaudeville to radio to TV, and none did it with more success than Jack Benny, whose feigned stinginess, scratchy violin playing, slightly effeminate demeanor, and preternaturally exact comic timing made him one of the world’s most beloved performers. After establishing himself in vaudeville, he became the star of a comedy series, The Jack Benny Program, that aired continuously, first on radio and then TV, from 1932 until 1965. Save for Bob Hope, no other comedian of his time was so popular.
With the demise of nighttime network radio as an entertainment medium, the 931 weekly episodes of The Jack Benny Program became the province of comedy obsessives—and because Benny’s TV series was filmed in black-and-white, it is no longer shown in syndication with any regularity. And while he also made Hollywood films, some of which were box-office hits, only one, Ernst Lubitsch’s To Be or Not to Be (1942), is today seen on TV other than sporadically.
Nevertheless, connoisseurs of comedy still regard Benny, who died in 1974, as a giant, and numerous books, memoirs, and articles have been published about his life and art. Most recently, Kathryn H. Fuller-Seeley, a professor at the University of Texas at Austin, has brought out Jack Benny and the Golden Age of Radio Comedy, the first book-length primary-source academic study of The Jack Benny Program and its star.1 Fuller-Seeley’s genuine appreciation for Benny’s work redeems her anachronistic insistence on viewing it through the fashionable prism of gender- and race-based theory, and her book, though sober-sided to the point of occasional starchiness, is often quite illuminating.
Most important of all, off-the-air recordings of 749 episodes of the radio version of The Jack Benny Program survive in whole or part and can easily be downloaded from the Web. As a result, it is possible for people not yet born when Benny was alive to hear for themselves why he is still remembered with admiration and affection—and why one specific aspect of his performing persona continues to fascinate close observers of the American scene.

Born Benjamin Kubelsky in Chicago in 1894, Benny was the son of Eastern European émigrés (his father was from Poland, his mother from Lithuania). He started studying violin at six and had enough talent to pursue a career in music, but his interests lay elsewhere, and by the time he was a teenager, he was working in vaudeville as a comedian who played the violin as part of his act. Over time he developed into a “monologist,” the period term for what we now call a stand-up comedian, and he began appearing in films in 1929 and on network radio three years after that.
Radio comedy, like silent film, is now an obsolete art form, but the program formats that it fostered in the ’20s and ’30s all survived into the era of TV, and some of them flourish to this day. One, episodic situation comedy, was developed in large part by Jack Benny and his collaborators. Benny and Harry Conn, his first full-time writer, turned his weekly series, which started out as a variety show, into a weekly half-hour playlet featuring a regular cast of characters augmented by guest stars. Such playlets, relying as they did on a setting that was repeated from week to week, were easier to write than the free-standing sketches favored by Allen, Hope, and other ex-vaudevillians, and by the late ’30s, the sitcom had become a staple of radio comedy.
The process, as documented by Fuller-Seeley, was a gradual one. The Jack Benny Program never broke entirely with the variety format, continuing to feature both guest stars (some of whom, like Ronald Colman, ultimately became semi-regular members of the show’s rotating ensemble of players) and songs sung by Dennis Day, a tenor who joined the cast in 1939. Nor was it the first radio situation comedy: Amos & Andy, launched in 1928, was a soap-opera-style daily serial that also featured regular characters. Nevertheless, it was Benny who perfected the form, and his own character would become the prototype for countless later sitcom stars.
The show’s pivotal innovation was to turn Benny and the other cast members into fictionalized versions of themselves—they were the stars of a radio show called “The Jack Benny Program.” Sadye Marks, Benny’s wife, played Mary Livingstone, his sharp-tongued secretary, with three other characters added as the self-reflexive concept took shape. Don Wilson, the stout, genial announcer, came on board in 1934. He was followed in 1936 by Phil Harris, Benny’s roguish bandleader, and, in 1939, by Day, Harris’s simple-minded vocalist. To this team was added a completely fictional character, Rochester Van Jones, Benny’s raspy-voiced, outrageously impertinent black valet, played by Eddie Anderson, who joined the cast in 1938.
As these five talented performers coalesced into a tight-knit ensemble, the jokey, vaudeville-style sketch comedy of the early episodes metamorphosed into sitcom-style scripts that portrayed their offstage lives, as well as the making of the show itself. Scarcely any conventional jokes were told, nor did Benny’s writers employ the topical and political references in which Allen and Hope specialized. Instead, the show’s humor arose almost entirely from the close interplay of character and situation.
Benny was not solely responsible for the creation of this format, which was forged by Conn and perfected by his successors. Instead, he doubled as the star and producer—or, to use the modern term, show runner—closely supervising the writing of the scripts and directing the performances of the other cast members. In addition, he and Conn turned the character of Jack Benny from a sophisticated vaudeville monologist into the hapless butt of the show’s humor, a vain, sexually inept skinflint whose character flaws were ceaselessly twitted by his colleagues, who in turn were given most of the biggest laugh lines.
This latter innovation was a direct reflection of Benny’s real-life personality. Legendary for his voluble appreciation of other comedians, he was content to respond to the wisecracking of his fellow cast members with exquisitely well-timed interjections like “Well!” and “Now, cut that out,” knowing that the comic spotlight would remain focused on the man of whom they were making fun and secure in the knowledge that his own comic personality was strong enough to let them shine without eclipsing him in the process.
And with each passing season, the fictional personalities of Benny and his colleagues became ever more firmly implanted in the minds of their listeners, thus allowing the writers to get laughs merely by alluding to their now-familiar traits. At the same time, Benny and his writers never stooped to coasting on their familiarity. Even the funniest of the “cheap jokes” that were their stock-in-trade were invariably embedded in carefully honed dramatic situations that heightened their effectiveness.
A celebrated case in point is the best-remembered laugh line in the history of The Jack Benny Program, heard in a 1948 episode in which a burglar holds Benny up on the street. “Your money or your life,” the burglar says—to which Jack replies, after a very long pause, “I’m thinking it over!” What makes this line so funny is, of course, our awareness of Benny’s stinginess, reinforced by a decade and a half of constant yet subtly varied repetition. What is not so well remembered is that the line is heard toward the end of an episode that aired shortly after Ronald Colman won an Oscar for his performance in A Double Life. Inspired by this real-life event, the writers concocted an elaborately plotted script in which Benny talks Colman (who played his next-door neighbor on the show) into letting him borrow the Oscar to show to Rochester. It is on his way home from this errand that Benny is held up, and the burglar not only robs him of his money but also steals the statuette, a situation that was resolved to equally explosive comic effect in the course of two subsequent episodes.
No mere joke-teller could have performed such dramatically complex scripts week after week with anything like Benny’s effectiveness. The secret of The Jack Benny Program was that its star, fully aware that he was not “being himself” but playing a part, did so with an actor’s skill. This was what led Ernst Lubitsch to cast him in To Be or Not to Be, in which he plays a mediocre Shakespearean tragedian, a character broadly related to but still quite different from the one who appeared on his own radio show. As Lubitsch explained to Benny, who was skeptical about his ability to carry off the part:
A clown—he is a performer what is doing funny things. A comedian—he is a performer what is saying funny things. But you, Jack, you are an actor, you are an actor playing the part of a comedian and this you are doing very well.
To Be or Not to Be also stands out from the rest of Benny’s work because he plays an identifiably Jewish character. The Jack Benny character that he played on radio and TV, by contrast, was never referred to or explicitly portrayed as Jewish. To be sure, most listeners were in no doubt of his Jewishness, and not merely because Benny made no attempt in real life to conceal his ethnicity, of which he was by all accounts proud. The Jack Benny Program was written by Jews, and the ego-puncturing insults with which their scripts were packed, as well as the schlemiel-like aspect of Benny’s “fall guy” character, were quintessentially Jewish in style.
As Benny explained in a 1948 interview cited by Fuller-Seeley:
The humor of my program is this: I’m a big shot, see? I’m fast-talking. I’m a smart guy. I’m boasting about how marvelous I am. I’m a marvelous lover. I’m a marvelous fiddle player. Then, five minutes after I start shooting off my mouth, my cast makes a shmo out of me.
Even so, his avoidance of specific Jewish identification on the air is noteworthy precisely because his character was a miser. At a time when overt anti-Semitism was still common in America, it is remarkable that Benny’s comic persona was based in large part on an anti-Semitic stereotype—yet one that seems not to have inspired any anti-Semitic attacks on Benny himself. When, in 1945, his writers came up with the idea of an “I Can’t Stand Jack Benny Because . . . ” write-in campaign, they received 270,000 entries. Only three made mention of his Jewishness.
As for the winning entry, submitted by a California lawyer, it says much about what insulated Benny from such attacks: “He fills the air with boasts and brags / And obsolete, obnoxious gags / The way he plays his violin / Is music’s most obnoxious sin / His cowardice alone, indeed, / Is matched by his obnoxious greed / And all the things that he portrays / Show up MY OWN obnoxious ways.” It is clear that Benny’s foibles were seen by his listeners not as particular but universal, just as there was no harshness in the razzing of his fellow cast members, who very clearly loved the Benny character in spite of his myriad flaws. So, too, did the American people. Several years after his TV series was cancelled, a corporation that was considering using him as a spokesman commissioned a national poll to find out how popular he was. It learned that only 3 percent of the respondents disliked him.
Therein lay Benny’s triumph: He won total acceptance from the American public and did so by embodying a Jewish stereotype from which the sting of prejudice had been leached. Far from being a self-hating whipping boy for anti-Semites, he turned himself into WASP America’s Jewish uncle, preposterous yet lovable.

When the bottom fell out of network radio, Benny negotiated the move to TV without a hitch, debuting on the small screen in 1950 and bringing the radio version of The Jack Benny Program to a close five years later, making it one of the very last radio comedy series to shut up shop. Even after his weekly TV series was finally canceled by CBS in 1965, he continued to star in well-received one-shot specials on NBC.
But Benny’s TV appearances, for all their charm, were never quite equal in quality to his radio work, which is why he clung to the radio version of The Jack Benny Program until network radio itself went under: Better than anyone else, he knew how good the show had been. For the rest of his life, he lived off the accumulated comic capital built up by 21 years of weekly radio broadcasts.
Now, at long last, he belongs to the ages, and The Jack Benny Program is a museum piece. Yet it remains hugely influential, albeit at one or more removes from the original. From The Dick Van Dyke Show and The Danny Thomas Show to Seinfeld, Everybody Loves Raymond, and The Larry Sanders Show, every ensemble-cast sitcom whose central character is a fictionalized version of its star is based on Benny’s example. And now that the ubiquity of the Web has made the radio version of his series readily accessible for the first time, anyone willing to make the modest effort necessary to seek it out is in a position to discover that The Jack Benny Program, six decades after it left the air, is still as wonderfully, benignly funny as it ever was, a monument to the talent of the man who, more than anyone else, made it so.
Review of 'The Transferred Life of George Eliot' By Philip Davis
Not that there’s any danger these theoretically protesting students would have read George Eliot’s works—not even the short one, Silas Marner (1861), which in an earlier day was assigned to high schoolers. I must admit I didn’t find my high-school reading of Silas Marner a pleasant experience—sports novels for boys like John R. Tunis’s The Kid from Tomkinsville were inadequate preparation. I must confess, too, that when I was in graduate school, determined to study 17th-century English verse, my reaction to the suggestion that I should also read Middlemarch (1871–72) was “What?! An 800-page novel by the guy who wrote Silas Marner?” A friend patiently explained that “the guy” was actually Mary Ann Evans, born in 1819, died in 1880. Partly because she was living in sin with the literary jack-of-all-trades George Henry Lewes (legally and irrevocably bound to his estranged wife), she adopted “George Eliot” as a protective pseudonym when, in her 1857 debut, she published Scenes of Clerical Life.
I did, many times over and with awe and delight, go on to read Middlemarch and the seven other novels, often in order to teach them to college students. Students have become less and less receptive over the years. Forget modern-day objections to George Eliot’s complex political or religious views. Adam Bede (1859) and The Mill on the Floss (1860) were too hefty, and the triple-decked Middlemarch and Deronda, even if I set aside three weeks for them, rarely got finished.
The middle 20th century was perhaps a more propitious time for appreciating George Eliot, Henry James, and other 19th-century English and American novelists. Influential teachers like F.R. Leavis at Cambridge and Lionel Trilling at Columbia were then working hard to persuade students that the study of literature, not just poetry and drama but also fiction, matters both to their personal lives—the development of their sensibility or character—and to their wider society. The “moral imagination” that created Middlemarch enriches our minds by dramatizing the complications—the frequent blurring of good and evil—in our lives. Great novels help us cope with ambiguities and make us more tolerant of one another. Many of Leavis’s and Trilling’s students became teachers themselves, and for several decades the feeling of cultural urgency was sustained. In the 1970s, though, between the leftist emphasis on literature as “politics by other means” and the deconstructionist denial of the possibility of any knowledge, literary or otherwise, independent of political power, the high seriousness of Leavis and Trilling began to fade.
The study of George Eliot and her life has gone through many stages. Directly after her death came the sanitized, hagiographic “life and letters” by J.W. Cross, the much younger man she married after Lewes’s death. Gladstone called it “a Reticence in three volumes.” The three volumes helped spark, if they didn’t cause, the long reaction against the Victorian sages generally that culminated in the dismissively satirical work of the Bloomsbury biographer and critic Lytton Strachey in his immensely influential Eminent Victorians (1918). Strachey’s mistreatment of his forebears was, with regard to George Eliot at least, tempered almost immediately by Virginia Woolf. It was Woolf who in 1919 provocatively called Middlemarch “one of the few English novels written for grown-up people.” Eventually, the critical tide against George Eliot was decisively reversed in the ’40s by Joan Bennett and Leavis, who made the inarguable case for her genuine and lasting achievement. That period of correction culminated in the 1960s with Gordon S. Haight’s biography and with interpretive studies by Barbara Hardy and W.J. Harvey. Books on George Eliot over the last four decades have largely been written by specialists for specialists—on her manuscripts or working notes, and on her affiliations with the scientists, social historians, and competing novelists of her day.
The same is true, only more so, of the books written, with George Eliot as the ostensible subject, to promote deconstructionist or feminist agendas. Biographies have done a better job appealing to the common reader, not least because the woman’s own story is inherently compelling. The question right now is whether a book combining biographical and interpretive insight—one “pitched,” as publishers like to say, not just at experts but at the common reader—is past praying for.
Philip Davis, a Victorian scholar and an editor at Oxford University Press, hopes not. His The Transferred Life of George Eliot—transferred, that is, from her own experience into her letters, journals, essays, and novels, and beyond them into us—deserves serious attention. Davis is conscious that George Eliot called biographies of writers “a disease of English literature,” both overeager to discover scandals and too inclined to substitute day-to-day travels, relationships, dealings with publishers and so on, for critical attention to the books those writers wrote. Davis therefore devotes himself to George Eliot’s writing. Alas, he presumes rather too much knowledge on the reader’s part of the day-to-day as charted in Haight’s marvelous life. (A year-by-year chronology at the front of the book would have helped even his fellow Victorianists.)
As for George Eliot’s writing, Davis is determined to refute “what has been more or less said . . . in the schools of theory for the last 40 years—that 19th-century realism is conservatively bland and unimaginative, bourgeois and parochial, not truly art at all.” His argument for the richness, breadth, and art of George Eliot’s realism—her factual and sympathetic depiction of poor and middling people, without omitting a candid representation of the rich—is most convincing. What looms largest, though, is the realist, the woman herself—the Mary Ann Evans who, from the letters to the novels, became first Marian Evans the translator and essayist and later “her own greatest character”: George Eliot the novelist. Davis insists that “the meaning of that person”—not merely the voice of her omniscient narrators but the omnipresent imagination that created the whole show—“has not yet exhausted its influence nor the larger future life she should have had, and may still have, in the world.”
The transference of George Eliot’s experience into her fiction is unquestionable: In The Mill on the Floss, for example, Mary Ann is Maggie, and her brother Isaac is Tom Tulliver. Davis knows that a better word might be transmutation, as George Eliot had, in Henry James’s words, “a mind possessed,” for “the creations which brought her renown were of the incalculable kind, shaped themselves in mystery, in some intellectual back-shop or secret crucible, and were as little as possible implied in the aspect of her life.” No data-accumulating biographer, even the most exhaustive, can account for that “incalculable . . . mystery.”
Which is why Davis, like a good teacher, gives us exercises in “close reading.” He pauses to consider how a George Eliot sentence balances or turns on an easy-to-skip-over word or phrase—the balance or turn often representing a moment when the novelist looks at what’s on the underside of the cards.
George Eliot’s style is subtle because her theme is subtle. Take D.H. Lawrence’s favorite heroine, the adolescent Maggie Tulliver. The external event in The Mill on the Floss may be the girl’s impulsively cutting off her unruly hair to spite her nagging aunts, or the young woman’s drifting down the river with a superficially attractive but truly impossible boyfriend. But the real “action” is Maggie’s internal self-blame and self-assertion. No Victorian novelist was better than George Eliot at tracing the psychological development of, say, a husband and wife who realize they married each other for shallow reasons, are unhappy, and now must deal with the ordinary necessities of balancing the domestic budget—Lydgate and Rosamond in Middlemarch—or, in the same novel, the religiously inclined Dorothea’s mistaken marriage to the old scholar Casaubon. That mistake precipitates not merely disenchantment and an unconscious longing for love with someone else, but (very finely) a quest for a religious explanation of and guide through her quandary.
It’s the religio-philosophical side of George Eliot about which Davis is strongest—and weakest. Her central theological idea, if one may simplify, was that the God of the Bible didn’t exist “out there” but was a projection of the imagination of the people who wrote it. Jesus wasn’t, in Davis’s characterization of her view, “the impervious divine, but [a man who] shed tears and suffered,” and died feeling forsaken. “This deep acceptance of so-called weakness was what most moved Marian Evans in her Christian inheritance. It was what God was for.” That is, the character of Jesus, and the dramatic play between him and his Father, expressed the human emotions we and George Eliot are all too familiar with. The story helps reconcile us to what is, finally, inescapable suffering.
George Eliot came to this demythologized understanding not only of Judaism and Christianity but of all religions through her contact first with a group of intellectuals who lived near Coventry, then with two Germans she translated: David Friedrich Strauss, whose 1,500-page Life of Jesus Critically Examined (1835–36) was for her a slog, and Ludwig Feuerbach, whose Essence of Christianity (1841) was for her a joy. Also, in the search for the universal morality that Strauss and Feuerbach believed Judaism and Christianity expressed mythically, there was Spinoza’s utterly non-mythical Ethics (1677). It was seminal for her—offering, as Davis says, “the intellectual origin for freethinking criticism of the Bible and for the replacement of religious superstition and dogmatic theology by pure philosophic reason.” She translated it into English, though her version did not appear until 1981.
I wish Davis had left it there, but he takes it too far. He devotes more than 40 pages—a tenth of the whole book—to her three translations, taking them as a mother lode of ideational gold whose tailings glitter throughout her fiction. These 40 pages are followed by 21 devoted to Herbert Spencer, the Victorian hawker of theories-of-everything (his 10-volume System of Synthetic Philosophy addresses biology, psychology, sociology, and ethics). She threw herself at the feet of this intellectual huckster, and though he rebuffed her painfully amorous entreaties, she never ceased revering him. Alas, Spencer was a stick—the kind of philosopher who was incapable of emotion. And she was his intellectual superior in every way. The chapter is largely unnecessary.
The book comes back to life when Davis turns to George Henry Lewes, the man who gave Mary Ann Evans the confidence to become George Eliot—perhaps the greatest act of loving mentorship in all of literature. Like many prominent Victorians, Lewes dabbled in all the arts and sciences, publishing highly readable accounts of them for a general audience. His range was as wide as Spencer’s, but his personality and writing had an irrepressible verve that Spencer could only have envied. Lewes was a sort of Stephen Jay Gould yoked to Daniel Boorstin, popularizing other people’s findings and concepts, and coming up with a few of his own. He regarded his Sea-Side Studies (1860) as “the book . . . which was to me the most unalloyed delight,” not least because Marian, whom he called Polly, had helped gather the data. She told a friend, “There is so much happiness condensed in it! Such scrambles over rocks, and peeping into clear pool [sic], and strolls along the pure sands, and fresh air mingling with fresh thoughts.” In his remarkably intelligent 1864 biography of Goethe, Lewes remarks that the poet “knew little of the companionship of two souls striving in emulous spirit of loving rivalry to become better, to become wiser, teaching each other to soar.” Such a companionship Lewes and George Eliot had in spades, and some of Davis’s best passages describe it.
Regrettably, Davis also offers many passages well below the standard of his best—needlessly repeating an already established point or obfuscating the obvious. Still, The Transferred Life is the most formidably instructive, and certainly the most complete, life-and-works treatment of George Eliot we have.