Every once in a while, we come upon an event of seemingly minor import which, on reflection, turns out to betoken deep and problematic truths about our culture. The “Patenting of Life” decision is such a significant event.
On June 16, 1980, the Supreme Court of the United States ruled that a living microorganism was patentable matter, under the provision of patent laws enacted by Congress in 1952. In 1972, Ananda Chakrabarty, a microbiologist at the University of Illinois, had filed a patent application, assigned to the General Electric Company, asserting multiple claims related to a novel bacterial strain that he had obtained with the aid of techniques of genetic engineering, a strain capable of degrading many components of crude oil and thus potentially useful in the biological control of oil spills. In addition to readily granted process claims for the method of producing the bacterium, and claims relating to the mode of carrying such bacteria to water-borne oil spills, Chakrabarty claimed patent rights to the bacteria themselves. This last claim, at first rejected by the patent examiner and then by the Patent Office Board of Appeals, was finally granted on appeal by the United States Court of Customs and Patent Appeals in 1979, in a decision affirmed by a narrow five-to-four vote of the Supreme Court a year later (Diamond v. Chakrabarty, 447 U.S. 303).
The case attracted considerable attention, but the Court’s decision fell short of the momentous ruling some had anticipated. For one thing, the Court was divided. For another, both sides agreed that the question before them was simply “a narrow one of statutory interpretation,” requiring the Court to construe the language of that section of the patent law which defined patentable matter. The Court’s opinion, and the dissent, were largely technical. Thus, readers of the opinion who looked for large philosophical dicta about man’s art and living nature or about genetic engineering came away disappointed. Alas, it looked as if the Court was, for a change, being simply judicious, doing no more than its proper work.
Yet the decision was not inconsequential. Indeed, it has already contributed to numerous recent practices. Patent claims are now pending for other living microorganisms, as well as for animal cell lines propagated in tissue culture, allegedly valuable for uses ranging from a cheaper means of making penicillin to novel treatments for specific cancers. Genetic-engineering firms are springing up all around. Academic molecular biologists are being courted by industry, with astounding financial incentives. Major grants for genetic-engineering research to universities have been given by industries in exchange for patent rights to any resulting useful and profitable discoveries. Under such an agreement, Hoechst, the German chemical company, has just given $50 million for a new genetic-engineering institute to Harvard University, which, after considerable faculty opposition, had only recently abandoned plans to form its own genetic-engineering company. Many industries are tooling up in anticipation of the flood of new organisms and cell lines to be brought into being with the aid of human ingenuity, spurred on by our ingenious stroke to encourage genius, the patent laws. True, the art of genetic engineering was born and would grow without the Chakrabarty decision. But there is no question that it will now grow much, much faster.
But the Chakrabarty decision is useful in yet another, perhaps more fundamental, respect. It is useful for thought, for reflection on the relation between modern science and politics, and between science and the American polity in particular, especially as that relation is embodied in and exemplified by the patent laws. Indeed, the Chakrabarty case provides a wonderful mirror in which we can see fundamental features of the American polity, and therewith of modernity itself, and discern some of its deeper tensions: the relation of private interests or rights and the common good; the purposes of science and thought and their relation to practice and to the public interest; and, finally, the prevailing view of man’s place in and attitude toward the natural world. Before looking into that mirror, we need to describe some contours of the broader background.
Science in the public interest is a guiding intention of modern science and has been since its origins in the 17th century. Though we hear much about the distinction between pure and applied science—and I too shall distinguish them later—we must begin by emphasizing the essentially practical and social intention of modern science as such. Unlike ancient science, which sought knowledge of what things are, to be contemplated as an end-in-itself satisfying to the knower, modern science seeks knowledge of how things work, to be used as a means for the relief and comfort of all humanity, knowers and non-knowers alike. Standing on the threshold of the new science of mathematical physics, Descartes appeals for popular support of his researches by announcing the good news of knowledge “very useful in life”:
[S]o soon as I had acquired some general notions concerning Physics . . . I believed that I could not keep them concealed without greatly sinning against the law which obliges us to procure, as much as in us lies, the general good of all mankind. For they caused me to see that it is possible to attain knowledge which is very useful in life, and that, instead of that speculative philosophy which is taught in the Schools, we may find a practical philosophy by means of which, knowing the force and the action of fire, water, air, the stars, heaven, and all the other bodies that environ us, as distinctly as we know the different crafts of our artisans, we can in the same way employ them in all those uses to which they are adapted, and thus render ourselves the masters and possessors of nature. This is not merely to be desired with a view to the invention of an infinity of arts and crafts which enable us to enjoy without any trouble the fruits of the earth and all the good things which are to be found there, but also principally because it brings about the preservation of health, which is without doubt the chief blessing and the foundation of all other blessings in this life. For the mind depends so much on the temperament and disposition of the bodily organs that, if it is possible to find a means of rendering men wiser and cleverer than they have hitherto been, I believe that it is in medicine that it must be sought. (Discourse on Method, Part VI. Emphasis added.)
The announced goal of the new science is the mastery and possession of nature, and the purposes of mastery are humanitarian: the conquest of external necessity, the promotion of bodily health and longevity, the provision of psychic peace or a new kind of wisdom.
Even the notions and ways of science manifest a conception of knowledge for the sake of power: nature is conceived mechanistically, and explanation is in terms of efficient or moving causes; hidden truths are gained by acting on nature, i.e., through experiment; inquiry is made “methodical,” through the imposition of order and schemes of measurement “made” by the intellect; knowledge, embodied in laws rather than theorems, becomes “systematic” under the rules of a new mathematics expressly invented for this purpose. Modern science rejects, as meaningless or useless, questions that cannot be answered by application of the method. In all these fundamental ways, modern science has a practical cast. This remains true of the science practiced even by those great scientists who are driven by curiosity and the desire for truth and who have themselves no interest in that mastery and possession of nature for which science is largely esteemed by the rest of us.
Though essentially linked to practice, modern science is, in certain important respects, morally neutral. It does not itself seek knowledge of the good. Indeed, it looks upon nature, its object, as neutral and indifferent to the good or the beautiful. Moreover, the technical power it yields can be used for good or ill. Nevertheless, modern science is guided overall by an ethical—if prideful—intention: a lifting up of downtrodden humanity, a reversal of the curses laid upon Adam and Eve, and, ultimately, a restoration of the tree of life, by means of the tree of knowledge. Never mind the question how a science invincibly ignorant of and in principle skeptical about standards of better and worse can know how to do good for mankind. The new humanitarians simply point to the seemingly self-evident truth that life becomes better as it becomes less poor, nasty, brutish, and short.
Gradually, and increasingly as it began to make good its promise of technological fruit from the tree of useful knowledge, science was welcomed into partnership with the political community. Yet thoughtful men disagreed sharply about how science and the useful arts would and should relate to morals and politics. Close to one extreme was the view that popular enlightenment—and particularly the teachings of modern science—undermined ruling opinions and beliefs, especially religious beliefs, necessary for a good regime, and that unbridled progress would lead to luxury, the liberation and inflation of vain and foolish desires, and the debasement of morals and taste. Though they welcomed science’s contributions to health and plenty, these thinkers argued the need for settled laws, customs, and mores to restrain the turbulent and licentious souls of men. In the absence of such restraints, the conquest of nature without could enslave us to unruly nature within.
Even Francis Bacon, perhaps the greatest proponent of the marriage of science and politics, understood that the novelty sought by the former was not always congenial to the stability required by the latter. Bacon’s image of the best community, presented in his New Atlantis, does indeed award a central place to Baconian science: the jewel and lantern of the kingdom is a prodigious, state-supported, scientific research foundation, called Solomon’s House or the College of the Six Days Works (which, by the way, artfully creates new species through genetic manipulation). But the community is not enlightened. The populace has little access to the scientific goings-on, and the scientists practice self-censorship to avoid publicizing dangerous knowledge. A benevolent state, with the help of or perhaps under the direction of the scientists, apparently closely regulates the lives of its inhabitants by means of austere rituals, state-supported (albeit tolerant) religion, and—one suspects—perhaps even some scientifically-based means of behavior modification. According to Bacon, the mixture of science and politics, though desirable and even urgent, was potentially explosive and needed delicate handling.
In contrast, some of the Enlightenment thinkers of the 18th century and their descendants were much more sanguine about the easy compatibility of science and society. The most optimistic ones prophesied an unlimited and coupled progress of science and morality: the progress of science and technology would conquer necessity and alleviate human misery, and man thus emancipated from nature’s harsh and cruel necessities would flower morally into the good creature only his neediness prevents him from being. Man once liberated and enlightened, the external restraints imposed on him by law, mores, and religion would eventually become unnecessary. In the end, the state would wither away; politics, the rule over men, would be replaced by administration, the management of things. Our various species of Marxism are the lineal descendants of this messianic view of human perfectibility based on progress in the arts and sciences.
To summarize, whereas pre-modern political thinkers and statesmen placed their trust in law and morals, and doubted the ethical and social benefits of inquiry, modern science, devoted to the public good, found a political home and able defenders in modern, liberal regimes. Nevertheless, the proper balance and relation between science-technology and law or morals, between change and stability, remained an open question.
The founders of the American republic, though influenced by optimistic Enlightenment thought, were hardly utopians; they pursued a middle course. They knew human nature well enough not to underestimate the crucial importance of good laws, education, and also religion for the preservation of decency and public-spiritedness. But they also appreciated fully the promise of science. The American republic is, to my knowledge, the first regime explicitly to embrace scientific and technical progress and officially to claim its importance for the public good. The United States Constitution, which is silent on education and morality, speaks up about scientific progress. It does so in the course of defining the powers of Congress (Article I, Sec. 8):
The Congress shall have Power . . . To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
“To promote the Progress of Science and the useful Arts.” It is curious that this provision has come to be known as the Copyright and Patent Provision rather than the Provision on Progress in the Sciences and Useful Arts; for such progress is the explicit goal and purpose of the congressional power to enact the copyright and patent laws. These statutes, which we think of largely as protecting so-called intellectual property, were in the first instance thought of as useful to scientific and technical progress.
But progress was not itself the final end. Congress was given power to promote the useful arts, not the useless ones (e.g., the liberal arts or the fine arts). In the Federalist (No. 43), Madison speaks of the unquestionable utility of this congressional power to promote progress, and the context suggests that by its utility he means its usefulness to the public good.1 From this we infer that the useful arts and sciences were meant to be subordinated to, and in the service of, the well-being of the nation. Not progress for progress’s sake, but progress that might serve the enduring and unchanging goals set forth in the Preamble to the Constitution, among them, to provide for the common defense and promote the general welfare, and thus, indirectly, to establish justice, to insure domestic tranquility, and to secure the blessings of liberty to ourselves and our posterity. The American republic embraces change, but in the service of duration; science, but in the service of liberty and justice, defined by law. In this respect the Copyright and Patent Provision is perhaps only the most obvious example of the American way. For the entire Constitution is a deliberate embodiment of balanced tensions between science and law and between stability and novelty, inasmuch as the Founders self-consciously sought to institutionalize the improvements of the “science of politics,” and in such a way that would stably perpetuate openness to future change.
How best to promote the arts and sciences? How to induce talented men to behave for the common good? The Constitution once again makes a clear and measured choice: private enterprise, governed and protected by law. Other possibilities were considered by the Convention. Madison had proposed, among the powers of Congress, “To establish a university” and “To encourage by premiums and provisions, the advancement of useful knowledge and discoveries.” Yet the Convention rejected the establishment of a national university and the federal support of science through prizes and provisions, and adopted, apparently without debate, the provision which encourages progress by adding the fuel of interest to the fire of genius.
This reliance on self-interest and the motive of gain might be attributable to the Founders’ hard-headed appraisal of the selfish tendencies of most human beings; and cynics have sometimes attributed such motives to the Founders themselves. But a careful look at the constitutional text indicates that the patent provision is a matter not only of calculation but also of justice. Congress is empowered to secure, that is, to make safe and protect, a right of authors and inventors to the fruits of their genius and energy, a right which, by implication, antedates the Constitution. Indeed, this “Right of Authors and Inventors to their respective Writings and Discoveries” is the first and only right mentioned in the body of the original Constitution of 1787 (that is, before the Amendments and Bill of Rights). To quote Madison: “The copyright of authors has been solemnly adjudged in Great Britain to be a right of common law. The right to useful inventions seems with equal reason to belong to the inventors.”
There is justice, then, in the claims of copyright and patent. To be sure, doing justice will be complicated if the patent prize is awarded only for finishing first in a race in which the winner ran only the last leg of a long relay, tens or hundreds having assisted him. Nevertheless, everyone sees at least the prima facie claim that justice requires protecting the labors of the imaginative and industrious against theft by the sly and lazy. If theft of property is wrong, the right of patent is right, at least in some sense. The foundation of the patent law is not only utilitarian, but also ethical.
Indeed, it is ethical also in its consequences for character. The law not only protects individual rights and prevents injustice; it also rewards and encourages the energetic cultivation of the mind and the intellectual virtues of inventiveness, order, and precision, and promotes in publicly beneficial ways the moral virtues of ambition and industry. These likely consequences were in fact very important to many of the Founders, and their decision to fuel private enterprise was partly based on these hoped-for improvements in character and mind. To be sure, the mind has other and higher objects than inventions, and ambition and industry do not exhaust the moral virtues. Still, a respect for the human mind and an appreciation of efforts to realize its potential are built into our constituting law. One errs to see here only greed and base calculation.
Patent laws serve the public interest at the same time as they protect private rights. The community gains publication, likely development of inventions, a share in the resulting prosperity, and, should it desire it, some legislative hand on the throttle of progress. The patent laws of 1790, enacted by the first Congress, thus established what can rightly be called an ethical-social contract of science in the public interest. In order to secure their rights, authors and inventors had to disclose, that is, make gift to the public of their findings: no protection without publication. (In choosing to promote the widest possible publication, the Founders showed less concern than Bacon for the problem of dangerous knowledge, a matter to which we shall return.) Moreover, the exclusive right was obtained for only a limited period, to encourage prompt development and production of new inventions; thus, society might reap the benefits of innovation more quickly than if the right were of unlimited duration. All in all, the Copyright and Patent Provision and the patent law are most ingenious, public-spirited, and just inventions—themselves worthy of patent protection. Madison praised the former, saying, “The public good fully coincides with the claims of individuals.” Abraham Lincoln (in his lecture on “Discoveries and Inventions,” February 11, 1859) listed the latter, along with the arts of writing and printing and the discovery of America, among those few inventions and discoveries in the history of the world, most valuable “on account of their great efficiency in facilitating all other inventions and discoveries.”
Time has vindicated these judgments. We are showered on all sides by countless benefits of this far-sighted invention of the American mind, which harnessed science and artful intelligence to the carriage of state and which kept it moving by means of the carrot of self-interest. It would seem hypocritical and, what is worse, ungrateful, to question this arrangement, all the more so in the light of the marvelous contributions to our health and prosperity that we can now obviously expect from the industrial exploitation of learning how to get microorganisms to do our manufacturing.
And yet, honesty compels us to point out certain peculiarities of this arrangement, peculiarities that might eventually give rise to serious difficulties, not only for the union of science and the American polity, but also for each of the partners taken separately. First, it should be observed that the contract formed by the patent law brings together, in stressful if fertile union, certain contradictory, or at least inhospitable, partners and principles: self-interest and common good; monopoly and liberty; the ownership of ideas and the sharability or publicity of speech and thought. The patent law seeks to promote the common good by licensing private interest, thus running the risk of fostering a crass selfishness that in any particular instance might sacrifice public interest to private gain and that eventually renders men generally indifferent, or even hostile, to the common good. It seeks indirectly, by means of progress and prosperity, to safeguard political liberty, but it does so by legitimating monopoly—albeit of limited duration—which is the antithesis of liberty. It rewards publication and, therefore, presupposes the sharability of thoughts and ideas, yet it does so by licensing the private ownership of these works of the mind.
Second, there is the already noted built-in tension between progress and stability. Indeed, the very idea of a patent law is something of an oxymoron: it is a hybrid of two opposing principles, change and order, that live always in tension with each other. Law as law stands for order and stability. It not only sets limits and restrains undesirable conduct. It also embodies our opinions, albeit our variable opinions, about what is just and good. Though subject to change, law as such points to what is permanent. A law to encourage progress is thus, at bottom, a paradoxical law. In a way, though it promotes change, as an expression of legitimacy the patent law still, at least formally, accords primacy to order. Absent such a law, innovation would lack legal protection and even legitimacy. Thus, the supremely ingenious invention of the patent law could not itself have been patented, there being as yet no law to protect it.
In principle, the Constitution goes further than this formal subordination. The constitutional Patent Provision, we have suggested, maintains a balance by subordinating progress to the unchanging, substantive goals of justice and liberty. But in practice, the patent law threatens to tip the scale in favor of runaway change. Increasingly encouraged, the horses of technological progress break into full gallop, seemingly out of anyone’s control, and the community is left with the difficult task of adjusting after the fact to the paths traveled and the changes wrought. Sometimes, when progress comes before the bar—as in the present case—even learned men judicially charged with upholding the law choose instead injudiciously to redefine it, in order to keep pace with novelty.
Finally, there are potential strains in the American polity’s contract with science, insofar as the polity accepts without reservations the methods, principles, and purposes of modern natural science. For example, the practice of experimentation, when extended to human subjects, often places science on a collision course with the rights of individuals. Worse yet, our fundamental political principles, the natural rights enunciated in the Declaration of Independence, acquire no support from the “nature” described by the laws of physics and chemistry. The “nature” of the physicists, to say the least, offers no ground for rights, let alone for the belief that we have these rights as endowments from our Creator. Further, in biology, the teachings of evolution seem to deny to human beings any special place in the whole. And when, encouraged by these teachings, the project to relieve man’s estate through mastery and possession of nature approaches making fundamental alterations in human nature itself, Americans—everyone—must begin to wonder whether the goals and presuppositions of the entire venture are sound and even whether modern science’s notions of knowledge and nature are simply and unqualifiedly true.
Curiously, the recent Supreme Court decision in the Chakrabarty case points up all these difficulties, notwithstanding the narrow question it decided and the limited character of its holding. Various commentators have raised broader questions about the meaning of the Chakrabarty decision and its consequences: questions about the desirability of genetic engineering, about the dangers of the further commercialization of science, and about the propriety of owning an entire living species. By examining each of these questions, we shall be led to discover some of the limitations of the contract between modern science and the American polity, as it is embodied in our patent law.
First: does the protection of private rights and interests in new discoveries and inventions always serve the public good? Is the awarding of patents always in the public interest? The answer to these questions necessarily turns, in any given case, on the nature of the particular discovery and invention. More generally, it turns on the question, Is progress or technical innovation always in the public interest? If the innovations are simply or largely beneficial, their encouragement through award of patents would still reflect harmony between private interest and common good. But what about dangerous discoveries and inventions? Does the community serve its best interests when it stimulates their development through patent grants? Might not even the publication of the existence of the dangerous invention prove harmful to the public interest? What is the American polity’s remedy for this problem of dangerous innovation?
Genetic engineering is regarded by many as just such a dangerous technology, and one posing no ordinary dangers. For in human genetic engineering, the previous beneficiary of the power to alter nature becomes himself subject to that power and those alterations. The power to engineer the engineer sharply raises questions about the meaning and limits of progress.
It was, I am sure, concerns about the dangers of genetic engineering, especially human genetic engineering, that gave the Chakrabarty case such wide interest. In argument before the Supreme Court, grave risks allegedly associated with genetic manipulation were cited as a reason why patent should be denied. The majority opinion states:
We are told that genetic research and related technological developments may spread pollution and disease, that it may result in a loss of genetic diversity, and that its practice may tend to depreciate the value of human life.
These opinions were advanced and are held by reputable scientists, among others, whose concerns range from fears about new biohazards to doubts about our possessing the wisdom requisite to redesign human genes or to interfere designedly in the course of evolution. There is, to be sure, much disagreement about the degree to which these fears and doubts are warranted, but there is no doubt that the matters at stake are serious. The Court indeed acknowledged the seriousness of such considerations, but held them nonetheless irrelevant to its decision, partly on the ground that its negative decision would not prevent such research, partly on the ground that it lacked the competence or the constitutional authority to decide how much and what kind of genetic research our society should foster.
The Court’s judgment seems to me to be sound. Under our Constitution, it is for the legislature to decide such questions, and the Courts ought not to rewrite the rules. Further, denial of individual patent applications seems a poor way for society to decide questions about allegedly dangerous research and technology. Yet this very fact calls attention to a defect in the relation between science and society, insofar as that relation is largely defined by or exemplified in the contract of the patent laws. The patent laws assume that innovations proposed by inventors are, because innovative and useful to some, simply good for the community at large. Instituted well before many people recognized the communal price everyone pays for certain kinds of technological change, they reflect a once little-questioned faith in progress. Thus, as they are instruments for encouraging innovation, they are poorly designed for regulating or controlling it. It is no surprise that the mechanism for making the individual horses run turns out to be incapable of slowing them down, should one later discover that, as a team, they are in danger of running away with the rider.
And yet one wonders. The Court says, “Whether respondent’s claims are patentable may determine whether research efforts are accelerated by the hope of reward or slowed by the want of incentives, but that is all” (emphasis added). But that “all” is not nothing. True, something unpatentable could still be legal and profitable; one cannot assume that lack of a patent will prevent development. Nevertheless, the awarding of patents is a communal hand on the throttle, a gentle hand to be sure, but by no means ineffective. Further, it is, as it happens, a hand less threatening to science than the legislative power to prohibit and make illegal. Moreover, one might argue that, in the statutory criterion of utility, the Patent Office has been given the power, indeed the duty, to judge the social merits of a given invention in deciding whether to encourage its development. According to the patent laws, only useful inventions may be patented, and rightly so, if some usefulness to the public good is society’s share of the patenting contract. Though it is generally sound to believe that fueling private incentives serves the public good, allowing the market to decide “usefulness,” this is notoriously not always the case (especially if by “public good” we mean more than economic growth).
How does the Patent Office understand “the useful”? In general, its presumption being to favor development, any definable “use” is sufficient. But is this always sound? How should it judge the usefulness of a manufacture that has obvious and likely misuses and abuses, along with some clear and well-defined use? For example, how should it judge the usefulness of a perfected pleasure drug, admittedly beneficial in the treatment of depression, but almost certainly subject to widespread social or political abuse? What about improved devices for subliminal advertising? Or new and improved miniature recording and photographic devices that would no doubt increase snooping and invasions of privacy? Should the inventor of selective spermicides and his financial backers be able to decide by patenting that our society should be able to practice sex-selection of offspring?
One would think that a well-developed and nuanced doctrine of “utility” might already be embodied in court decisions involving patent claims. But a brief survey of the legal literature shows otherwise. True, precedent denies patentable “utility” to inventions whose contemplated use is for purposes deemed illegal or immoral (bogus coin detectors for slot machines, for example) or which always cause bodily harm to the user when used in the intended manner (for example, a drug effective against depression but toxic to the point of lethality). “A composition unsafe for use by reason of extreme toxicity to point of immediate death under all conditions of its sole contemplated use in treating disease of human organisms would not be ‘useful’ within the meaning of patent laws” (emphasis added) is the limited, almost grudging, concession to such considerations made by the United States Court of Customs and Patent Appeals, in a well-known case, Application of Anthony. In that case (1969), the Court in fact argued that, short of such uniform catastrophe, safety as an ingredient of utility is a relative matter, and overruled the U.S. Patent Office which had denied patent for an anti-depressant drug, Monase, a drug voluntarily taken off the commercial market by its manufacturers because of a dozen fatalities reported among its many users.
Commenting in a footnote on the more general question of social harm from inventions capable of affecting public morals, health, and order, the Court in Anthony endorses a turn-of-the-century U.S. Circuit Court opinion (Fuller v. Berger et al.): an invention is “useful within the meaning of the law, if it is used (or is designed and adapted to be used) to accomplish a good result, though in fact it is oftener used (or is as well or even better adapted to be used) to accomplish a bad one.”2
During the 1960’s this doctrine—that likely abuse does not negate use—caused the Patent Office some embarrassment, as a consequence of its function as publicist. A patent had earlier been awarded for LSD, shortly before its hallucinogenic properties became known. When the drug found its way into street use, the Patent Office, obliged to divulge the details of its chemical synthesis to anyone who requested them, helped a whole generation learn how to manufacture it, and did so until its supply of printed matter about LSD was exhausted.
Perhaps such precedents reflect our long-standing and naive belief in the beneficence, or at least the innocence, of all innovation. Would a similar court today allow a patent for Monase, for the Colt revolver, or for LSD? The future may yet bring a more complex and refined doctrine of utility, one willing to make balancing judgments in protecting the public’s side of the contract. But, at least for now, it seems that any licit and non-lethal “use” suffices for the statutory test of “utility,” all likely abuses notwithstanding. Under these circumstances, our second thoughts confirm our first: the Court in Chakrabarty was right in not allowing concerns about the possible dangers of genetic engineering to influence its decision.
If patent decisions do not and cannot consider these broad questions of use and possible abuse, if restriction of patents is an inappropriate mechanism for setting the pace in the realm of potentially dangerous technologies, the contract between science and society needs additional clauses. To be sure, many already exist—e.g., Regulations of the Food and Drug Administration, Guidelines for the Use of Human Subjects in Research, etc. Yet most of these regulations deal only with questions of health and safety. We have few means of assessing and regulating the massive consequences of new technologies for our mores, institutions, and ways of life. With the vast powers now being accumulated, powers that would bring the mastery of nature to bear on human nature itself, some have begun to wonder whether the simply permissive contract between innovators and society needs to be renegotiated.
Such a response seems to me excessive. We have, and will continue to have, a commitment to scientific and technological progress. We have reason to expect that the social and political results of such progress will continue to be largely beneficial, and that the union of science and politics cemented by the patent laws will continue to serve us well. It would be foolish to dismantle our instruments of progress just because they require some additional devices and mechanisms. It would be foolish to shackle our accelerator just because it does not function as a brake. The difficult question—one which we have only begun to face—is what kinds of political arrangements and institutions are best suited to reviewing the direction and pace of certain “dangerous” developments and to applying the brakes, if necessary. One thing seems clear: the responsibility lies with the legislature. Courts may raise questions about the need for brakes, but it must be Congress that applies them. How to do so is, of course, the difficult question. The task of inventing suitable braking mechanisms will require even more ingenuity than the invention of the patent laws. We are all aware of the serious risks and costs of governmental regulation. Yet unless some means of control are found for those technologies reasonably regarded as potentially dangerous to the public interest— and, for the long run, who can be certain about genetic engineering?—the motives of gain, when added to ingenuity and stimulated by patent protection, are likely to subvert the common good. With big money fanning the flames—consider the difficulties in regulating the tobacco or automobile industries—the fire of innovation could be out of control before anyone gets warm enough to worry.
We have argued that the job of brakeman does not belong to the Courts. But it does not follow that the Courts should be free to remove or revise brakes applied by the legislature. The Court should be neither the partisan nor the opponent of progress; it is, instead, the guardian of law and, implicitly, a teacher of law-abidingness. The Court in Chakrabarty rightly resisted encroaching upon the legislative domain in refusing to become society’s arbiter regarding genetic engineering; but how well did it discharge its own task of guarding the law? An examination of the decision reveals that the Court showed itself partial to progress, with the so-called conservative members leading the way.
The Court was asked to decide not whether living organisms ought to be, but only whether they are patentable matter, as this is defined by statute. The relevant portion of the patent law (35 United States Code §101) provides:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
To decide affirmatively, the Court majority had to construe both the novel microorganism and the operative clause in the statute, which defines patentable matter as “any new and useful process, machine, manufacture, or composition of matter,” such that a living bacterium could be understood to be either a “manufacture” or a “composition of matter” or both; the Court minority argued that “manufacture” or “composition of matter” were not intended by Congress to encompass living organisms. Though the majority opinion does not directly argue that the microorganism in question is, say, a composition—i.e., a putting together—of matter, it treats its aliveness as irrelevant to its patentability. It ignores altogether the nature of the object, arguing: “In choosing such expansive terms as ‘manufacture’ and ‘composition of matter,’ modified by the comprehensive ‘any,’ Congress plainly contemplated that the patent laws would be given wide scope.” Finally, the Court argues that novelty, utility, and the fact that Chakrabarty’s discovery “is not nature’s handiwork but his own” render the bacterium patentable subject matter and Chakrabarty and General Electric the proud owners of “his” new species.
I happen to think the Court opinion mistaken in its reading of the statute. The terms “manufacture” and “composition of matter” go back to Jefferson’s 1793 patent law, and Congress has retained them without change in all subsequent revisions. (In another category of patentable matter, when Congress became dissatisfied with Jefferson’s concepts, it replaced his term: the present “process” is a replacement for Jefferson’s “art.”) Did Jefferson regard a living organism as a mere “composition of matter”? Certainly, in the ordinary sense of these terms, no one should. The majority goes too far in extrapolating from its correct belief that Congress “contemplated that the patent laws would be given wide scope.” It sustains the opinion that Congress intends statutory subject matter to “include anything under the sun made by man.” But if so, why did Congress in fact make and preserve categorical distinctions among the kinds of patentable man-made things—“processes, machines, manufactures, and compositions of matter”—distinctions that would be unnecessary if “anything under the sun,” so long as of artificial origin, were the sufficient mark of patentable matter—of course, along with novelty, utility, and non-obviousness? And why, the minority rightly asks, would Congress enact separate plant patent laws (in 1930 and again in 1970) to permit patenting of new plant varieties, if Congress understood “manufacture” and “composition of matter” as broadly as the Court majority now claims? Indeed, as the minority again points out, in the 1970 Plant Variety Protection Act Congress had specifically excluded bacteria from patentability under the Act: “Congress has included bacteria within the focus of its legislative concern, but not within the scope of patent protection. . . . Congress, assuming that animate objects as to which it had not specifically legislated could not be patented, excluded bacteria from the set of patentable organisms.”
The Court majority ignored these specific facts about the written statutes. It took its stand instead on what it calls the “broad general language” of the patent laws and on its own construction of the legislative intent: “The subject-matter provisions of the patent law have been cast in broad terms to fulfill the constitutional and statutory goal of promoting ‘the Progress of Science and useful Arts’ with all that means for the social and economic benefits envisioned by Jefferson.” It is an insult to Jefferson to suggest that his friendship for progress made him imprecise and vague as a legislator. He said what he meant and he meant what he said, always careful about his choice of words. Courts would do less mischief if they treated all law and legislatures as if they meant what they in fact explicitly said. The present Court’s love of innovation extends to its reading of law. We must wonder whether such “progressive” jurisprudence is not too high a price for progress.
If patenting and the patent laws do not always serve the public or political good, are they simply good for science? As we shall see, this is best understood as a special case of the question, Is practice always good for theory?, and it ultimately invites us to reconsider the purposes of science and of thought more generally. But nearer at hand are questions about science and money.
The Chakrabarty decision has prompted discussion about possible corrupting tendencies of the profit motive, not so much for the society at large, but, curiously, for the present practice of biological science, especially in universities. For roughly a quarter-century, biomedical research has flourished, largely funded by the federal government and philanthropic foundations, much of it done in universities. Though ultimately interested in the practical benefits, the government, albeit not without frequent prodding by basic scientists, has wisely and patiently supported many outstanding minds in so-called basic research, largely without regard to its immediate utility. Progress over the past three decades is simply staggering. Though competition is keen, and there are well-known cases of secretive and even unscrupulous behavior, on the whole the field has thrived on free and cooperative exchange of information and materials, including strains of microorganisms.
Now that new discoveries and techniques in cell biology and molecular genetics have brought these fields fully into the industrial arena, many are worried that the profit motive will distort, not to say corrupt, scientific practices. Concerns are expressed for the effects on the behavior of scientists, on the balance in fields of research, and on universities. Warnings are heard about an impending restriction of the free flow of information and a rise of secretiveness, deception, and other unsavory conduct, not excluding espionage. With several universities, under threat of rising costs and dwindling financial support, already established entrepreneurs in genetic technology, and many fine scientists entering industry in a variety of capacities, often retaining their academic tenure, some argue that such goings-on violate in principle the spirit, and will in practice threaten the purpose, of the university. Others are concerned that profits will dictate the direction of scientific research, deflecting the scientific mind from going where it will or should.
These are serious and complicated questions which cannot be addressed adequately here. On some matters the concerns seem exaggerated. The rise of industrial chemistry and applied physics has not in itself, it seems to me, corrupted basic research in universities, nor led to undue secrecy or unsavory practices. And in any case, such problems as appear are due more to the large amounts of money involved than to patenting (though the two are not unrelated). For the need to protect profitable discoveries through patent should lead not to secrecy, but to publication (though the anticipation of future patent application does lead many into temporary secrecy, and reports are now increasing of biologists who, looking to protect future patents, have become silent and stingy with new information and materials). Once a patent is granted, for a payment of royalties new information, materials, and techniques are potentially more widely sharable. Moreover, the disdain of many academic biologists for the practical applications of their work can only be regarded as hypocritical, especially considering the hopes of, and their promises to, their public patrons. Academic scientists have for years played upon the public’s utilitarian concerns and always promised and even emphasized the probable long-run practical benefits when seeking congressional support to satisfy their own private curiosity. Science, even university science, is, to some extent, a kept woman, and the question sometimes seems to be only who shall keep her and what is her price. Her virtue and her fruitfulness may not suffer further from wedding herself to industry.
But this is no matter for levity; the stakes are very high. There is reason to be concerned about the growth of the academic-industrial complex, but not because industry is corrupt or corrupting or because there is something reprehensible about utility or even money-making. Rather, one is concerned because one knows that universities exist not only to generate useful discoveries and because one suspects that knowledge for the sake of power and utility is not the whole truth about knowledge, that thought at its best—including scientific thought—seeks truth for its own sake. For these reasons, we can ill afford to be indifferent to the fate and character of university science and to the climate for free and fundamental thought. The remarkable record of American scientists in basic discovery in biology is a credit to public support and especially to the university setting, with its great freedom of inquiry and its relative immunity to demands for prompt success or useful results. Here, fundamental thought is frequently stimulated by the collegiality of scholars in diverse areas of inquiry, scholars who are also teachers, somehow still heirs to a great tradition that often gave more than lip-service to the disinterested pursuit of the truth. Professors are often pushed to fundamentals also by their undergraduate students, who are not yet sufficiently “educated” to know that there are some questions one should avoid asking. One wonders how theory will fare if the universities are increasingly drawn to practice. One wonders whether the search for the truth will flourish, should the universities and their scientists try to be increasingly relevant and useful.
Though largely unanticipated by the American Founders, the rise of universities and of science within them has added a new dimension to the original relation between science and politics, a dimension that acquisitive, democratic, and egalitarian regimes very much require. The point was made brilliantly by Tocqueville, in his Democracy in America, in the chapter, “Why the Americans are More Concerned with the Applications than with the Theory of Science”:
The higher sciences or the higher parts of all sciences require meditation above everything else. But nothing is less conducive to meditation than the setup of democratic society. . . . Everyone is on the move, some in quest of power, others of gain. In the midst of this universal tumult, this incessant conflict of jarring interests, this endless chase for wealth, where is one to find the calm for the profound researches of the intellect? How can the mind dwell on any single subject when all around is on the move and when one is himself swept and buffeted along by the whirling current which carries all before it?
Not only is meditation difficult for men in democracies, but they naturally attach little importance to it. . . . In democratic countries when almost everyone is engaged in active life, the darting speed of a quick, superficial mind is at a premium, while slow deep thought is excessively undervalued. . . .
Most of the people in these [democratic] nations are extremely eager in the pursuit of immediate material pleasures and are always discontented with the position they occupy and always free to leave it. They think about nothing but ways of changing their lot and bettering it. For people in this frame of mind every new way of getting wealth more quickly, every machine which lessens work, every means of diminishing the costs of production, every invention which makes pleasures easier or greater, seems the most magnificent accomplishment of the human mind. It is chiefly from this line of approach that democratic peoples come to study sciences, to understand them, and to value them. In aristocratic ages the chief function of science is to give pleasure to the mind, but in democratic ages to the body.
. . . It is easy to see how, in a society organized on these lines, men’s minds are unconsciously led to neglect theory and devote an unparalleled amount of energy to the applications of science. . . .
On the strength of this analysis, Tocqueville gives this advice:
If those who are called on to direct the affairs of nations in our time can clearly and in good time understand these new tendencies which will soon be irresistible, they will see that, granted enlightenment and liberty, people living in a democratic age are quite certain to bring the industrial side of science to perfection anyhow and that henceforth the whole energy of organized society should be directed to the support of higher studies and the fostering of a passion for pure science.
Nowadays the need is to keep men interested in theory. They will look after the practical side of things for themselves. So, instead of perpetually concentrating attention on the minute examination of secondary effects, it is good to distract it therefrom sometimes and lift it to the contemplation of first causes. . . .
We therefore should not console ourselves by thinking that the barbarians are still a long way off. Some people may let the torch be snatched from their hands, but others stamp it out themselves.
I do not wish to exaggerate the dangers to pure science or to universities from the new privilege to patent microorganisms, hybridomas, and products of genetic engineering. Nor, unfortunately, are universities or academic scientists today the embodiment of thoughtfulness and disinterested inquiry that Tocqueville rightly argues we so urgently need. But the climate is not being helped by the eruption among scientists and administrators of what must frankly be called greed, nor is it likely to be improved by the continuing growth of the academic-industrial complex. When the president of Harvard University devotes his entire annual address to his Board of Overseers to the theme of “technology transfer”—the translation of scientific knowledge into useful products and processes—and argues that it must become a central task of the university, one has reason to believe that big winds may soon blow the academy off its present course.
American universities are, for all their faults, precious and precarious institutions. In fact, the present balance within them between the busy and the deliberate, the clever and the wise, the useful and the true, is already tipped so far toward the former that we must be cautious about all further changes that tend to diminish the latter. It should now be evident that my concern for universities and for theory and fundamental thought goes beyond my concern for so-called pure science. The earlier discussion should have made clear the importance of careful and thorough thinking about the relation between science and the American polity and about the implications of our new forays into genetic engineering. Indeed, especially now, when the goal and direction of the scientific project for the mastery of nature seem less clear than ever, and when, despite this confusion about the end, the means are being amassed to affect directly and deliberately all forms of life on the planet, we stand in urgent need of the far-seeing and high-minded reflection about science, ethics, and society which the patent laws, industry, and even such fine institutions as the National Institutes of Health cannot encourage or foster.
But theory is urgent not only because basic research pays dividends in applications, nor even because we need theory to think about whither we are tending. Theory is urgent also because it is in itself elevating and liberating. Thoughtfulness, speculation, genuine inquiry beyond mere problem-solving, philosophical reflection on our condition and our place in the world, in short, liberal learning and liberal education—and not only the advancement of Baconian learning—are necessary for a truly free people. Liberty, secured by the progress of science and useful arts, would be little blessed if our minds become enslaved in and to the process of serving our bodies.
Once again, the task is to restore the balance, to give weight to the weaker side. And once again, it is difficult to see how and by whom the countervailing forces for liberal learning and philosophic reflection are to be generated and supported, especially now when economic troubles aggravate the natural tendency of modern thought to serve utility.
The task is beyond the competence both of our science, not least because of its anti-speculative self-definition, and of our law. No one would say that the practice or encouragement of philosophical reflection is the business of our courts. But, at the same time, it is sad when the Supreme Court, the closest approximation in the American polity to the rule of thoughtful reason, promulgates ill-considered opinions about weighty matters. For in justifying its decisions, the Court functions also as a teacher, helping to form what become our ruling opinions. Indeed, the opinions of the Court are often more important for what they teach than for what they decide. We take one last look at the Chakrabarty case, with a view to the Court as teacher.
What has the Chakrabarty decision accomplished? A rather modest gain for Chakrabarty, a rather sizable boost for the burgeoning hybridoma and genetic-technology industry, but—by means of negative example—a most important lesson, if only we can learn it, about how close we have come in our thinking, if not yet in our practice, to overstepping the sensible limits of the project for mastery and possession of nature. This project makes sense only if we fully understand and accept the limited meanings of “mastery” and “possession” and only if we appreciate the nature of living nature and our place within it. On these deep matters, the Court was here a teacher of shallowness.
Consider first the implicit teaching of our wise men, that a living organism is no more than a composition of matter, no different from the latest perfume or insecticide. What about other living organisms—goldfish, bald eagles, horses? What about human beings? Just compositions of matter? Here arise deep philosophical questions to which the Court has given little thought; but in its eagerness to serve innovation, it has, perhaps unwittingly, become the teacher of philosophical materialism—the view that all forms are but accidents of underlying matter, that matter is what truly is— and therewith, the teacher also of the homogeneity of the given world, and, at least in principle, of the absence of any special dignity in all of living nature, our own included.
A similar teaching is also implicit in the enlargement of the sphere of what may be owned and possessed. By the arguments of the Court, it now seems that anything under the sun made of tangible stuff falls under “composition of matter,” and is therefore patentable, so long as its origin is in human art. Nothing in the Court’s opinion would permit one to argue that the “inventor” of the mule, were the mule to be a new invention, could not claim patentability. If the Chinese succeed in their present attempts with artificial insemination to cross-breed a human being with a chimpanzee, producing the novel and useful “humanzee,” it would be arguably patentable matter—if the Court sticks to its interpretation and Congress does not act. These examples may be farfetched but they serve to illustrate the point: there is something obviously and immediately disquieting about the human ownership of an entire living species, even one brought into being with the partial aid of art.
This bizarre new prospect, that one man could own—albeit for a limited time—an entire species, does indeed invite us to rethink the reasons why we permit ownership of any animals. There is a sense in which the former is but the logical extension of the latter, both instances of the possession and exploitation of living nature for human needs and wants, and this logical extension as limiting case might in fact illuminate problematic aspects of our age-old and familiar practice of domesticating plants and animals. Still, there are significant differences which, though they do not fully explain our repugnance at the notion of owning a species, suggest that our disquiet is not due just to the novelty and audacity of the idea.
If usefulness justifies ownership, it also defines its justifiable limits. Ownership of animals, even of large herds, presupposes the usefulness of each animal to the owner. Even when animals are kept for their beauty or companionship, possession is reasonable only on a human scale, that is, on a scale that permits individual appreciation or relation. We do not endorse possession for the sake of possession: the thought of a man buying up and collecting all the world’s camels or giraffes or horses is repulsive, though nothing in the law prevents it. To own more of living nature than what one needs for one’s own life and livelihood is hard to justify. Even harder is it to justify such monopoly when the sole purpose is to exclude others from similar benefits.
Ownership also carries with it responsibility, not only for the living beings but also to other human beings for what the animals inadvertently do. Indeed, living things, unlike true artifacts, have a life of their own and ways that we cannot simply predict or control:
And if one man’s ox hurt another’s, so that it dieth; then they shall sell the live ox, and divide the price of it; and the dead also they shall divide. Or if it be known that the ox was wont to gore in time past, and its owner hath not kept it in; he shall surely pay ox for ox, and the dead beast shall be his own. (Exodus 21: 35-36)
Can one exercise responsibility for an entire species, especially species that reproduce prodigiously and are hard to confine? If one of Chakrabarty’s bacteria escaped from his laboratory, can he be held responsible for the mischief it causes? If Chakrabarty’s bacteria find their way into an oil well or an oil-storage tank, shall he pay drop for drop? For they were wont to gore in time past and the owner hath not kept them in. And (while thinking about fugitive bacteria) if one of Chakrabarty’s technicians going on vacation inadvertently carries—on his skin or clothing or in his digestive tract—one of the microbes from its laboratory confinement in Illinois to freedom in Missouri where it becomes fruitful and multiplies, must all the billions of progeny be returned to Illinois? Will the Supreme Court, in upholding Chakrabarty’s patent claims of ownership, write a new Dred-Scott decision?
Be this as it may, the implicit teaching about ownership of life in the present Supreme Court decision is indeed problematic. It is one thing to own a mule; it is another to own mule.3 Admittedly, bacteria are far away from mules. But the principles invoked, the reasoning, and the stance toward nature go all the way to mules, and beyond.
What is the principled limit to this beginning extension of the domain of private ownership and dominion over living nature? Is it not clear, if life is a continuum, that there are no visible or clear limits, once we admit living species under the principle of ownership? The principle used in Chakrabarty says that there is nothing in the nature of a being, no, not even in the human patenter himself, that makes him immune to being patented: not what he is, but only the “accident” of his non-man-made origin renders man himself a non-patentable organism. If a genetically engineered organism may be owned because it was genetically engineered, what would we conclude about a genetically altered or engineered human being? To be sure, in general it makes sense to allow people to own what they have made, because they have artfully made it. But to respect art without respect for life is finally self-contradictory. For human art depends on the human artificer, whose inventive mind depends on his living body, not only to sustain it that he might practice its cleverness, but also because the ends of his artfulness emerge from the inner needs and aspirations of his embodied life.
Finally, the exalted and mastering status of human art claims too much and too little for itself. It claims too much because it ignores that art can only put together or alter what natural powers beyond human control will allow. In the present case, our inventor even had nature’s active assistance; for it is not strictly true, as the Court claims, that “his discovery is not nature’s handiwork, but his own.” Chakrabarty did not himself create the new bacterium. Rather, he played the matchmaker for a shotgun wedding and the selector of its progeny, while the living organisms did the work. He mixed together plasmids (carrying genes for metabolizing hydrocarbons) produced by and isolated from certain oil-degrading bacterial species and incubated them with the hardier Pseudomonas species, which bacteria all by themselves incorporated the plasmids. By selecting conditions that would support growth only of the plasmid-containing Pseudomonas hybrid, Chakrabarty obtained “his” novel strain. Though the process was—in many senses—“creative” and “his own,” the novel organism was not his creature.
Even in true compositions of matter, that is, when chemicals are placed together to produce a new mixture or compound, nature is commanded only as she is obeyed. The potentialities of given matter may be exploited, but they cannot be artfully created. The laws of nature permit prediction and control of phenomena, but they too are not of our making and cannot be transgressed. One might say, what Nature’s God keeps asunder, no man can put together. Man’s ability to change nature is, in principle and in practice, always consistent with and limited by nature’s unchanging ground.
Ironically, in its pride, human innovativeness also respects itself too little, because it lacks self-understanding. It fails to appreciate its source in the permanent power of mind, given to human beings but not of their own making. Our inventiveness is not our invention; neither are the truths it discovers.
The Court acknowledges that “Einstein could not patent his celebrated law that E=mc²; nor could Newton have patented the law of gravity.” The reason given is curious: “Such discoveries are manifestations of . . . nature, free to all men and reserved exclusively to none.” The Court fails to appreciate the deeper reason why a truth cannot be patented. Once it is published, it is sharable. To know it is to make it “your own.” But truth is “your own” in a very special way, unlike your other “possessions.” The greatest thinkers have understood that truths are neither private nor property, that they come unbidden to mind, mysteriously, and that insight is neither at one’s disposal nor of one’s own making. Homer, the greatest of the makers, assigns credit to the Muse. Finally, the claim of “intellectual property” is unfounded, even for “inventions.”
In the ever-changing being that is given to living organisms, the two poles of natural permanence—mobile matter and sensitive awareness, culminating in mind—are bound together. In human beings, living nature at last becomes conscious of itself. If we are sober in our practice and mindful in our thought, it is given to us human beings to learn our place in the natural whole and to discover something of its distinctive beauty and mysterious ground. Without such self-knowledge, the project for mastery and possession of nature is a Faustian bargain. Reacquiring a respect for our relatives, the ever-changing living forms, could regain for us a much needed recognition and appreciation of the natural and unchanging source of all change.
1 The Federalist's explanation and defense of this provision, to which we shall refer again, comprises the following brief paragraph:
The utility of this power will scarcely be questioned. The copyright of authors has been solemnly adjudged in Great Britain to be a right of common law. The right to useful inventions seems with equal reason to belong to the inventors. The public good fully coincides in both cases with the claims of individuals. The States cannot separately make effectual provision for either of the cases, and most of them have anticipated the decision of this point by laws passed at the instance of Congress.
The fourth sentence seems to be a conclusion from the first three. Since the second and third deal with “claims of individuals,” we infer that the first considers “the public good.” It is worth noting that the extension of the common-law teaching on copyright to cover “the right to useful inventions” is treated here as an American innovation, albeit one that can be adjudged “with equal reason.”
2 The court opinion quotes at length from the still-authoritative doctrine formulated by Albert Henry Walker in his 1880s textbook of patent law; the court's own additions to his text are noted in parentheses:
An important question, relevant to utility in this aspect, may hereafter arise and call for judicial decision. It is perhaps true, for example, that the invention of Colt's revolver was injurious to the morals, and injurious to the health, and injurious to the good order of society. That instrument of death may have been injurious to morals, in tending to tempt and to promote the gratification of private revenge. It may have been injurious to health, in that it is very liable to accidental discharge, and thereby to cause wounds, and even homicide. It may also have been injurious to good order, especially in the newer parts of the country, because it facilitates and increases private warfare among frontiersmen. On the other hand, the revolver, by furnishing a ready means of self-defense, may sometimes have promoted morals and health and good order. By what test is utility to be determined in such cases? Is it to be done by balancing the good functions with the evil functions? Or is everything useful within the meaning of the law, if it is used (or is designed and adapted to be used) to accomplish a good result, though in fact it is oftener used (or is as well, or even better, adapted to be used) to accomplish a bad one? Or is utility negatived by the mere fact that the thing in question is sometimes injurious to morals, or to health, or to good order? The third hypothesis cannot stand, because if it could, it would be fatal to patents for steam engines, dynamos, electric railroads, and indeed many of the noblest inventions of the nineteenth century. The first hypothesis cannot stand, because if it could, it would make the validity of the patents to depend on a question of fact to which it would often be impossible to give a reliable answer. The second hypothesis is the only one which is consistent with the reason of the case, and with the practical construction which the courts have given to the statutory requirement of utility. (Emphasis added.)
Does the doctrine of utility enunciated in the emphasized passage truly serve the public interest? Are Walker's three options exhaustive? And is not even the first hypothesis, as stated, a plausible principle for at least those cases in which it would not be impossible to give a reliable answer to the balance between benefits and harms to the general welfare?
3 The argument should cause us to reconsider the wisdom of permitting ownership even of plant species, made possible by the plant patent laws of 1930 and 1970.
Exactly one week later, a Star Wars cantina of the American extremist right featuring everyone from David Duke to a white-nationalist Twitter personality named “Baked Alaska” gathered in Charlottesville, Virginia, to protest the removal of a statue honoring the Confederate general Robert E. Lee. A video promoting the gathering railed against “the international Jewish system, the capitalist system, and the forces of globalism.” Amid sporadic street battles between far-right and “antifa” (anti-fascist) activists, a neo-Nazi drove a car into a crowd of peaceful counterprotesters, killing a 32-year-old woman.
Here, in the time span of just seven days, was the dual nature of contemporary American anti-Semitism laid bare. The most glaring difference between these two displays of hate lies not so much in their substance—both adhere to similar conspiracy theories articulating nefarious, world-altering Jewish power—as in their self-characterization. The animosity expressed toward Jews in Charlottesville was open and unambiguous, with demonstrators proudly confessing their hatred in the familiar language of Nazis and European fascists.
The socialists in Chicago, meanwhile, though calling for a literal second Holocaust on the shores of the Mediterranean, would fervently and indignantly deny they are anti-Semitic. On the contrary, they claim the mantle of “anti-fascism” and insist that this identity naturally makes them allies of the Jewish people. As for those Jews who might oppose their often violent tactics, they are at best bystanders to fascism, at worst collaborators in “white supremacy.”
So, whereas white nationalists explicitly embrace a tribalism that excludes Jews regardless of their skin color, the progressives of the DSA and the broader “woke” community conceive of themselves as universalists—though their universalism is one that conspicuously excludes the national longings of Jews and Jews alone. And whereas the extreme right-wingers are sincere in their anti-Semitism, the socialists who called for the elimination of Israel are just as sincere in their belief that they are not anti-Semitic, even though anti-Semitism is the inevitable consequence of their rhetoric and worldview.
The sheer bluntness of far-right anti-Semitism makes it easier to identify and stigmatize as beyond the pale; individuals like David Duke and the hosts of the “Daily Shoah” podcast make no pretense of residing within the mainstream of American political debate. But the humanist appeals of the far left, whose every libel against the Jewish state is paired with a righteous invocation of “justice” for the Palestinian people, invariably trigger repetitive and esoteric debates over whether this or that article, allusion, allegory, statement, policy, or political initiative is anti-Semitic or just critical of Israel. What this difference in self-definition means is that there is rarely, if ever, any argument about the substantive nature of right-wing anti-Semitism (despicable, reprehensible, wicked, choose your adjective), while the very existence of left-wing anti-Semitism is widely doubted and almost always indignantly denied by those accused of practicing it.

To be sure, these recent manifestations of anti-Semitism occur on the left and right extremes. And statistics tell a rather comforting story about the state of anti-Semitism in America. Since the Anti-Defamation League began tracking it in 1979, anti-Jewish hate crime is at an historic low; indeed, it has been declining since a recent peak of 1,554 incidents in 2006. America, for the most part, remains a very philo-Semitic country, one of the safest, most welcoming countries for Jews on earth. A recent Pew poll found Jews to be the most admired religious group in the United States.1 If American Jews have anything to dread, it’s less anti-Semitism than the loss of Jewish peoplehood through assimilation, that is, being “loved to death” by Gentiles.2 Few American Jews can say that anti-Semitism has a seriously deleterious impact on their life, that it has denied them educational or employment opportunities, or that they fear for their own physical safety or that of their families because of their Jewish identity.
The question is whether the extremes are beginning to move in on the center. In the past year alone, the DSA’s rolls tripled from 8,000 to 25,000 dues-paying members, who have established a conspicuous presence on social media reaching far beyond what their relatively minuscule numbers would suggest. The DSA has been the subject of widespread media coverage, ranging from the curious to the adulatory. The white supremacists, meanwhile, found themselves understandably heartened by the strange difficulty President Donald Trump had in disavowing them. He claimed, in fact, that there had been some “very fine people” among their ranks. “Thank you President Trump for your honesty & courage to tell the truth about #Charlottesville,” tweeted David Duke, while the white-nationalist Richard Spencer said, “I’m proud of him for speaking the truth.”
Indeed, among the more troubling aspects of our highly troubling political predicament—and one that, from a Jewish perspective, provokes not a small amount of angst—is that so many ideas, individuals, and movements that could once reliably be categorized as “extreme,” in the literal sense of articulating the views of a very small minority, are no longer so easily dismissed. The DSA is part of a much broader revival of the socialist idea in America, as exemplified by the growing readership of journals like Jacobin and Current Affairs, the popularity of the leftist Chapo Trap House podcast, and the insurgent presidential campaign of self-described democratic socialist Bernie Sanders—who, according to a Harvard-Harris poll, is now the most popular politician in the United States. Since 2015, the average age of a DSA member dropped from 64 to 30, and a 2016 Harvard poll found a majority of Millennials do not support capitalism.
Meanwhile, the Republican Party of Donald Trump offers “nativism and culture war wedges without the Reaganomics,” according to Nicholas Grossman, a lecturer in political science at the University of Illinois. A party that was once reliably internationalist and assertive against Russian aggression now supports a president who often preaches isolationism and never has even a mildly critical thing to say about the KGB thug ruling over the world’s largest nuclear arsenal.
Like ripping the bandage off an ugly and oozing wound, Trump’s presidential campaign unleashed a bevy of unpleasant social forces that at the very least have an indirect bearing on Jewish welfare. The most unpleasant of those forces has been the so-called alternative right, or “alt-right,” a highly race-conscious political movement whose adherents are divided on the “JQ” (Jewish Question). Throughout last year’s campaign, Jewish journalists (this author included) were hit with a barrage of luridly anti-Semitic Twitter messages from self-described members of the alt-right. The tamer missives instructed us to leave America for Israel; others superimposed our faces onto the bodies of concentration camp victims.3
I do not believe Donald Trump is himself an anti-Semite, if only because anti-Semitism is mainly a preoccupation—as distinct from a prejudice—and Trump is too narcissistic to indulge any preoccupation other than himself. And there is no evidence to suggest that he subscribes to the anti-Semitic conspiracy theories favored by his alt-right supporters. But his casual resort to populism, nativism, and conspiracy theory creates a narrative environment highly favorable to anti-Semites.
Nativism, of which Trump was an early and active practitioner, is never good for the Jews, no matter how affluent or comfortable they may be and regardless of whether they are even the target of its particular wrath. Racial divisions, which by any measure have grown significantly worse in the year since Trump was elected, hurt all Americans, obviously, but they have a distinct impact on Jews, who are left in a precarious position as racial identities calcify. Not only are the newly emboldened white supremacists of the alt-right invariably anti-Semites, but in the increasingly racialist taxonomy of the progressive left—which more and more mainstream liberals are beginning to parrot—Jews are considered possessors of “white privilege” and, thus, members of the class to be divested of its “power” once the revolution comes. In the racially stratified society that both extremes envision, Jews lose out, simultaneously perceived (by the far right) as wily allies and manipulators of ethnic minorities in a plot to mongrelize America and (by the far left) as opportunistic “Zionists” ingratiating themselves with a racist and exploitative “white” establishment that keeps minorities down.

This politics is bad for all Americans, and for Jewish Americans in particular. More and more, one sees the racialized language of the American left being applied to the Middle East conflict, wherein Israel (which is, in point of fact, one of the most racially diverse countries in the world) is referred to as a “white supremacist” state no different from that of apartheid South Africa.
In a book just published by MIT Press, ornamented with a foreword by Cornel West and entitled “Whites, Jews, and Us,” a French-Algerian political activist named Houria Bouteldja asks, “What can we offer white people in exchange for their decline and for the wars that will ensue?” Drawing the Jews into her race war, Bouteldja, according to the book’s jacket copy, “challenges widespread assumptions among the left in the United States and Europe—that anti-Semitism plays any role in Arab–Israeli conflicts, for example, or that philo-Semitism doesn’t in itself embody an oppressive position.” Jew-hatred is virtuous, and appreciation of the Jews is racism.
Few political activists of late have done more to racialize the Arab–Israeli conflict—and, through insidious extension of the American racial hierarchy, designate American Jews as oppressors—than the Brooklyn-born Arab activist Linda Sarsour. An organizer of the Women’s March, Sarsour has seamlessly insinuated herself into a variety of high-profile progressive campaigns, a somewhat incongruous position given her reactionary views on topics like women’s rights in Saudi Arabia. (“10 weeks of PAID maternity leave in Saudi Arabia,” she tweets. “Yes PAID. And ur worrying about women driving. Puts us to shame.”) Sarsour, who is of Palestinian descent, claims that one cannot simultaneously be a feminist and a Zionist, when it is the exact opposite that is true: No genuine believer in female equality can deny the right of Israel to exist. The Jewish state respects the rights of women more than do any of its neighbors. In an April 2017 interview, Sarsour said that she had become a high-school teacher for the purpose of “inspiring young people of color like me.” Just three months earlier, however, in a video for Vox, Sarsour confessed, “When I wasn’t wearing hijab I was just some ordinary white girl from New York City.” The donning of Muslim garb, then, confers a racial caste of “color,” which in turn confers virtue, which in turn confers a claim on political power.
This attempt to describe the Israeli–Arab conflict in American racial vernacular marks Jews as white (a perverse mirror of Nazi biological racism) and thus implicates them as beneficiaries of “structural racism,” “white privilege,” and the whole litany of benefits afforded to white people at birth in the form of—to use Ta-Nehisi Coates’s abstruse phrase—the “glowing amulet” of “whiteness.” “It’s time to admit that Arthur Balfour was a white supremacist and an anti-Semite,” reads the headline of a recent piece in—where else?—the Forward, incriminating Jewish nationalism as uniquely perfidious by dint of the fact that, like most men of his time, a (non-Jewish) British official who endorsed the Zionist idea a century ago held views that would today be considered racist. Reading figures like Bouteldja and Sarsour brings to mind the French philosopher Pascal Bruckner’s observation that “the racialization of the world has to be the most unexpected result of the antidiscrimination battle of the last half-century; it has ensured that the battle continuously re-creates the curse from which it is trying to break free.”
If Jews are white, and if white people—as a group—enjoy tangible and enduring advantages over everyone else, then this racially essentialist rhetoric ends up with Jews accused of abetting white supremacy, if not being white supremacists themselves. This is one of the overlooked ways in which the term “white supremacy” has become devoid of meaning in the age of Donald Trump, with everyone and everything from David Duke to James Comey to the American Civil Liberties Union accused of upholding it. Take the case of Ben Shapiro, the Jewish conservative polemicist. At the start of the school year, Shapiro was scheduled to give a talk at UC Berkeley, his alma mater. In advance, various left-wing groups put out a call for protest in which they labeled Shapiro—an Orthodox Jew—a “fascist thug” and “white supremacist.” An inconvenient fact ignored by Shapiro’s detractors is that, according to the ADL, he was the top target of online abuse from actual white supremacists during the 2016 presidential election. (Berkeley ultimately had to spend $600,000 protecting the event from leftist rioters.)
A more pernicious form of this discourse is practiced by left-wing writers who, insincerely claiming to have the interests of Jews at heart, scold them and their communal organizations for not doing enough in the fight against anti-Semitism. Criticizing Jews for not fully signing up with the “Resistance” (which in form and function is fast becoming the 21st-century version of the interwar Popular Front), they then slyly indict Jews for being complicit in not only their own victimization but that of the entire country at the hands of Donald Trump. The first and foremost practitioner of this bullying and rather artful form of anti-Semitism is Jeet Heer, a Canadian comic-book critic who has achieved some repute on the American left due to his frenetic Twitter activity and availability when the New Republic needed to replace its staff that had quit en masse in 2014. Last year, when Heer came across a video of a Donald Trump supporter chanting “JEW-S-A” at a rally, he declared on Twitter: “We really need to see more comment from official Jewish groups like ADL on way Trump campaign has energized anti-Semitism.”
But of course “Jewish groups” have had plenty to say about the anti-Semitism expressed by some Trump supporters—too much, in the view of their critics. Just two weeks earlier, the ADL had released a report analyzing over 2 million anti-Semitic tweets targeting Jewish journalists over the previous year. This wasn’t the first time the ADL raised its voice against Trump and the alt-right movement he emboldened, nor would it be the last. Indeed, two minutes’ worth of Googling would have shown Heer that his pronouncements about organizational Jewish apathy were wholly without foundation.4
It’s tempting to dismiss Heer’s observation as mere “concern trolling,” a form of Internet discourse characterized by insincere expressions of worry. But what he did was nastier. Immediately presented with evidence for the inaccuracy of his claims, he sneered back with a bit of wisdom from the Jewish sage Hillel the Elder, yet cast as mild threat: “If I am not for myself, who will be for me?” In other words: How can you Jews expect anyone to care about your kind if you don’t sufficiently oppose—as arbitrarily judged by moi, Jeet Heer—Donald Trump?
If this sort of critique were coming from a Jewish donor upset that his preferred organization wasn’t doing enough to combat anti-Semitism, or a Gentile with a proven record of concern for Jewish causes, it wouldn’t have turned the stomach. What made Heer’s interjection revolting is that, to put it mildly, he’s not exactly known for being sympathetic toward the Jewish plight. In 2015, Heer put his name to a petition calling upon an international comic-book festival to drop the Israeli company SodaStream as a co-sponsor because the Jewish state is “built on the mass ethnic cleansing of Palestinian communities and sustained through racism and discrimination.” Heer’s name appeared alongside that of Carlos Latuff, a Brazilian cartoonist who won second place in the Iranian government’s 2006 International Holocaust Cartoon Competition. For his writings on Israel, Heer has been praised as being “very good on the conflict” by none other than Philip Weiss, proprietor of the anti-Semitic hate site Mondoweiss.
In light of this track record, Heer’s newfound concern about anti-Semitism appeared rather dubious. Indeed, the bizarre way in which he expressed this concern—as, ultimately, a critique not of anti-Semitism per se but of the country’s foremost Jewish civil-rights organization—suggests he cares about anti-Semitism insofar as its existence can be used as a weapon to beat his political adversaries. And since the incorrigibly Zionist American Jewish establishment ranks high on that list (just below that of Donald Trump and his supporters), Heer found a way to blame it for anti-Semitism. And what does that tell you? It tells you that—presented with a 16-second video of a man chanting “JEW-S-A” at a Donald Trump rally—Heer’s first impulse was to condemn not the anti-Semite but the Jews.
Heer isn’t the only leftist (or New Republic writer) to assume this rhetorical cudgel. In a piece entitled “The Dismal Failure of Jewish Groups to Confront Trump,” one Stephen Lurie attacked the ADL for advising its members to stay away from the Charlottesville “Unite the Right Rally” and let police handle any provocations from neo-Nazis. “We do not have a Jewish organizational home for the fight against fascism,” he quotes a far-left Jewish activist, who apparently thinks that we live in the Weimar Republic and not a stable democracy in which law-enforcement officers and not the balaclava-wearing thugs of antifa maintain the peace. Like Jewish Communists of yore, Lurie wants to bully Jews into abandoning liberalism for the extreme left, under the pretext that mainstream organizations just won’t cut it in the fight against “white supremacy.” Indeed, Lurie writes, some “Jewish institutions and power players…have defended and enabled white supremacy.” The main group he fingers with this outrageous slander is the Republican Jewish Coalition, the implication being that this explicitly partisan Republican organization’s discrete support for the Republican president “enables white supremacy.”
It is impossible to imagine Heer, Lurie, or other progressive writers similarly taking the NAACP to task for its perceived lack of concern about racism, or castigating the Human Rights Campaign for insufficiently combating homophobia. No, it is only the cowardice of Jews that is condemned—condemned for supposedly ignoring a form of bigotry that, when expressed on the left, these writers themselves ignore or even defend. The logical gymnastics of these two New Republic writers are what happen when, at base, one fundamentally resents Jews: You end up blaming them for anti-Semitism. Blaming Jews for not caring sufficiently about anti-Semitism is emotionally the same as claiming that Jews are to blame for anti-Semitism. Both signal an envy and resentment of Jews predicated upon a belief that they have some kind of authority that the claimant doesn’t and therefore needs to undermine.

This past election, one could not help but notice how the media seemingly discovered anti-Semitism when it emanated from the right, and then only when its targets were Jews on the left. It was enough to make one ask where they had been when left-wing anti-Semitism had been a more serious and pervasive problem. From at least 1996 (the year Pat Buchanan made his last serious attempt at securing the GOP presidential nomination) to 2016 (when the Republican presidential nominee did more to earn the support of white supremacists and neo-Nazis than any of his predecessors), anti-Semitism was primarily a preserve of the American left. In that two-decade period—spanning the collapse of the Oslo Accords and rise of the Second Intifada to the rancorous debate over the Iraq War and obsession with “neocons” to the presidency of Barack Obama and the 2015 Iran nuclear deal—anti-Israel attitudes and anti-Semitic conspiracy made unprecedented inroads into respectable precincts of the American academy, the liberal intelligentsia, and the Democratic Party.
The main form that left-wing anti-Semitism takes in the United States today is an unhinged obsession with the wrongs, real or perceived, of the state of Israel, and the belief that its Jewish supporters in the United States exercise a nefarious control over the levers of American foreign policy. In this respect, contemporary left-wing anti-Semitism is not altogether different from that of the far right, though it usually lacks the biological component deeming Jews a distinct and inferior race. (Consider the left-wing anti-Semite’s eagerness to identify and promote Jewish “dissidents” who can attest to their co-religionists’ craftiness and deceit.) The unholy synergy of left and right anti-Semitism was recently epitomized by former CIA agent and liberal stalwart Valerie Plame’s hearty endorsement, on Twitter, of an article written for an extreme right-wing website by a fellow former CIA officer named Philip Giraldi: “America’s Jews Are Driving America’s Wars.” Plame eventually apologized for sharing the article with her 50,000 followers, but not before insisting that “many neocon hawks are Jewish” and that “just FYI, I am of Jewish descent.”
The main forum in which left-wing anti-Semitism appears is academia. According to the ADL, anti-Semitic incidents on college campuses doubled from 2014 to 2015, the latest year for which data are available. Writing in National Affairs, Ruth Wisse observes that “not since the war in Vietnam has there been a campus crusade as dynamic as the movement of Boycott, Divestment, and Sanctions against Israel.” Every academic year, a seeming surfeit of controversies erupts on campuses across the country involving the harassment of pro-Israel students and organizations, the disruption of events involving Israeli speakers (even ones who identify as left-wing), and blatantly anti-Semitic outbursts by professors and student activists. There was the Oberlin professor of rhetoric, Joy Karega, who posted statements on social media claiming that Israel had created ISIS and had orchestrated the murderous attack on Charlie Hebdo in Paris. There is the Rutgers associate professor of women’s and gender studies, Jasbir Puar, who popularized the ludicrous term “pinkwashing” to defame Israel’s LGBT acceptance as a massive conspiracy to obscure its oppression of Palestinians. Her latest book, The Right to Maim, academically peer-reviewed and published by Duke University Press, attacks Israel for sparing the lives of Palestinian civilians, accusing its military of “shooting to maim rather than to kill” so that it may keep “Palestinian populations as perpetually debilitated, and yet alive, in order to control them.”
One could go on and on about such affronts not only to Jews and supporters of Israel but to common sense, basic justice, and anyone who believes in the prudent use of taxpayer dollars. That several organizations exist solely for the purpose of monitoring anti-Israel and anti-Semitic agitation on American campuses attests to the pervasiveness of the problem. But it’s unclear just how representative of the college experience these isolated examples really are. A 2017 Stanford study purporting to examine the issue interviewed 66 Jewish students at five California campuses noted for “being particularly fertile for anti-Semitism and for having an active presence of student groups critical of Israel and Zionism.” It concluded that “contrary to widely shared impressions, we found a picture of campus life that is neither threatening nor alarmist…students reported feeling comfortable on their campuses, and, more specifically, comfortable as Jews on their campuses.” To the extent that Jewish students do feel pressured, the report attempted to spread the blame around, indicting pro-Israel activists alongside those agitating against it. “[Survey respondents] fear that entering political debate, especially when they feel the social pressures of both Jewish and non-Jewish activist communities, will carry social costs that they are unwilling to bear.”
Yet by its own admission, the report “only engaged students who were either unengaged or minimally engaged in organized Jewish life on their campuses.” Researchers made a study of anti-Semitism, then, by interviewing the Jews least likely to experience it. “Most people don’t really think I’m Jewish because I look very Latina…it doesn’t come up in conversation,” one such student said in an interview. Ultimately, the report revealed more about the attitudes of unengaged (and, thus, uninformed) Jews than about the state of anti-Semitism on college campuses. That may certainly be useful in its own right as a means of understanding how unaffiliated Jews view debates over Israel, but it is not an accurate marker of developments on college campuses more broadly.
A more extensive 2016 Brandeis study of Jewish students at 50 schools found 34 percent agreed at least “somewhat” that their campus has a hostile environment toward Israel. Yet the variation was wide; at some schools, only 3 percent agreed, while at others, 70 percent did. Only 15 percent reported a hostile environment toward Jews. Anti-Semitism was found to be more prevalent at public universities than private ones, with the determinative factor being the presence of a Students for Justice in Palestine chapter on campus. Important context often lost in conversations about campus anti-Semitism, and reassuring to those concerned about it, is that it is simply not the most important issue roiling higher education. “At most schools,” the report found, “fewer than 10 percent of Jewish students listed issues pertaining to either Jews or Israel as among the most pressing on campus.”

For generations, American Jews have depended on anti-Semitism’s remaining within a moral quarantine, a cordon sanitaire, and America has reliably kept this societal virus contained. While there are no major signs that this barricade is breaking down in the immediate future, there are worrying indications on the political horizon.
At the international level, the declining global position of the United States—both in terms of its hard military and economic power relative to rising challengers and its standing as a credible beacon of liberal democratic values—does not portend well for Jews, American or otherwise. American leadership of the free world has, in addition to ensuring Israel’s security, underwritten the postwar liberal world order. And it is the constituent members of that order, the liberal democratic states, that have served as the best guarantor of the Jews’ life and safety over their long history. Were America’s global leadership role to diminish or evaporate, it would not only facilitate the rise of authoritarian states like Iran and terrorist movements such as al-Qaeda, committed to the destruction of Israel and the murder of Jews, but inexorably lead to a worldwide rollback of liberal democracy, an outcome that would inevitably redound to the detriment of Jews.
Domestically, political polarization and the collapse of public trust in every American institution save the military are demolishing what little confidence Americans have left in their system and governing elites, not to mention preparing the ground for some ominous political scenarios. Widely cited survey data reveal that the percentage of American Millennials who believe it “essential” to live in a liberal democracy hovers at just over 25 percent. If Trump is impeached or loses the next election, a good 40 percent of the country will be outraged and susceptible to belief in a stab-in-the-back theory accounting for his defeat. Whom will they blame? Perhaps the “neoconservatives,” who disproportionately make up the ranks of Trump’s harshest critics on the right?
Ultimately, the degree to which anti-Semitism becomes a problem in America hinges on the strength of the antibodies within the country’s communal DNA to protect its pluralistic and liberal values. But even if this resistance to tribalism and the cult of personality is strong, it may not be enough to arrest the rise of an intellectual and societal disease that, throughout history, thrives upon economic distress, xenophobia, political uncertainty, ethnic chauvinism, conspiracy theory, and weakening democratic norms.
1 Somewhat paradoxically, according to FBI crime statistics, the majority of religiously based hate crimes target Jews, more than double the number targeting Muslims. This reflects the commitment of the country’s relatively small number of hard-core anti-Semites more than it does pervasive anti-Semitism.
4 The ADL has had to maintain a delicate balancing act in the age of Trump, coming under fire from many conservative Jews for a perceived partisan tilt against the right. This makes Heer’s complaint all the more ignorant — and unhelpful.
Review of ‘The Once and Future Liberal’ by Mark Lilla
Lilla, a professor at Columbia University, tells us that “the story of how a successful liberal politics of solidarity became a failed pseudo-politics of identity is not a simple one.” And about this, he’s right. Lilla quotes from the feminist authors of the 1977 Combahee River Collective Manifesto: “The most profound and potentially most radical politics come directly out of our own identity, as opposed to working to end somebody else’s oppression.” Feminists looked to instantiate the “radical” and electrifying phrase which insisted that “the personal is political.” The phrase, argues Lilla, was generally seen in “a somewhat Marxist fashion to mean that everything that seems personal is in fact political.”
The upshot was fragmentation. White feminists were deemed racist by black feminists—and both were found wanting by lesbians, who also had black and white contingents. “What all these groups wanted,” explains Lilla, “was more than social justice and an end to the [Vietnam] war. They also wanted there to be no space between what they felt inside and what they saw and did in the world.” He goes on: “The more obsessed with personal identity liberals become, the less willing they become to engage in reasoned political debate.” In the end, those on the left came to a realization: “You can win a debate by claiming the greatest degree of victimization and thus the greatest outrage at being subjected to questioning.”
But Lilla’s insights into the emotional underpinnings of political correctness are undercut by an inadequate, almost bizarre sense of history. He appears to be referring to the 1970s when, zigzagging through history, he writes that “no recognition of personal or group identity was coming from the Democratic Party, which at the time was dominated by racist Dixiecrats and white union officials of questionable rectitude.”
What is he talking about? Is Lilla referring to the Democratic Party of Lyndon Johnson, Hubert Humphrey, and George McGovern? Is he referring obliquely to George Wallace? If so, why is Wallace never mentioned? Lilla seems not to know that it was the 1972 McGovern Democratic Convention that introduced minority seating to be set aside for blacks and women.
At only 140 pages, this is a short book. But even so, Lilla could have devoted a few pages to Frankfurt ideologist Herbert Marcuse and his influence on the left. In the 1960s, Marcuse argued that leftists and liberals were entitled to restrain centrist and conservative speech on the grounds that the universities had to act as a counterweight to society at large. But this was not just rhetoric; in the campus disruption of the early 1970s at schools such as Yale, Cornell, and Amherst, Marcuse’s ideals were pushed to the fore.
If Lilla’s argument comes off as flaccid, perhaps that’s because the aim of The Once and Future Liberal is more practical than principled. “The only way” to protect our rights, he tells the reader, “is to elect liberal Democratic governors and state legislators who’ll appoint liberal state attorneys.” According to Lilla, “the paradox of identity liberalism” is that it undercuts “the things it professes to want,” namely political power. He insists, rightly, that politics has to be about persuasion but then contradicts himself in arguing that “politics is about seizing power to defend the truth.” In other words, Lilla wants a better path to total victory.
Given what Lilla, descending into hysteria, describes as “the Republican rage for destruction,” liberals and Democrats have to win elections lest the civil rights of blacks, women, and gays are rolled back. As proof of the ever-looming danger, he notes that when the “crisis of the mid-1970s threatened…the country turned not against corporations and banks, but against liberalism.” Yet he gives no hint of the trail of liberal failures that led to the crisis of the mid-’70s. You’d never know reading Lilla, for example, that the Black Power movement intensified racial hostilities that were then further exacerbated by affirmative action and busing. And you’d have no idea that, at considerable cost, the poverty programs of the Great Society failed to bring poorer African Americans into the economic mainstream. Nor does Lilla deal with the devotion to Keynesianism that produced inflation without economic growth during the Carter presidency.
Despite his discursive ambling through the recent history of American political life, Lilla has a one-word explanation for identity politics: Reaganism. “Identity,” he writes, is “Reaganism for lefties.” What’s crucial in combating Reaganism, he argues, is to concentrate on our “shared political” status as citizens. “Citizenship is a crucial weapon in the battle against Reaganite dogma because it brings home the fact that we are part of a legitimate common enterprise.” But then he asserts that the “American right uses the term citizenship today as a means of exclusion.” The passage might lead the reader to think that Lilla would take up the question of immigration and borders. But he doesn’t, and the closing passages of the book dribble off into characteristic zigzags. Lilla tells us that “Black Lives Matter is a textbook example of how not to build solidarity” but then goes on, without evidence, to assert the accuracy of the Black Lives Matter claim that African-Americans have been singled out for police mistreatment.
It would be nice to argue that The Once and Future Liberal is a near miss, a book that might have had enduring importance if only it went that extra step. But Lilla’s passing insights on the perils of a politically correct identity politics drown in the rhetoric of conventional bromides that fill most of the pages of this disappointing book.
In Athens several years ago, I had dinner with a man running for the national parliament. I asked him whether he thought he had a shot at winning. He was sure of victory, he told me. “I have hired a very famous political consultant from Washington,” he said. “He is the man who elected Reagan. Expensive. But the best.”
The political genius he then described was a minor political flunky I had met in Washington long ago, a more-or-less anonymous member of the Republican National Committee before he faded from view at the end of Ronald Reagan’s second term. Mutual acquaintances told me he still lived in a nice neighborhood in Northern Virginia, but they never could figure out what the hell he did to earn his money. (This is a recurring mystery throughout the capital.) I had to come to Greece to find the answer.
It is one of the dark arts of Washington, this practice of American political hacks traveling to faraway lands and suckering foreign politicians into paying vast sums for splashy, state-of-the-art, essentially worthless “services.” And it’s perfectly legal. Paul Manafort, who briefly managed Donald Trump’s campaign last summer, was known as a pioneer of the globe-trotting racket. If he hadn’t, as it were, veered out of his gutter into the slightly higher lane of U.S. presidential politics, he likely could have hoovered cash from the patch pockets of clueless clients from Ouagadougou to Zagreb for the rest of his natural life and nobody in Washington would have noticed.
But he veered, and now he and a colleague find themselves indicted by Robert Mueller, the Inspector Javert of the Russian-collusion scandal. When those indictments landed, they instantly set in motion the familiar scramble. Trump fans announced that the indictments were proof that there was no collusion between the Trump campaign and the Russians—or, in the crisp, emphatic phrasing of a tweet by the world’s Number One Trump Fan, Donald Trump: “NO COLLUSION!!!!” The Russian-scandal fetishists in the press corps replied in chorus: It’s still early! Javert required more time, and so will Mueller, and so will they.
A good Washington scandal requires a few essential elements. One is a superabundance of information. From these data points, conspiracy-minded reporters can begin to trace associations, warranted or not, and from the associations, they can infer motives and objectives with which, stretched together, they can limn a full-blown conspiracy theory. The Manafort indictment released a flood of new information, and at once reporters were pawing for nuggets that might eventually form a compelling case for collusion.
They failed to find any because Manafort’s indictment, in essence, involved his efforts to launder his profits from his international political work, not his work for the Trump campaign. Fortunately for the obsessives, another element is required for a good scandal: a colorful cast. The various Clinton scandals brought us Asian money-launderers and ChiCom bankers, along with an entire Faulkner-novel’s worth of bumpkins, sharpies, and backwoods swindlers, plus that intern in the thong. Watergate, the mother lode of Washington scandals, featured a host of implausible characters, from the central-casting villain G. Gordon Liddy to Sam Ervin, a lifelong segregationist and racist who became a hero to liberals everywhere.
Here, at last, is one area where the Russian scandal has begun to show promise. Manafort and his business partner seem too banal to hold the interest of anyone but a scandal obsessive. Beneath the pile of paper Mueller dumped on them, however, another creature could be seen peeking out shyly. This would be the diminutive figure of George Papadopoulos. An unpaid campaign adviser to Trump, Papadopoulos pled guilty to lying to the FBI about the timing of his conversations with Russian agents. He is quickly becoming the stuff of legend.
Papadopoulos is an exemplar of a type long known to American politics. He is the nebbish bedazzled by the big time—achingly ambitious, though lacking the skill, or the cunning, to climb the greasy pole. So he remains at the periphery of the action, ever eager to serve. Papadopoulos’s résumé, for a man under 30, is impressively padded. He said he served as the U.S. representative to the Model United Nations in 2012, though nobody recalls seeing him there. He boasted of a four-year career at the Hudson Institute, though in fact he spent one year there as an unpaid intern and three doing contract research for one of Hudson’s scholars. On his LinkedIn page, he listed himself as a keynote speaker at a Greek American conference in 2008, but in fact he participated only in a panel discussion. The real keynoter was Michael Dukakis.
With this hunger for achievement, real or imagined, Papadopoulos could not let a presidential campaign go by without climbing aboard. In late 2015, he somehow attached himself to Ben Carson’s campaign. He was never paid and lasted four months. His presence went largely unnoticed. “If there was any work product, I never saw it,” Carson’s campaign manager told Time. The deputy campaign manager couldn’t even recall his name. Then suddenly, in April 2016, Papadopoulos appeared on a list of “foreign-policy advisers” to Donald Trump—and, according to Mueller’s court filings, resolved to make his mark by acting as a liaison between Trump’s campaign and the Russian government.
While Mueller tells the story of Papadopoulos’s adventures in the dry, Joe Friday prose of a legal document, it could easily be the script for a Peter Sellers movie from the Cold War era. The young man’s résumé is enough to impress the campaign’s impressionable officials as they scavenge for foreign-policy advisers: “Hey, Corey! This dude was in the Model United Nations!”
Papadopoulos (played by Sellers) sets about his mission. A few weeks after signing on to the campaign, he travels to Europe, where he meets a mysterious “Professor” (Peter Ustinov). “Initially the Professor seemed uninterested in Papadopoulos,” says Mueller’s indictment. A likely story! Yet when Papadopoulos lets drop that he’s an adviser to Trump, the Professor suddenly “appeared to take great interest” in him. They arrange a meeting in London to which the Professor invites a “female Russian national” (Elke Sommer). Without much effort, the femme fatale convinces Papadopoulos that she is Vladimir Putin’s niece. (“I weel tell z’American I em niece of Great Leader! Zat idjut belief ennytink!”) Over the next several months our hero sends many emails to campaign officials and to the Professor, trying to arrange a meeting between them. As far as we know from the indictment, nothing came of his mighty efforts.
And there matters lay until January 2017, when the FBI came calling. Agents asked Papadopoulos about his interactions with the Russians. Even though he must have known that hundreds of his emails on the subject would soon be available to the FBI, he lied and told the agents that the contacts had occurred many months before he joined the campaign. History will record Papadopoulos as the man who forgot that emails carry dates on them. After the FBI interview, according to the indictment, he tried to destroy evidence with the same competence he has brought to his other endeavors. He closed his Facebook account, on which several communications with the Russians had taken place. He threw out his old cellphone. (That should do it!) After that, he began wearing a blindfold, on the theory that if he couldn’t see the FBI, the FBI couldn’t see him.
I made that last one up, obviously. For now, the great hope of scandal hobbyists is that Papadopoulos was wearing a wire between the time he secretly pled guilty and the time his plea was made public. This would have allowed him to gather all kinds of incriminating dirt in conversations with former colleagues. And the dirt is there, all right, as the Manafort indictment proves. Unfortunately for our scandal fetishists, so far none of it shows what their hearts most desire: active collusion between Russia and the Trump campaign.
An affair to remember
All this changed with the release in 1967 of Arthur Penn’s Bonnie and Clyde and Mike Nichols’s The Graduate. These two films, made in nouveau European style, treated familiar subjects—a pair of Depression-era bank robbers and a college graduate in search of a place in the adult world—in an unmistakably modern manner. Both films were commercial successes that catapulted their makers and stars into the top echelon of what came to be known as “the new Hollywood.”
Bonnie and Clyde inaugurated a new era in which violence on screen simultaneously became bloodier and more aestheticized, and it has had enduring impact as a result. But it was The Graduate that altered the direction of American moviemaking with its specific appeal to younger and hipper moviegoers who had turned their backs on more traditional cinematic fare. When it opened in New York in December, the movie critic Hollis Alpert reported with bemusement that young people were lining up in below-freezing weather to see it, and that they showed no signs of being dismayed by the cold: “It was as though they all knew they were going to see something good, something made for them.”
The Graduate, whose aimless post-collegiate title character is seduced by the glamorous but neurotic wife of his father’s business partner, is part of the common stock of American reference. Now, a half-century later, it has become the subject of a book-length study, Beverly Gray’s Seduced by Mrs. Robinson: How The Graduate Became the Touchstone of a Generation.1 As is so often the case with pop-culture books, Seduced by Mrs. Robinson is almost as much about its self-absorbed Baby Boomer author (“The Graduate taught me to dance to the beat of my own drums”) as its subject. It has the further disadvantage of following in the footsteps of Mark Harris’s magisterial Pictures at a Revolution: Five Movies and the Birth of the New Hollywood (2008), in which the film is placed in the context of Hollywood’s mid-’60s cultural flux. But Gray’s book offers us a chance to revisit this seminal motion picture and consider just why it was that The Graduate spoke to Baby Boomers in a distinctively personal way.

The Graduate began life in 1963 as a novella of the same name by Charles Webb, a California-born writer who saw his book not as a comic novel but as a serious artistic statement about America’s increasingly disaffected youth. It found its way into the hands of a producer named Lawrence Turman, who saw The Graduate as an opportunity to make the cinematic equivalent of Salinger’s The Catcher in the Rye. Turman optioned the book, then sent it to Mike Nichols, who in 1963 was still best known for his comic partnership with Elaine May but had just made his directorial debut with the original Broadway production of Barefoot in the Park.
Both men saw that The Graduate posed a problem to anyone seeking to put it on the screen. In Turman’s words, “In the book the character of Benjamin Braddock is sort of a whiny pain in the fanny [whom] you want to shake or spank.” To solve this problem, they turned to Buck Henry, who had co-created the popular TV comedy Get Smart with Mel Brooks, to write a screenplay that would retain much of Webb’s dryly witty dialogue (“I think you’re the most attractive of all my parents’ friends”) while making Benjamin less priggish.
Nichols’s first major act was casting Dustin Hoffman, an obscure New York stage actor pushing 30, for the title role. No one but Nichols seems to have thought him suitable in any way. Not only was Hoffman short and nondescript-looking, but he was unmistakably Jewish, whereas Benjamin is supposedly the scion of a newly monied WASP family from southern California. Nevertheless, Nichols decided he wanted “a short, dark, Jewish, anomalous presence, which is how I experience myself,” in order to underline Benjamin’s alienation from the world of his parents.
Nichols filled the other roles in equally unexpected ways. He hired the Oscar winner Anne Bancroft, only six years Hoffman’s senior, to play the unbalanced temptress who lures Benjamin into her bed, then responds with volcanic rage when he falls in love with her beautiful daughter Elaine. He and Henry also steered clear of on-screen references to the campus protests that had only recently started to convulse America. Instead, he set The Graduate in a timeless upper-middle-class milieu inhabited by people more interested in social climbing than self-actualization—the same milieu from which Benjamin is so alienated that he is reduced to near-speechlessness whenever his family and their friends ask him what he plans to do now that he has graduated.
The film’s only explicit allusion to its cultural moment is the use on the soundtrack of Simon & Garfunkel’s “The Sound of Silence,” the painfully earnest anthem of youthful angst that is for all intents and purposes the theme song of The Graduate. Nevertheless, Henry’s screenplay leaves little doubt that the film was in every way a work of its time and place. As he later explained to Mark Harris, it is a study of “the disaffection of young people for an environment that they don’t seem to be in sync with.…Nobody had made a film specifically about that.”
This aspect of The Graduate is made explicit in a speech by Benjamin that has no direct counterpart in the novel: “It’s like I was playing some kind of game, but the rules don’t make any sense to me. They’re being made up by all the wrong people. I mean, no one makes them up. They seem to make themselves up.”
The Graduate was Nichols’s second film, following his wildly successful movie version of Edward Albee’s Who’s Afraid of Virginia Woolf? Albee’s play was a snarling critique of the American dream, which he believed to be a snare and a delusion. The Graduate had the same skeptical view of postwar America, but its pessimism was played for laughs. When Benjamin is assured by a businessman in the opening scene that the secret to success in America is “plastics,” we are meant to laugh contemptuously at the smugness of so blinkered a view of life. Moreover, the contempt is as real as the laughter: The Graduate has it both ways. For the same reason, the farcical quality of the climactic scene (in which Benjamin breaks up Elaine’s marriage to a handsome young WASP and carts her off to an unknown fate) is played without musical underscoring, a signal that what Benjamin is doing is really no laughing matter.
The youth-oriented message of The Graduate came through loud and clear to its intended audience, which paid no heed to the mixed reviews from middle-aged reviewers unable to grasp what Nichols and Henry were up to. Not so Roger Ebert, the newly appointed 25-year-old movie critic of the Chicago Sun-Times, who called The Graduate “the funniest American comedy of the year…because it has a point of view. That is to say, it is against something.”
Even more revealing was the response of David Brinkley, then the co-anchor of NBC’s nightly newscast, who dismissed The Graduate as “frantic nonsense” but added that his college-age son and his classmates “liked it because it said about the parents and others what they would have said about us if they had made the movie—that we are self-centered and materialistic, that we are licentious and deeply hypocritical about it, that we try to make them into walking advertisements for our own affluence.”
A year after the release of The Graduate, a film-industry report cited in Pictures at a Revolution revealed that “48 percent of all movie tickets in America were now being sold to filmgoers under the age of 24.” A very high percentage of those tickets were to The Graduate and Bonnie and Clyde. At long last, Hollywood had figured out what the Baby Boomers wanted to see.

And how does The Graduate look a half-century later? To begin with, it now appears to have been Mike Nichols’s creative “road not taken.” In later years, Nichols became less an auteur than a Hollywood director who thought like a Broadway director, choosing vehicles of solid middlebrow-liberal appeal and serving them faithfully without imposing a strong creative vision of his own. In The Graduate, by contrast, he revealed himself to be powerfully aware of the same European filmmaking trends that shaped Bonnie and Clyde. Within a naturalistic framework, he deployed non-naturalistic “new wave” cinematographic techniques with prodigious assurance—and he was willing to end The Graduate on an ambiguous note instead of wrapping it up neatly and pleasingly, letting the camera linger on the unsure faces of Hoffman and Ross as they ride off into an unsettling future.
It is this ambiguity, coupled with Nichols’s prescient decision not to allow The Graduate to become a literal portrayal of American campus life in the troubled mid-’60s, that has kept the film fresh. But The Graduate is fresh in a very particular way: It is a young person’s movie, the tale of a boy-man terrified by the prospect of growing up to be like his parents. Therein lay the source of its appeal to young audiences. The Graduate showed them what they, too, feared most, and hinted at a possible escape route.
In the words of Beverly Gray, who saw The Graduate when it first came out in 1967: “The Graduate appeared in movie houses just as we young Americans were discovering how badly we wanted to distance ourselves from the world of our parents….That polite young high achiever, those loving but smothering parents, those comfortable but slightly bland surroundings: They combined to form an only slightly exaggerated version of my own cozy West L.A. world.”
Yet to watch The Graduate today—especially if you first saw it when much younger—is also to be struck by the extreme unattractiveness of its central character. Hoffman plays Benjamin not as the comically ineffectual nebbish of Jewish tradition but as a near-catatonic robot who speaks by turns in a flat monotone and a frightened nasal whine. It is impossible to understand why Mrs. Robinson would want to go to bed with such a mousy creature, much less why Elaine would run off with him—an impression that has lately acquired an overlay of retrospective irony in the wake of accusations that Hoffman has sexually harassed female colleagues on more than one occasion. Precisely because Benjamin is so unlikable, it is harder for modern-day viewers to identify with him in the same way as did Gray and her fellow Boomers. To watch a Graduate-influenced film like Noah Baumbach’s Kicking and Screaming (1995), a poignant romantic comedy about a group of Gen-X college graduates who deliberately choose not to get on with their lives, is to see a closely similar dilemma dramatized in an infinitely more “relatable” way, one in which the crippling anxiety of the principal characters is presented as both understandable and pitiable, thus making it funnier.
Be that as it may, The Graduate is a still-vivid snapshot of a turning point in American cultural history. Before Benjamin Braddock, American films typically portrayed men who were not overgrown, smooth-faced children but full-grown adults, sometimes misguided but incontestably mature. After him, permanent immaturity became the default position of Hollywood-style masculinity.
For this reason, it will be interesting to see what the Millennials, so many of whom demand to be shielded from the “triggering” realities of adult life, make of The Graduate if and when they come to view it. I have a feeling that it will speak to a fair number of them far more persuasively than it did to those of us who—unlike Benjamin Braddock—longed when young to climb the high hill of adulthood and see for ourselves what awaited us on the far side.
1 Algonquin, 278 pages
“I think that’s best left to states and locales to decide,” DeVos replied. “If the underlying question is . . .”
Murphy interrupted. “You can’t say definitively today that guns shouldn’t be in schools?”
“Well, I will refer back to Senator Enzi and the school that he was talking about in Wapiti, Wyoming, I think probably there, I would imagine that there’s probably a gun in the school to protect from potential grizzlies.”
Murphy continued his line of questioning unfazed. “If President Trump moves forward with his plan to ban gun-free school zones, will you support that proposal?”
“I will support what the president-elect does,” DeVos replied. “But, senator, if the question is around gun violence and the results of that, please know that my heart bleeds and is broken for those families that have lost any individual due to gun violence.”
Because all this happened several million outrage cycles ago, you may have forgotten what happened next. Rather than mention DeVos’s sympathy for the victims of gun violence, or her support for federalism, or even her deference to the president, the media elite fixated on her hypothetical aside about grizzly bears.
“Betsy DeVos Cites Grizzly Bears During Guns-in-Schools Debate,” read the NBC News headline. “Citing grizzlies, education nominee says states should determine school gun policies,” reported CNN. “Sorry, Betsy DeVos,” read a headline at the Atlantic, “Guns Aren’t a Bear Necessity in Schools.”
DeVos never said that they were, of course. Nor did she “cite” the bear threat in any definitive way. What she did was decline the opportunity to make a blanket judgment about guns and schools because, in a continent-spanning nation of more than 300 million people, one standard might not apply to every circumstance.
After all, there might be—there are—cases when guns are necessary for security. Earlier this year, Virginia Governor Terry McAuliffe signed into law a bill authorizing some retired police officers to carry firearms while working as school guards. McAuliffe is a Democrat.
In her answer to Murphy, DeVos referred to a private meeting with Senator Enzi, who had told her of a school in Wyoming that has a fence to keep away grizzly bears. And maybe, she reasoned aloud, the school might have a gun on the premises in case the fence doesn’t work.
As it turns out, the school in Wapiti is gun-free. But we know that only because the Washington Post treated DeVos’s offhand remark as though it were the equivalent of Alexander Butterfield’s revealing the existence of the secret White House tapes. “Betsy DeVos said there’s probably a gun at a Wyoming school to ward off grizzlies,” read the Post headline. “There isn’t.” Oh, snap!
The article, like the one by NBC News, ended with a snarky tweet. The Post quoted user “Adam B.,” who wrote, “‘We need guns in schools because of grizzly bears.’ You know what else stops bears? Doors.” Clever.
And telling. It becomes more difficult every day to distinguish between once-storied journalistic institutions and the jabbering of anonymous egg-avatar Twitter accounts. The eagerness with which the press misinterprets and misconstrues Trump officials is something to behold. The “context” the best and brightest in media are always eager to provide us suddenly goes poof when the opportunity arises to mock, impugn, or castigate the president and his crew. This tendency is especially pronounced when the alleged gaffe fits neatly into a prefabricated media stereotype: that DeVos is unqualified, say, or that Rick Perry is, well, Rick Perry.
On November 2, the secretary of energy appeared at an event sponsored by Axios.com and NBC News. He described a recent trip to Africa:
It’s going to take fossil fuels to push power out to those villages in Africa, where a young girl told me to my face, “One of the reasons that electricity is so important to me is not only because I won’t have to try to read by the light of a fire, and have those fumes literally killing people, but also from the standpoint of sexual assault.” When the lights are on, when you have light, it shines the righteousness, if you will, on those types of acts. So from the standpoint of how you really affect people’s lives, fossil fuels is going to play a role in that.
This heartfelt story of the impact of electrification on rural communities was immediately distorted into a metaphor for Republican ignorance and cruelty.
“Energy Secretary Rick Perry Just Made a Bizarre Claim About Sexual Assault and Fossil Fuels,” read the Buzzfeed headline. “Energy Secretary Rick Perry Says Fossil Fuels Can Prevent Sexual Assault,” read the headline from NBC News. “Rick Perry Says the Best Way to Prevent Rape Is Oil, Glorious Oil,” said the Daily Beast.
“Oh, that Rick Perry,” wrote Gail Collins in a New York Times column. “Whenever the word ‘oil’ is mentioned, Perry responds like a dog on the scent of a hamburger.” You will note that the word “oil” is not mentioned at all in Perry’s remarks.
You will note, too, that what Perry said was entirely commonsensical. While the precise relation between public lighting and public safety is unknown, who can doubt that brightly lit areas feel safer than dark ones—and that, as things stand today, cities and towns are most likely to be powered by fossil fuels? “The value of bright street lights for dispirited gray areas rises from the reassurance they offer to some people who need to go out on the sidewalk, or would like to, but lacking the good light would not do so,” wrote Jane Jacobs in The Death and Life of Great American Cities. “Thus the lights induce these people to contribute their own eyes to the upkeep of the street.” But c’mon, what did Jane Jacobs know?
No member of the Trump administration so rankles the press as the president himself. On the November morning I began this column, I awoke to outrage that President Trump had supposedly violated diplomatic protocol while visiting Japan and its prime minister, Shinzo Abe. “President Trump feeds fish, winds up pouring entire box of food into koi pond,” read the CNN headline. An article on CBSNews.com headlined “Trump empties box of fish food into Japanese koi pond” began: “President Donald Trump’s visit to Japan briefly took a turn from formal to fishy.” A Bloomberg reporter traveling with the president tweeted, “Trump and Abe spooning fish food into a pond. (Toward the end, @potus decided to just dump the whole box in for the fish).”
Except that’s not what Trump “decided.” In fact, Trump had done exactly what Abe had done a few seconds before. That fact was buried in write-ups of the viral video of Trump and the fish. “President Trump was criticized for throwing an entire box of fish food into a koi pond during his visit to Japan,” read a tweet from the New York Daily News, linking to a report on phony criticism Trump received because of erroneous reporting from outlets like the News.
There’s an endless, circular, Möbius-strip-like quality to all this nonsense. Journalists are so eager to catch the president and his subordinates doing wrong that they routinely traduce the very canons of journalism they are supposed to hold dear. Partisan and personal animus, laziness, cynicism, and the oversharing culture of social media are a toxic mix. The press in 2017 is a lot like those Japanese koi fish: frenzied, overstimulated, and utterly mindless.
Review of 'Lessons in Hope' by George Weigel
Standing before the eternal flame, a frail John Paul shed silent tears for 6 million victims, including some of his own childhood friends from Krakow. Then, after reciting verses from Psalm 31, he began: “In this place of memories, the mind and heart and soul feel an extreme need for silence. … Silence, because there are no words strong enough to deplore the terrible tragedy of the Shoah.” Parkinson’s disease strained his voice, but it was clear that the pope’s irrepressible humanity and spiritual strength had once more stood him in good stead.
George Weigel watched the address from NBC’s Jerusalem studios, where he was providing live analysis for the network. As he recalls in Lessons in Hope, his touching and insightful memoir of his time as the pope’s biographer, “Our newsroom felt the impact of those words, spoken with the weight of history bearing down on John Paul and all who heard him: normally a place of bedlam, the newsroom fell completely silent.” The pope, he writes, had “invited the world to look, hard, at the stuff of its redemption.”
Weigel, a senior fellow at the Ethics and Public Policy Center, published his biography of John Paul in two volumes, Witness to Hope (1999) and The End and the Beginning (2010). His new book completes a John Paul triptych, and it paints a more informal, behind-the-scenes portrait. Readers, Catholic and otherwise, will finish the book feeling almost as though they knew the 264th pope. Lessons in Hope is also full of clerical gossip. Yet Weigel never loses sight of his main purpose: to illuminate the character and mind of the “emblematic figure of the second half of the twentieth century.”
The book’s most important contribution comes in its restatement of John Paul’s profound political thought at a time when it is sorely needed. Throughout, Weigel reminds us of the pope’s defense of the freedom of conscience; his emphasis on culture as the primary engine of history; and his strong support for democracy and the free economy.
When the Soviet Union collapsed, the pope continued to promote these ideas in such encyclicals as Centesimus Annus. The 1991 document reiterated the Church’s opposition to socialist regimes that reduce man to “a molecule within the social organism” and trample his right to earn “a living through his own initiative.” Centesimus Annus also took aim at welfare states for usurping the role of civil society and draining “human energies.” The pope went on to explain the benefits, material and moral, of free enterprise within a democratic, rule-of-law framework.
Yet a libertarian manifesto Centesimus Annus was not. It took note of free societies’ tendency to breed spiritual poverty, materialism, and social incohesion, which in turn could lead to soft totalitarianism. John Paul called on state, civil society, and people of God to supply the “robust public moral culture” (in Weigel’s words) that would curb these excesses and ensure that free-market democracies are ordered to the common good.
When Weigel emerged as America’s preeminent interpreter of John Paul, in the 1980s and ’90s, these ideas were ascendant among Catholic thinkers. In addition to Weigel, proponents included the philosopher Michael Novak and Father Richard John Neuhaus of First Things magazine (both now dead). These were faithful Catholics (in Neuhaus’s case, a relatively late convert) nevertheless at peace with the free society, especially the American model. They had many qualms about secular modernity, to be sure. But with them, there was no question that free societies and markets are preferable to unfree ones.
How things have changed. Today all the energy in those Catholic intellectual circles is generated by writers and thinkers who see modernity as beyond redemption and freedom itself as the problem. For them, the main question is no longer how to correct the free society’s course (by shoring up moral foundations, through evangelization, etc.). That ship has sailed or perhaps sunk, according to this view. The challenges now are to protect the Church against progressivism’s blows and to see beyond the free society as a political horizon.
Certainly the trends that worried John Paul in Centesimus Annus have accelerated since the encyclical was issued. “The claim that agnosticism and skeptical relativism are the philosophy and the basic attitude which correspond to democratic forms of political life” has become even more hegemonic than it was in 1991. “Those who are convinced that they know the truth and firmly adhere to it” increasingly get treated as ideological lepers. And with the weakening of transcendent truths, ideas are “easily manipulated for reasons of power.”
Thus a once-orthodox believer finds himself or herself compelled to proclaim that there is no biological basis to gender; that men can menstruate and become pregnant; that there are dozens of family forms, all as valuable and deserving of recognition as the conjugal union of a man and a woman; and that speaking of the West’s Judeo-Christian patrimony is tantamount to espousing white supremacy. John Paul’s warnings read like a description of the present.
The new illiberal Catholics—a label many of these thinkers embrace—argue that these developments aren’t a distortion of the idea of the free society but represent its very essence. This is a mistake. Basic to the free society is the freedom of conscience, a principle enshrined in democratic constitutions across the West and, I might add, in the Catholic Church’s post–Vatican II magisterium. Under John Paul, religious liberty became Rome’s watchword in the fight against Communist totalitarianism, and today it is the Church’s best weapon against the encroachments of secular progressivism. The battle is far from lost, moreover. There is pushback in the courts, at the ballot box, and online. Sometimes it takes demagogic forms that should discomfit people of faith. Then again, there is a reason such pushback is called “reaction.”
A bigger challenge for Catholics prepared to part ways with the free society as an ideal is this: What should Christian politics stand for in the 21st century? Setting aside dreams of reuniting throne and altar and similar nostalgia, the most cogent answer offered by Catholic illiberalism is that the Church should be agnostic with respect to regimes. As Harvard’s Adrian Vermeule has recently written, Christians should be ready to jettison all “ultimate allegiances,” including to the Constitution, while allying with any party or regime when necessary.
What at first glance looks like an uncompromising Christian politics—cunning, tactical, and committed to nothing but the interests of the Church—is actually a rather passive vision. For a Christianity that is “radically flexible” in politics is one that doesn’t transform modernity from within. In practice, it could easily look like the Vatican Ostpolitik diplomacy that sought to appease Moscow before John Paul was elected.
Karol Wojtyła discarded Ostpolitik as soon as he took the Petrine office. Instead, he preached freedom and democracy—and meant it. Already as archbishop of Krakow under Communism, he had created free spaces where religious and nonreligious dissidents could engage in dialogue. As pope, he expressed genuine admiration for the classically liberal and decidedly secular Václav Havel. He hailed the U.S. Constitution as the source of “ordered freedom.” And when, in 1987, the Chilean dictator Augusto Pinochet asked him why he kept fussing about democracy, seeing as “one system of government is as good as another,” the pope responded: No, “the people have a right to their liberties, even if they make mistakes in exercising them.”
The most heroic and politically effective Christian figure of the 20th century, in other words, didn’t follow the path of radical flexibility. His Polish experience had taught him that there are differences between regimes—that some are bound to uphold conscience and human dignity, even if they sometimes fall short of these commitments, while others trample rights by design. The very worst of the latter kind could even whisk one’s boyhood friends away to extermination camps. There could be no radical Christian flexibility after the Holocaust.