Eugenics—that is, the movement to improve and even perfect the human species by technological means—arose in the late 19th century and flourished in this country and in Europe until the 1930’s. Then it was challenged by scientific counterevidence, and by growing uneasiness about its racialist implications. Later, or so the story was told, eugenics was definitively discredited by the Third Reich which enlisted its doctrines and practices in support of unspeakable crimes against humanity. But now, in the journals and in the textbooks, the story is being told differently. The problem, it is said, was not so much with eugenics itself but with the Nazis: they abused eugenics, they went too far, they were extremists.
Thus in the longer view of history, the horror of the Third Reich may have effected but a momentary pause in the theory and practice of eugenics. For today, four decades later, eugenics is back, and it gives every appearance of returning with a vengeance in the form of developments ranging from the adventuresome to the bizarre to the ghoulish: the manufacture of synthetic children, the fabrication of families, artificial sex, and new ways of using and terminating undesired human life.
To be sure, the literature on all sides of the current disputes about these developments remains riddled with references to the Nazi experience. But the mention of troubling similarities to the Third Reich is, as it should be, accompanied and qualified by other observations. No responsible parties suggest that America is, or is likely to become, Nazi Germany. That is patently absurd. What happens here is and will be distinctively American. And, because this is America, there are political, legal, and moral resources to resist scenarios of the worse inevitably coming to the worst.
In addition, the great majority of today’s eugenists take pains to distance themselves from any hint of racialism, although some very respectable proponents of “population control” are not averse to writing about “inferior” population groups. Further, it must be acknowledged that there have in fact been very impressive technological advances, some of which are indeed breakthroughs to uncharted regions of control over the human condition—and some of which hold high promise for reducing misery and enhancing life. There is no room in this discussion for Luddite reactionaries who claim to discern in every technological change the visage of the “brave new world.” Finally, those who take a favorable view of the developments in dispute would seem to be motivated by the best of intentions. With few exceptions, their language is the winsome one of progress, of reason, and, above all, of compassion.
All that said, we are nonetheless witnessing the return of eugenics. And with it have come questions, inescapably moral questions, for which we appear to have no good answers. Indeed, it is doubtful that we still have a sufficiently shared moral vocabulary even for debating what good answers might be. Yet whether we like it or not, these questions are already being answered from one end of the life cycle to the other in terms that, were one not wary of alarmism, might be described as alarming.
To begin, literally, at the beginning, consider first the putting together of human gametes (sperm or egg) in order to facilitate new ways of having babies, and to produce babies of higher quality.
It is hard not to sympathize with couples who want children but cannot have them because of infertility. What are they to do? Of course there is adoption, but the “right kind” of child is hard to find, and getting one can be very expensive. (One and a half million abortions per year have put a considerable dent in the supply side of the American adoption market, and adoption is one area where discrimination on the basis of race or handicap is still eminently respectable.) In addition, many people want a child that is, at least in part, produced from their own biological raw materials. Techniques for meeting this market demand are several.
Artificial insemination by husband (AIH) has been with us for some time and is relatively straightforward. Artificial insemination by donor (AID) is technically identical but introduces a third party to the relationship, or, more accurately, biologically excludes the husband. These techniques are really quite simple and lend themselves to do-it-yourself procedures—with a little help from your friends.
A recently publicized example was the case of an Episcopal priest who wanted a baby but definitely not a husband. She invited three friends over (two of them priests) to masturbate for her, and she then impregnated herself with the mixture of their sperm. The purpose of having several sperm sources, she explained on national television, was to avoid knowing who the father was, and thus to make sure that the child would have an intimate bond to no one but herself. The child is now three years old and the mother has declared that she intends to have another baby by the same procedure. The Washington Post described her as the first artificially inseminated priest in history, which is probably true. Her bishop, Paul Moore of New York, appeared with her on television and gave his unqualified blessing to this undertaking, citing the need for the church to come to terms with the modern world.
In vitro fertilization (IVF) is yet another procedure. The woman is given hormones to stimulate egg production; the eggs are then “harvested” and mixed with sperm from the husband or someone else, and some eggs that become fertilized are placed in the woman’s uterus. But ethical questions have been raised about the use and disposal of the many “superfluous” embryos that are not transferred to the uterus. Some practitioners of this technique—confronted with the argument that anything with the potential of becoming human life is human life—resolve the problem to their moral satisfaction by declaring that very little embryos are “pre-embryos.” On the other hand, there is intense interest—on the part of drug companies and genetic researchers—in letting such embryos develop in the laboratory so that they can be used for scientific experiments.
An additional problem is “superfluous” fetuses. Because the technique is time-consuming and expensive, and because success is by no means assured, “extra” embryos are placed in the uterus in the hope that at least one will “take.” With disturbing frequency, this results in two or three or more very healthy fetuses. When the procedure produces more fetuses than the mother wants or, in some cases, than she can safely carry to term, the practice is to use an ultrasound probe to guide a needle which punctures the hearts of the fetuses to be eliminated. Doctors who do this allow that there may be a moral problem in terminating fetuses that they helped bring into being. But then, it is observed, the morality of the thing is really not that different from the elimination or experimental use of unwanted embryos.
Yet another option for the infertile couple is to elect embryo transfer. Here the husband, or someone else, contributes the sperm with which a “donor” woman is inseminated. The resulting embryo is then “washed” from her uterus and placed in the infertile wife. This procedure is still somewhat experimental and poses high risks to both the donor woman and the embryo, but we are assured that progress is being made in ironing out the difficulties.
An additional option, surrogate motherhood, was the center of a media storm in 1986 in connection with the Baby M case in New Jersey. The terminology is misleading, since the woman is not a surrogate or substitute but is in fact the mother. The procedure might better be called “term” or “contract” motherhood, for she contracts to act as mother only until the child is brought to term. More than a dozen states are now considering measures to legitimate contracts for such rent-a-womb arrangements, but the debate over the practice has deeply divided feminists, and the Left generally. The anxiety is over the using of women, typically women who are vulnerable and in need of the money. There is also objection to what is, after all, a particularly gross capitalist act, even if between consenting adults.
Writing in the Nation, Katha Pollitt further complains that the man in these cases wants “a perfect baby with his genes and a medically vetted mother who would get out of his life forever immediately after giving birth.” (Some contracts also stipulate that the mother will abort the child if there is evidence that it is not up to standard.) Miss Pollitt observes that no other class of father—natural, step, or adoptive—can lay down such conditions. While making some predictably contemptuous remarks about the Vatican’s position on reproduction, she does side with Rome about one thing. “You don’t have a right to a child, any more than you have a right to a spouse. You only have the right to try to have one. Goods can be distributed according to ability to pay or need. People can’t. It’s really that simple.”
But Katha Pollitt and others of like mind do not appreciate the reach of the eugenic vision, which is to eliminate the limits and risks in what was once deemed to be natural. In any event, contract motherhood is but a very small part of the transformations now under way. In ten years the procurers in that business have been able to sign up only five hundred women. It is an enterprise that fades in comparison with the real growth areas in the synthetic-child business.
In the past, a distinction was drawn between positive and negative eugenics (though in usage the terms were sometimes reversed). Positive eugenics was thought to be relatively innocent, simply a matter of breeding good human stock in order to improve the race by increasing the number of the physically and mentally fit. Negative eugenics, on the other hand, made a lot of people nervous, since it meant preventing the birth of the unfit or eliminating them after they were born. In America in the 1920’s more than half the states had mandatory sterilization laws applicable to people who fell into various categories of unfitness. (The laws were enforced mainly in California.) And of course, as the textbooks say, Nazi Germany took both positive and negative eugenics altogether too far.
The distinction between positive and negative eugenics is no longer always helpful. For instance, intervention to eliminate a defective gene rather than to eliminate a defective fetus may be viewed as either positive or negative. Still, some of the more striking changes today are in the area of the positive improvement of the human stock. Indeed, what is now being done and proposed makes earlier efforts at improving the race (for example, the socially and morally clumsy Lebensborn program for breeding the SS elite with superior Reich female stock) seem pitiably primitive.
At present, research focuses on detecting and remedying genetic ills or ailments by removing or adding genes. But discontents with the human condition as it is now constituted are almost infinitely expansive, and since it is almost impossible to argue against the proposition that the quality of human beings we have been turning out to date leaves much to be desired, the pressure to move the limits of intervention may be near irresistible. Is asthma a genetic disease? If asthma, why not baldness, or shortness, or having the “wrong” color eyes? And surely we still focus on diseases only because we have this ancient idea that medical interventions should be therapeutic. Instead of restricting ourselves to curing diseases, however broadly defined, why not be more positive and aim at the desiderata of human life? The combination of reproduction technology and engineering, or either one by itself, may be able to assure the production of socially desired personality types. In that event, presumably, “society” will decide which types are desired.
The restriction enzymes that slice DNA leave single-stranded overhangs that scientists call “sticky ends” because they merge so easily with the genetic structure of another organism. Many are troubled precisely by the sticky ends to which this technology is being put. But there are some who are not troubled.
Lloyd McAulay, a New York patent lawyer, has written, “I understand the fear that we may be letting the genie out of the bottle as we expand our ability to alter biological evolution. I do not share that fear.” He allows that there should be some control over developments “until we learn a bit more about where we are going.” But, all in all, there is no cause for anxiety since what is happening is not really so new. “Switching genes around strikes me as little more than expedited breeding,” he writes. “As an ethical issue, whether or not we wish to do that with human beings may not be much different from whether or not we wish to breed human beings.” “Expedited breeding”—it is a reassuring phrase.
Another comforting voice over the years has been the editorial page of the New York Times, although, to be sure, the comfort is attended by stylistic rumblings of deeply pondered concern. Whether the subject is genetic engineering or experimenting with human embryos, the Times typically informs us that it is too late to raise the kinds of questions we wanted to ask. For example: “Critics are concerned that making life forms patentable will give animal and eventually human life too much in common with commodities, leading to disrespect for both. But society has already passed that point.” On the Times editorial page, the big decisions are made by society, and society is forever busily bustling along. The editors simply report their sightings of it as it passes one point after another.
The Times acknowledges that genetic engineering is “at first sight disquieting,” but the editorialist has taken a second look and concludes that “It’s hard to object to improving a species’ inherent characteristics.” As to problems we may have with engineering that does change the “inherent characteristics” of a species, we are told that “Such conundrums still lie in the realm of science fiction.” They may be as much as ten or twenty years off and, as John Maynard Keynes suggested, in the long run we are all dead.
One cannot help being struck by the blithe assumption that we can still agree on “the inherent characteristics of a species”—of the human species, for instance. For we have, after all, been through a systematic assault upon the idea that there is anything “inherent” or “natural” in the makeup or behavior of human beings. With respect to sexual identity and behavior, gender relations, familial bondings, and a host of other questions, the human condition is declared to be boundlessly various and malleable. In all these areas, the protester’s appeals to what is natural are dismissed with enlightened contempt. But now, as we intervene to restructure human beings genetically, technicians and their apologists assume the tone of Thomistic philosophers explaining the self-evident truths of natural law, assuring us that they recognize and will respect what is natural, inherent, and essential to being human.
This is, at best, an instance of what Allan Bloom calls debonair nihilism. More likely, it is a desperate effort to conceal from others, and from themselves, the consequences of what they do; of what they cannot bring themselves not to do—because it is possible, because it is progress, because the adventure of doing the thing that could never before be done is near to irresistible.
One such thing is the use of fetal tissue. Fetal brain, pancreas, and liver tissue is, it is said, admirably suited for the treatment of Parkinson’s disease, Alzheimer’s disease, Huntington’s chorea, spinal-cord injuries, and leukemia. Fetal tissue is also excellent for implant treatments because it grows faster, is more adaptable, and causes less immunological rejection than adult tissue. Whatever one thinks of abortion, it is argued, it is a shame to let the material go to waste. There are literally millions of people who might benefit from these human parts. Dr. Abraham Lieberman of New York University Medical Center says of these developments, “This is to medicine what superconductivity is to physics.”
Admittedly, there are concerns about collusion between abortionists and physicians, about how to decide whether the fetus is actually dead, about commercial trafficking in fetal parts, and about women becoming pregnant in order to produce fetal parts to order. Those are only some of the concerns that have been raised. But the decision to move ahead on this front is, we are told, another point that society has passed. As the director of the American Parkinson Association observes, “The majority of people with the disease couldn’t care less about the ethical questions—they just want something that works.”
Both pro- and anti-abortion groups have expressed uneasiness about the use of fetal parts. Pro-choice groups worry that, as with contract motherhood, it could invite the exploitation of women’s bodies to produce custom parts, as it were. Pro-life groups worry that it could make abortion seem more attractive to some women because the parts would be used to help other people. “The worst possible ethical evil of all this,” says Arthur Caplan of the Center for Biomedical Ethics at the University of Minnesota, “would be to create lives simply to end them and take the parts.”
Unencumbered by the delicately nuanced inhibitions of some ethicists, however, the general media are generally enthusiastic. Newsweek, for instance, allows that some controls are necessary “to keep fetal research from becoming barbarous.” But that does not blunt Newsweek’s keenness on the new technology which “has created a surge of interest in fetal-tissue implantation, and research both here and abroad is beginning to offer an exciting glimpse at treatments that could lie ahead.”
Recently a California woman asked a medical ethicist whether she could be artificially inseminated with sperm from her father, who has Alzheimer’s disease. She intended to abort the fetus so that the brain tissue could be transplanted into her father’s brain. The usual response to such questions is that this entire field is still in its infancy, so to speak, and clearer guidelines are yet to be developed. But the California woman’s act of love for her father would no doubt meet with overwhelming support on the Phil Donahue show and similar popular seminars in contemporary ethics.
To more thoughtful students of these matters, the use of dead fetuses leads to some surprising confusions. Britain’s Warnock Committee, for example, recommends a 14-day cutoff rule for experimentation on embryos that are fertilized in the laboratory. Charles Krauthammer, writing in the New Republic, basically agrees, while acknowledging that the 14-day rule may prepare society for 14 weeks or 40 weeks. “Does any such rule not place us on a slippery slope?” he asks. “The answer is that society already lives there. In fact, it has slid far beyond the 14-day period. In most English-speaking jurisdictions, one can do with an aborted fetus that is many weeks old pretty much what one wants: discard, research, implant. The 14-day rule moves us further up the slope from where we are today.” Krauthammer is surely right about the slippery slope.
The oddity of the Warnock recommendation, however, is that its concern for the dignity of human life results in greater respect being shown for those who are fertilized in vitro (in the laboratory dish) than for those fertilized in vivo (in the body). Krauthammer suggests, delicately, that we cannot think clearly about the new questions related to the production and exploitation of human life without rethinking the old question of abortion. Because they view the abortion debate as wearied and wearying, as polarized and stalemated, many will resist that suggestion.
In any event, it can be argued that the eugenics project—in both what is proposed and what is already being done—has moved beyond disputes about life before birth, or about life that was never intended to be born. Once again society has passed that point. The new and more “interesting” questions have to do with the termination and medical exploitation of human beings already born. In September 1987, the Newsweek story was titled “Should Medicine Use the Unborn?” That question having been answered affirmatively, by December the question agitating the media was “Should Medicine Use the Born?” The opening wedge to this new phase was the debate over what might be done with anencephalic newborns (babies born with most of their brains missing).
As with almost all the questions considered here, New Jersey is vying, successfully, for the honor of being in the legal vanguard. Assembly Bill 3367 would permit parents to donate the organs of an anencephalic child. At present, they have to wait for the child to die first. The new law removes that technicality by declaring the baby dead before it dies. In California, however—confident that the law would quickly catch up with practice, and ethics with the law—they did not wait for a change in law.
Loma Linda University Medical Center is connected with the Seventh Day Adventists, a highly moral, even moralistic, religious group that insists on abstinence from alcohol, tobacco, and—although not universally—tea and coffee. Everything done at the medical center is subjected to the strictest ethical scrutiny. The center, like hospitals most everywhere these days, has a highly qualified ethics committee. Indeed it has been observed, correctly, that in the last two decades medical technology has been the salvation of ethics as a profession. Thousands of medical ethicists and bioethicists, as they are called, professionally guide the unthinkable on its passage through the debatable on its way to becoming the justifiable until it is finally established as the unexceptionable. Those who pause too long to ponder troubling questions along the way are likely to be told that “the profession has already passed that point.” In truth, the profession is usually huffing and puffing to catch up with what is already being done without its moral blessing.
The star of Loma Linda is a surgeon named Leonard Bailey. On October 16, 1987, Dr. Bailey led a team that transplanted a heart from an anencephalic infant into another baby delivered by Caesarean section three hours earlier. A statement from Loma Linda notes that the procedure was “innovative medically” and also “interesting ethically” because it “prompted further discussions regarding the moral wisdom of using brain dead or non-brain dead anencephalic human neonates as organ donors.” (Neonate is the term for children less than a month old.) The girl baby who “donated” her organs was, interestingly enough, named Gabrielle, the feminine of Gabriel, the archangel who reveals the things to come.
There are about 3,500 anencephalic children born each year in this country, and most of them die within a month. The problem is that their organs deteriorate and are not much good for transplanting if you wait until they die. Loma Linda recommends to parents that the children be allowed to live for no more than a week before taking the parts. The parents, we are told, find this procedure “deeply meaningful,” since their disappointment in having a handicapped baby is “redeemed” by putting the baby to good use in “helping others.” The language of redemptive suffering is very prominent in the discussion of these matters. The sorrow of being afflicted with handicapped children or older people with severe disabilities, we are informed, is significantly assuaged by “donating” them for altruistic purposes.
Dr. Jacquelyn Bamman, a neonatologist, is among those who are troubled about what is now being done and proposed, well knowing that today’s somewhat speculative proposal may be next week’s fait accompli. Dr. Bamman worries about the clear departure from traditional medical ethics by doing surgery that is not intended to benefit the child, and indeed is directly aimed at causing the death of the child by removing the heart. She notes “the lack of any rational way to prevent the extension of this same approach to involve other children with serious defects.” If the prospect of a limited lifespan justifies the killing of children in order to use their organs, the issue goes far beyond the anencephalic to include children with Tay-Sachs, Werdnig-Hoffman, and other diseases. It might be argued that in these cases, unlike the anencephalic, there would be “benefit” to the children since it would relieve them of pain (it being assumed, although no one can know for sure, that the anencephalic feel no pain). If all whose brains are severely abnormal are potential “donors,” Dr. Bamman observes, the field is opened to infants with hydranencephaly, grade IV intracranial hemorrhage, Trisomy 13 and 18, and a host of other handicaps.
Responding to Dr. Bamman on behalf of the Loma Linda ethics committee, Dr. O. Ward Swarner acknowledges that she has indeed raised some interesting questions. He assures her that at the present time “there are no intentions or justifications for putting some in jeopardy to harvest organs for others.” At the same time, the ethics committee is in constant “consultation with other concerned staff members, nurses, social workers, ministers, and ethicists” and will “follow with interest” the work of other experts in this rapidly developing field. Dr. Swarner firmly states that “the ethics committee has not approved any harvesting of organs or procurement of transplants in any other than brain-dead patients.”
But, of course, tomorrow is another day. Speaking of a mother who agreed to have her baby’s organs harvested for others, Dr. Joyce Peabody, chief of neonatology at Loma Linda, said, “She has made a major contribution by getting us brave enough to face this issue head on.” We can be confident that the brave surgeons and ethicists of Loma Linda will not flinch in the face of the next “technological breakthrough.” Nor is it likely that other institutions will long allow the stars of a few institutions such as Loma Linda to dominate the firmament of the bright new world now in sight.
To be sure, there are those who warn against the seductive appearance of the brave and the bright. The late Paul Ramsey is sometimes called the father of contemporary medical ethics, and he had reason to rue much of what he helped to wreak. Testifying before a government committee on medical ethics three years ago, Ramsey said, “I respectfully express the hope that the committee will be initially prepared to say ‘Never’ to a number of things that are now being done or proposed and that are now proximately possible to be done, and not merely to things that may be only remotely possible. Remote possibilities are soon proximate, and soon done.”
But what about just saying no? It is possible to say it, but much more difficult to make it stick. Even a good reason for saying no makes little impression in a culture that has lost any shared understanding of the good. Pitted against every no is the logic of progress, the ambition of pioneers, and, not to put too fine a point on it, the lust for fame and fortune. Even those who have the nerve to say no almost never say never. Then too, and also hard to resist, is the impulse of compassion—to relieve the suffering of “meaningless” human lives, to contribute to the health and happiness of others.
Actually, organ transplants involving infants are still highly experimental. As of this writing, there have been only nine heart transplants involving newborns, four kidney transplants, and no liver transplants. But technology proceeds apace, and those who say no—never mind never—are politely but firmly informed that medical practice has already passed that point. And, of course, we are not talking only about infants, although for some reason “breakthroughs” in what we give ourselves permission to do to people usually begin with little people, and with the old or very sick.
There is, at all stages of life, an obvious connection between the harvesting of healthy organs and the decision about when someone is dead, or should die. The question of euthanasia is thus an integral part of the progress of the eugenics project.
Of course the dispute over the merits and demerits of euthanasia has been with us for a very long time, going back to the Greeks and Romans, long before people attributed their decisions to the force of technological breakthroughs. But today the discussion is taking interesting turns.
The Dutch, it is generally acknowledged, are a very progressive people. That country’s program of voluntary euthanasia, which is said to account for up to 8,000 deaths per year, has recently received a great deal of attention in the American media. In the last year several television programs have dramatically contrasted American practice with the more advanced and humane approach of the Dutch. A report in the Wall Street Journal declares, “The Netherlands is pioneering in an area that in the coming decade is likely to be a focus of medical, legal, ethical—and intensely emotional—debate in many industrialized countries.” A spokesman for the Royal Dutch Medical Association explains, “What we are seeing now is the result of processes and technology that keep people alive too long, people who are suffering, people you cannot help in any real way.” Daniel Callahan of the Hastings Center has made an intriguing contribution to our language by describing such people as the “biologically tenacious.”
Not everyone, it should be noted, is enamored of the Dutch program. For instance, Dr. Richard Fenigsen of the Willem-Alexander Hospital in the Netherlands cites a number of studies indicating that one problem with the voluntary-euthanasia program is that it is frequently not voluntary. At some of the major hospitals, general practitioners seeking to admit elderly patients are advised to administer lethal injections instead. Involuntary active euthanasia (direct intervention to terminate a patient’s life without the patient’s permission) has not yet been incorporated into law, but there is great judicial leniency. For example, a doctor suspected of killing twenty residents of a senior-citizens’ home pleaded guilty to killing five, was convicted of killing three, and was given a fine.
As might be expected, these developments both reflect and effect changes in popular attitudes. In a recent Dutch opinion poll, 43 percent of the respondents favored involuntary euthanasia for “unconscious persons with little chance of recovery.” On another question, 33 percent had “much understanding” and 44 percent had “some understanding” for those who, out of mercy, kill their parents without their consent. Seventeen percent thought it “probable” that they would ask for involuntary active euthanasia for a demented relative.
The synod of the Reformed Church in Holland, desiring to offer moral guidance on coming to terms with the modern world, is perceived to be quite favorable in its attitude toward involuntary active euthanasia. Dutch of less advanced opinion, on the other hand, claim to have noticed a striking upsurge in the suspicion expressed by the elderly and sick toward doctors, hospitals, and their own families. (A Gallup poll reports that four times as many Americans would donate a relative’s organs as would donate their own. “Trust is at issue here,” commented Arthur Harrell of the American Council on Transplantation. “Some people are concerned that doctors will prematurely declare them brain-dead. Obviously, we try to allay that fear.” Obviously.)
On the Dutch situation, a historical footnote is of interest. The general humanity, indeed heroism, of the Dutch during World War II was made famous by the story of Anne Frank. Less well known is the story of the Dutch medical profession. When in 1941 Artur Seyss-Inquart, the Reich Commissar for the Netherlands, ordered physicians to cooperate by, for instance, concentrating their efforts on rehabilitating people who could be made fit for labor, the doctors of Holland unanimously refused. Seyss-Inquart then threatened to take away their medical licenses unless they cooperated at least to the extent of giving information about their patients to the Occupation authorities. Unanimously, the doctors of Holland responded by handing in their licenses, taking down their shingles, and seeing their patients secretly. They declared that they would not compromise their medical oath, which pledged them to work, solely and always, for the welfare of their patients. Seyss-Inquart persuaded and cajoled, and then he made an example of a hundred doctors whom he arrested and sent to the concentration camps. But all to no avail. The medical profession of Holland remained adamant. The doctors quietly took care of the widows and orphans of their condemned colleagues, but they would not give in. And so we are told that during the entire Occupation not one of the heroic doctors of Holland cooperated in the Nazi programs of slave labor, euthanasia, eugenic experimentation, and nontherapeutic sterilization.
But all that was a long time ago, and the Dutch doctors of today have so far forgotten it that the Committee on Medical Ethics of the European Community, in unanimously rejecting the proposals of the Dutch medical society on euthanasia, has expressed “hope that this strong reaction will induce [our] Dutch colleagues to reconsider their move and return to the happy communion of utmost respect for human life.”
If the Dutch are being urged to return from the abyss, in this country the forward stampede gains momentum, it seems, almost day by day. This spring voters in California may have the opportunity to vote in a referendum being pushed by Americans Against Human Suffering, the political arm of the Hemlock Society, which has been around for some years and claims 26,000 members in 26 chapters nationwide. The Hemlock Society’s motto is “Good Life, Good Death,” and the referendum is promoted under the banner of the “right to die.” “We need a public debate on acts of euthanasia, and California has the best track record in the nation for taking unprecedented action,” says Derek Humphrey, founder of the Hemlock Society. (Mr. Humphrey has written a much acclaimed book on how he provided lethal drugs for his first wife to commit suicide when she had bone cancer.) It is confidently predicted that, even if this referendum fails in California, it will “raise the consciousness” of the nation and open the way to other initiatives.
The referendum, which would legalize active euthanasia and “assisted suicide,” is strongly opposed by the California Medical Association. “The public should realize that what we are talking about here is killing people,” says Catherine Hanson, the association’s legal counsel. “It is absolutely contrary to the entire medical ethic.” Proponents of the referendum counter that, in the light of recent developments, such a statement of absolutes is obsolete. They may well be right.
Certainly there has been in the last several years a rash of books, articles, and television programs promoting the “right to die” and, although it is usually not put this way, the permission, even the obligation, to kill. In such advocacy, the linkage is commonly made among abortion, fetal experimentation and exploitation, infanticide, and suicide. The basic argument advanced is the need for rational and scientific control over the untidiness of the human condition.
Among the prominent writers in this campaign are Jeffrey Lyon, Earl Shelp, Peter Singer, Helga Kuhse, and Robert Weir. Singer, for example, has famously argued that your average pig has more consciousness and therefore more right to protection than fetuses or human beings suffering from severe disabilities. (Other animal-rights advocates have exhibited some ambivalence toward this line of argument, knowing that it is human beings, not pigs, that they need to persuade of the rightness of their cause.) In the eugenics literature dealing with issues such as infanticide and suicide, champions of progress typically inveigh against the baneful influence of Christianity in perpetuating irrational “taboos.” This would seem to neglect both the proscriptions against homicide in the Jewish tradition and the wondrous flexibility demonstrated by many Christians in accommodating what are thought to be the imperatives of the modern world.
The current eugenics literature is admirably candid about the radicality of what is being proposed. Shelp, for instance, declares that “it is proper to treat unequals unequally,” and warns against “a tyranny of the dependent in which the production of able persons is consumed by the almost limitless needs of dependent beings.” Lyon recognizes that many severely handicapped people succeed in living happy, productive, and even inspirational lives. But such people are aberrations (“dynamic, overachieving supercripples”) and should not be permitted to distract our attention from the need for a rational public policy that must, perforce, deal with the generality.
In his very useful study, The Nazi Doctors, Robert Jay Lifton details the progress of the “medicalization of killing” under the Third Reich. The concept of lebensunwertes Leben (“life not worthy of life”) was used to cut a wide swath, including the unfit newborn, the mentally ill, the gravely handicapped, the useless aged, and, of course, several races that fell into the category of the “subhuman.” It must be acknowledged that, except for tracts issued from the fever swamps surrounding the eugenics project, few people today include a racial factor in calculations of who does and who does not have sufficient “quality of life” to continue living. Here the inhibition against racial discrimination seems to be one “taboo” still firmly in place.
It must be further acknowledged that in the literature there is considerably more moral agonizing about ending the lives of people who have previously been recognized as rational and productive citizens. But in the cases of unfit newborns and human life that is “incapable of full social participation,” the decision to terminate is relatively uncomplicated. A rational quality-of-life measurement makes it clear that their lives are not a good for them. Thus the Nobel Prize winners Francis Crick and James Watson, co-discoverers of the structure of DNA, think that newborn infants should be subjected to rigorous examination and should be permitted to live only if they are found fit. Many who find the proposal repugnant are sure that there is a convincing argument against it, but it does not come readily to mind.
Critics contend, however, that the question of whether life is a good for the person gets things backward. The argument of the critics is that life is a good of the person, and that depriving the innocent of such life is tantamount to homicide. In current debates that argument is widely dismissed as “vitalism,” which presumably depends upon a metaphysical belief regarding the status of life rather than a rational judgment regarding the quality of the life actually being lived. Admittedly and inevitably, in all cases somebody is making a decision. Such decisions become especially tricky in the instances of involuntary euthanasia or “assisted suicide.”
Despite the avid promotion of death with dignity, living wills, and related ideas, the vast majority of people do not, for whatever reason, clearly indicate in advance the circumstances in which they wish to be killed. This results in numerous instances, especially with respect to the biologically tenacious aged, of “subhuman life” being a heavy burden upon family and the medical staff. At this point the assistance offered in assisted suicide must be generously defined, including the decision to make a decision for people who cannot decide. Ethics committees around the country have helpfully developed quality-of-life indexes by which it is possible to make a “best-interest” judgment, also called a “reasonable-person” judgment. That is to say, others decide to terminate a patient’s life on the basis of what it is assumed the patient would decide were he a reasonable person acting in his own best interest.
This form of “substituted judgment” has led to concepts such as surrogate suicide or substitute suicide, although of course it is always the other person who dies. Perhaps not surprisingly, when the questions are posed in these ways it is usually decided on behalf of the other party that he or she would decide to stop being a burden to the people who are actually making the decision.
In current practice and discussion, there is not yet a consensus in favor of active euthanasia by administering a lethal dosage or otherwise actually killing the person. A consensus is rapidly forming, however, on withholding food and hydration in order to “facilitate the dying process.” This consensus requires the erasure of two distinctions of long standing in medical ethics and practice. The first distinction is between “ordinary” and “extraordinary” means of treatment. It is now widely, although by no means unanimously, agreed that, in the case of certain classes of patients, all treatment is extraordinary, and therefore not required and perhaps not ethically permitted.
The second distinction is between medical treatment and providing food and water. It used to be thought that providing food and water, including intravenously, is a matter of ordinary obligation. The argument is now in the ascendancy, however, that providing food and water constitutes medical treatment. And, again, in specified cases any medical treatment falls into the category of “extraordinary means” which are neither required nor to be countenanced. With the withdrawing of food and water the decision has been made to intervene actively with the clear and sole intent of hastening death. That is to say, the decision has been made for euthanasia or mercy killing. The only question now is how death is to be effected. Starvation is a very clumsy means. The person may live for days, there is often frightful physical disfigurement, and there is the unknown factor of prolonged pain. The attractiveness of starvation to the morally queasy is that it is the “least direct” means of hastening death.
But once we have grown more comfortable with the euthanasia decision that has been made, it seems almost certain that medical practice will adopt means that are more efficient and less aesthetically disturbing. Starvation must thus be seen as a provisional technique to be employed only until medical practice and public opinion are prepared for more rational measures.
It should not be thought that these developments have to do only with the comatose, the “biologically tenacious” drug-sated aged, or others in imminent danger of dying. Those are of course the cases highlighted by euthanasia enthusiasts, for such cases lend themselves to emotionally powerful statements about needlessly prolonging “meaningless” human life, and about the burden that such life is to others. Traditional medical ethics has long allowed the removal of means of sustenance from those near death if the means are counterproductive or ineffectual. In other words, if the feeding instrument is causing other severe disabilities, or the body is not able to assimilate the food, or the person is within hours of dying no matter what is done, intravenous feeding should be discontinued. But what is now being proposed and what is now being done goes much further, including direct intervention to terminate broad categories of people suffering from quality-of-life deficiencies.
The new approach received intense national attention a few years ago in the Baby Doe case in Indiana. There a court allowed parents to starve to death their handicapped baby, even though dozens of couples volunteered to adopt the child. Since then there have been well-publicized cases of adults injured in accidents or suffering from crippling diseases who have been starved to death, although they gave no indication that they wished to die and, at least according to some observers, indicated a will to live. Many questions, of course, have been raised about such cases, most of which are addressed by a recent report from the Hastings Center, “Guidelines on the Termination of Life-Sustaining Treatment and the Care of the Dying.”
The panel that issued this report in September 1987 proposes very broad categories of people for whom medical treatment, including the supply of food and water, might be terminated. One category, for instance, is “the patient who has an illness or disabling condition that is severe and irreversible.” That would seem to offer distinct possibilities for reducing the population of nursing homes, mental institutions, and a good many hospital wards, thus dramatically relieving pressure on scarce medical resources. The panel focuses on people in such categories who “lack decision-making capacity” with respect to whether they wish to live. In these cases a substituted judgment is required and the “reasonable-person” standard should be applied. The standard is put this way: “Would a reasonable person in the patient’s circumstances probably prefer the termination of treatment because the patient’s life is largely devoid of opportunities to achieve satisfaction, or full of pain or suffering with no corresponding benefits?” The panel wants it understood that it is being cautious and is sensitive to possible “abuses” of the approach it recommends. Substituted judgments should be carefully reviewed by several parties, including doctors and ethics committees. After listing the several categories of people who are candidates for termination, the report states, “The above list in no way suggests that treatment should be forgone just because a person falls into one of these categories; nor does it mean that treatment may not be terminated for other patients.” The latter statement, one notes, sharply qualifies and may in some instances nullify the former, despite the former’s being italicized. (Treatment, keep in mind, includes supplying food and water.)
Much depends on what is meant by the person’s “capacity” to make a decision about whether he wishes to die. “These guidelines define decision-making capacity as: (a) the ability to comprehend information relevant to the decision; (b) the ability to deliberate about the choices in accordance with personal values and goals; and (c) the ability to communicate (verbally or nonverbally) with caregivers.”
Any experienced medical “caregiver” will recognize that this constitutes a pretty tall order for many patients. For example, deliberating about choices in accordance with personal values and goals is difficult for many people under the best of circumstances. Yet the panel urges “respect for the patient as a self-determining individual” and cautions against “wresting control from the patient with decision-making capacity.” Capacity, we are told, should not be confused with competence, which is a legal term. “A person can be legally competent and nonetheless lack the capacity to make a particular treatment decision.” Capacity turns out to be a marvelously plastic measure. “Capacity is not an all-or-nothing matter; there is a spectrum of abilities, and capacity can fluctuate over time and in different circumstances.” For instance, “Extreme instability of preference may itself be a form of decision-making incapacity.” The patient who yesterday wanted to die and today just as intensely wants to live clearly does not have the capacity to understand what is in his best interest.
The Hastings Center guidelines, which emerged from a project involving twenty experts over two-and-a-half years, have been widely hailed. The New York Times reported that “experts say no such comprehensive guidelines have been developed before,” and the study “breaks important ground.” A closer look at the panel, however, indicates that the document, contra its publicity, may not reflect such an impressive consensus among experts.
Five of the twenty members of the project, including director Daniel Callahan, are from the staff of the Hastings Center. Of the remainder, there is a strong representation of people interested in medical malpractice law and of others involved in the administration of nursing homes. Without impugning motives, it might be suggested that such people have a vested interest in more relaxed rules for the treatment of people who lack “decision-making capacity.”
In addition, two members of the panel who are ethicists issued substantive written dissents. Leslie Steven Rotenberg of Los Angeles, who has also publicly challenged the Loma Linda proceedings discussed earlier, is quite forthright: “I fear these guidelines, if widely endorsed, may be used to give a moral ‘imprimatur’ to undertreating or failing to treat persons with disabilities, unconscious persons for whom accurate prognoses are not yet obtainable, elderly patients with severe dementia, and others whose treatment is not believed (to use the language of the report) ‘costworthy.’”
Despite all this, the Hastings Center report is celebrated as a landmark document by proponents of the eugenics project, and is now being invoked in public debates, court cases, and state legislatures around the country.
The director of the Hastings Center, Daniel Callahan, is frequently described as the most widely respected authority on medical ethics in America. Be that as it may, he has certainly been at the center of these discussions for almost twenty years and has recently stirred a lively discussion with his book Setting Limits: Medical Goals in an Aging Society.2 Callahan urges us yet once more to brace ourselves for the thinking of the unthinkable. The basic proposal is that there should be an age limit, perhaps eighty-five, beyond which there will be no government funding for life-extending medical care. Because Callahan is a decent and intelligent man, the proposal is almost painfully nuanced and surrounded by myriad qualifications. Indeed, his is a deeply conflicted and often confused argument. Thus, he offers extensive data indicating that America simply cannot afford quality medical care for a rapidly aging population but, at the same time, he insists that his proposal should not be adopted for purely fiscal reasons. Again, he repeatedly says that his proposal would be “dangerous” and “morally mischievous” without major changes in cultural attitudes toward aging and death, and such changes, he says, may take generations. Yet he persists in making his proposal now.
Some of the changes advocated by Callahan are surely to be welcomed. Drawing on the work of Leon Kass of the University of Chicago, he urges our accepting the idea that there is such a thing as “a natural life span.” In this respect Callahan sets himself against the eugenics project with its delusory dream of immortality through technological control. Yet he simultaneously subscribes to a quality-of-life index by which “natural” limits, such as severe disability, are not accepted but taken to be signs of a life not worth living. Callahan is well aware of the Nazi doctrine of lebensunwertes Leben and notes that, in the light of the Nazi experience, “there has been a justifiable reluctance to exclude borderline cases from the human community.” That reluctance can be overcome, however, if we keep it firmly in mind that the Nazis “spoke all too readily of ‘a life not worth living,’” and if we ourselves are very careful when we speak the same way.
Callahan clearly wants to distance himself from the proponents of euthanasia, assisted suicide, and other such measures. But he also argues that “artificial” feeding is a medical treatment and should be discontinued in the case of patients suffering from severe quality-of-life deficiency. Lacking any ethical framework other than liberal individualism, Callahan stresses respect for the patient’s decision, or, as it turns out, those who decide for the patient when the patient is “incapable.” What it comes down to is quite bluntly stated: “At stake is how far and in what ways we are emotionally prepared to go to terminate life for the elderly.”
The sentence is typical of the logic of the eugenics project and interesting in several respects. For instance, it is said that we are terminating life “for” other people, rather than terminating the life “of” other people, it being assumed by the “reasonable-person standard” that we are doing them a favor. As important, we are told that what is at stake is what we are “emotionally prepared” to do. For many people, that is a slight barrier indeed. In this way of thinking, the accent is on freedom, voluntarism, and choice. Nobody is allowed to “impose his values” on others. You are free to decide not to terminate your elderly parent or handicapped child, but you must also agree not to interfere with my decision to “terminate life for” the incapacitated who fall within my decision-making authority. (It is worth noting that the Hastings Center guidelines do include “religious exemptions” for people who have religiously grounded inhibitions about the policies proposed.)
Daniel Callahan is a spirited opponent of the slippery-slope metaphor, insisting that one thing does not necessarily, or even probably, lead to another. But his own emotional preparedness with respect to the treatment of the dependent and incapable has undergone a remarkable development. In the October 1983 issue of the Hastings Center Report he wrote forcefully against withdrawing food and water. “Given the increasingly large pool of superannuated, chronically ill, physically marginal elderly, it could well become the nontreatment of choice.” He added, “Because we have now become sufficiently habituated to the idea of turning off a respirator, we are psychologically prepared to go one step further.” In 1983 Callahan was convinced that “the feeding of the hungry, whether because they are poor or because they are physically unable to feed themselves, is the most fundamental of all human relationships. It is the perfect symbol of the fact that human life is inescapably social and communal. We cannot live at all unless others are prepared to give us food and water when we need them. . . . It is a most dangerous business to tamper with, or adulterate, so enduring and central a moral emotion.” Four years later Callahan invites us, not to tamper with or adulterate, but to discard that moral emotion. It is, after all, but an emotion. One may perhaps be forgiven for thinking that Callahan dramatically illustrates the slippery slope that he so vigorously denies.
To be sure, there is nothing wrong with changing one’s mind, and people like Daniel Callahan may simply say that they have thought things through more carefully. As he himself suggests, however, this is not a matter of thinking one’s way through but of feeling one’s way through. We need no longer think about the unthinkable when, in time, it has become emotionally tolerable, even banal. A useful term in this connection is primicide, the first murder. When it is first suggested that we do a murderous deed, we may respond, “But that would be murder!” After we have done it once, or maybe twice, that response loses something of its force of conviction. As a barrier to evil, novelty is a one-time thing; it cannot be reinstated. In the 1930’s a hit man for Murder Inc. was on trial. The prosecutor asked him how he felt when committing a murder. He in turn asked the prosecutor how he felt when he tried his first case in court, to which the prosecutor allowed that he was nervous, but he got used to it. “It’s the same with murder,” observed the hit man, “you get used to it.”
Champions of the eugenics project are deeply and understandably offended when it is said that they are advocating murder. For some reason they do not take offense when the statement is amended to say that they are advocating what used to be called murder.
The attempt to deny risk and suffering, the use and elimination of the unfit—these were all elements of the old eugenics. But what earlier eugenists could only dream about can now be done; and, if it can be done, it likely will be done. In the technological possibility of creating “a new man in a new society,” we have a vision that makes the similar ambition of political totalitarians seem modest by comparison.
Of course there are serious people worrying about that ominous prospect. But it seems that soaring hubris, joined to technical capacity, has broken the bonds of moral restraint. That the bonds are broken is evident enough in the very efforts designed to impose limits.
Thus not long ago textbooks in ethics used to set forth the moral principle that each person counts for one, and none counts for more or less than one. A standard illustration of the principle was the hypothetical case of a hospital with five patients, four of them persons of world-class accomplishment (a statesman, musician, mathematician, and philosopher), the fifth a mental deficient without means or kin. The fifth does, however, have the healthy organs which, if transplanted, could save the lives of the other four. The point was that it could never be right to kill the one in order to save the four, for people are always to be treated as ends and never as means. It was a venerable principle in the history of Western thought. Today the principle is becoming the hypothesis, and the illustration no longer illustrates anything but a “morally agonizing dilemma” to be gravely faced in consultation with surgeons, social workers, ministers, and ethicists.
Or consider, once again, Britain’s Warnock Committee. Its chairman, Dame Mary Warnock, flatly states, “There is no such thing as a moral expert.” This may suggest that, as a teacher of moral philosophy at Cambridge, Dame Mary is taking her salary under false pretenses, but that is a question for her and her conscience. More immediate to our concern is the assumption that on issues of life and of death, of birth and of the family, “everyone has a right to judge for himself.” This is the perfect formula of what Alasdair MacIntyre calls the ethics of “modern emotivism.” Step by step, the committee states that, since A is allowed, there is no rational reason for disallowing B. It is, as Daniel Callahan might say, a question of emotional preparedness. Of course the committee knows that some matters of life and death must be regulated by law, but law is a weak reed in the absence of moral reasoning. As Dame Mary writes, “We were bound to have recourse to moral sentiment, to try, that is, to sort out what our feelings were, and to justify them.” Most of us, it might be noted, are very good at justifying our feelings.
Studies such as that of the Warnock Committee are not done in a social vacuum. The people involved recognize that they are morally accountable to society and, we are told, “Society feels, albeit obscurely, that its members, especially the most helpless, such as children and the very old, must be protected against possible exploitation by enthusiastic scientists: and embryos are brought into the category of those deserving protection, just as animals are. This is a matter of public, and widely shared, sentiment” (emphasis in original). But the obscure feelings of society are marvelously malleable. So the committee states, “The question must ultimately be . . . in what sort of society can we live with our conscience clear?” That, take note, is the ultimate question.
Dame Mary wants it known that she is not unaware of the dangers in this line of thinking. There is, she says, “an increasing sense of urgency” that social controls “should be brought up to date, so that society may be protected from its real and very proper fear of a rudderless voyage into unknown and threatening seas.”
And so, according to the Warnock Committee, we have embarked upon this parlous voyage guided by public opinion, technological innovation, and obscure moral feelings, headed toward a society in which we can live with our conscience clear. (It is worth noting that eight of the sixteen members of the committee issued dissents of varying substance. Even so, the Warnock report is hailed as a landmark by the champions of the return of eugenics.)
Of a very different order is last year’s document from the Vatican, “An Instruction on Respect for Human Life in Its Origin and on the Dignity of Procreation.” Insisting on the unity of the relational and procreative in human sexuality, the document condemns the new eugenics in no uncertain terms. Procedures such as those countenanced by the Warnock Committee, says the Vatican, are not acceptable. “These interventions are not to be rejected on the grounds that they are artificial” but because they violate the dignity of the human person.
Charles Krauthammer, among others, has treated the Vatican Instruction with respect, acknowledging that it is “intellectually more satisfying” than committee products such as Warnock. But he thinks the Vatican statement is also “far less useful.” He cites the injunction of the Talmud, “Make ye a fence to the law.” A fence prohibits actions that, although not in themselves wrong, open the way to wrong. The problem with the Vatican statement, says Krauthammer, is that it is “a fence too far.” The Vatican, he writes, “sees what hell lies at the bottom of the slippery slope, and rather than erect bulwarks, detours, and sandbags, it declares the entire mountain off-limits.” For Krauthammer, “There is no way off the slope.” “Better,” he asserts, “to find a reasonable way to live on it.”
At best, it seems, we can slow the slide to what Krauthammer calls the “hellish center” at the bottom. Reports such as that of the Warnock Committee recommend detailed ethical examination of every inch of our downward slide, and they would even put some provisional obstacles in the way, but their very logic precludes the erection of any fences at all, whether near or far. More than that, they invite the conclusion that there is no hell that the fit and the flexible could not learn to live in with a “clear conscience.”
When it comes to the elimination of the unfit, Robert Destro, law professor and member of the United States Civil Rights Commission, believes there might be some safety in the legal tradition and in existing laws. “The prejudice against the disabled and those with mental disabilities,” he writes in the Journal of Contemporary Health Law and Policy, “is a strong one, with a long and sordid history.” In recent years, civil-rights law in particular has been significantly extended to include the handicapped. If courts are now to countenance discrimination against the mentally and physically handicapped by permitting guardians to starve their wards, says Destro, “they should do so directly rather than mask their decisions in high-sounding arguments claiming to rely on ‘privacy’ and ‘self-determination.’” In cases where the ward is incompetent, Destro goes on, the only privacy and self-determination being served are those of the guardian, not those of the ward. On a collision course with the entire history of achievements in civil rights, “the law is in the process of adopting a functional definition of the value of the human person, but it is doing so by indirection.” Destro concludes: “Though it may take some time, I do believe that we will live to regret leaving to lawyers, doctors, judges, legislators, and ethicists the important task of deciding who among the disabled shall live, and who shall die. We have been down that road before.”
Writing in 1963, Mark H. Haller, a historian of the American eugenics movement, noted that since the war against the Nazis there were signs of “a renewed interest in eugenic problems, although the word eugenics has seldom been used.” He cited the noted eugenist Frederick Osborn who urged the movement to be patient, waiting for scientific knowledge, technology, and social attitudes to prepare the way for the radical changes required. Otherwise, said Osborn, the movement would make the mistakes it did in the past and would once again “turn public opinion back against eugenics.”
Twenty-five years later it seems the time is right. Perhaps the law, or maybe the remembrance of horrors past, will yet fend off the return of eugenics in its fullness. Perhaps popular moral judgment, drawn from older traditions of moral truth, will, through the democratic process, begin to erect fences. Perhaps our cultural leaders will rediscover modes of moral reason that appeal to a good beyond emotion. And perhaps not.
And so, quite suddenly it seems, we are facing questions for which we have no ready answers. The questions are being answered, however. Most of us, probably because we want to live with a clear conscience, prefer not to think about the answers that are being given. Later, we can say that we did not know.
1 I am sometimes asked whether I “believe in” the slippery slope, as though it requires an act of faith. I believe in the slippery slope the same way I believe in the Hudson River. It is there. There is no better metaphor to describe those cultural and technological skid marks which are evident to all who have eyes to see.
2 Simon & Schuster, 256 pp., $18.95.