The huge cultural authority science has acquired over the past century imposes large duties on every scientist. Scientists have acquired the power to impress and intimidate every time they open their mouths, and it is their responsibility to keep this power in mind no matter what they say or do. Too many have forgotten their obligation to approach with due respect the scholarly, artistic, religious, humanistic work that has always been mankind’s main spiritual support. Scientists are (on average) no more likely to understand this work than the man in the street is to understand quantum physics. But science used to know enough to approach cautiously and admire from outside, and to build its own work on a deep belief in human dignity. No longer.
Today science and the “philosophy of mind”—its thoughtful assistant, which is sometimes smarter than the boss—are threatening Western culture with the exact opposite of humanism. Call it roboticism. Man is the measure of all things, Protagoras said. Today we add, and computers are the measure of all men.
Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo. That is their right. But when scientists use this locker-room braggadocio to belittle the human viewpoint, to belittle human life and values and virtues and civilization and moral, spiritual, and religious discoveries, which is all we human beings possess or ever will, they have outrun their own empiricism. They are abusing their cultural standing. Science has become an international bully.
Nowhere is its bullying more outrageous than in its assault on the phenomenon known as subjectivity.
Your subjective, conscious experience is just as real as the tree outside your window or the photons striking your retina—even though you alone feel it. Many philosophers and scientists today tend to dismiss the subjective and focus wholly on an objective, third-person reality—a reality that would be just the same if men had no minds. They treat subjective reality as a footnote, or they ignore it, or they announce that, actually, it doesn’t even exist.
If scientists were rat-catchers, it wouldn’t matter. But right now, their views are threatening all sorts of intellectual and spiritual fields. The present problem originated at the intersection of artificial intelligence and philosophy of mind—in the question of what consciousness and mental states are all about, how they work, and what it would mean for a robot to have them. It has roots that stretch back to the behaviorism of the early 20th century, but the advent of computing lit the fuse of an intellectual crisis that blasted off in the 1960s and has been gaining altitude ever since.
The modern “mind fields” encompass artificial intelligence, cognitive psychology, and philosophy of mind. Researchers in these fields are profoundly split, and the chaos was on display in the ugliness occasioned by the publication of Thomas Nagel’s Mind & Cosmos in 2012. Nagel is an eminent philosopher and professor at NYU. In Mind & Cosmos, he shows with terse, meticulous thoroughness why mainstream thought on the workings of the mind is intellectually bankrupt. He explains why Darwinian evolution is insufficient to explain the emergence of consciousness—the capacity to feel or experience the world. He then offers his own ideas on consciousness, which are speculative, incomplete, tentative, and provocative—in the tradition of science and philosophy.
Nagel was immediately set on and (symbolically) beaten to death by all the leading punks, bullies, and hangers-on of the philosophical underworld. Attacking Darwin is the sin against the Holy Ghost that pious scientists are taught never to forgive. Even worse, Nagel is an atheist unwilling to express sufficient hatred of religion to satisfy other atheists. There is nothing religious about Nagel’s speculations; he believes that science has not come far enough to explain consciousness and that it must press on. He believes that Darwin is not sufficient.
The intelligentsia was so furious that it formed a lynch mob. In May 2013, the Chronicle of Higher Education ran a piece called “Where Thomas Nagel Went Wrong.” One paragraph was notable:
Whatever the validity of [Nagel’s] stance, its timing was certainly bad. The war between New Atheists and believers has become savage, with Richard Dawkins writing sentences like, “I have described atonement, the central doctrine of Christianity, as vicious, sadomasochistic, and repellent. We should also dismiss it as barking mad….” In that climate, saying anything nice at all about religion is a tactical error.
It’s the cowardice of the Chronicle’s statement that is alarming—as if the only conceivable response to a mass attack by killer hyenas were to run away. Nagel was assailed; almost everyone else ran.
The Kurzweil Cult.
The voice most strongly associated with what I’ve termed roboticism is that of Ray Kurzweil, a leading technologist and inventor. The Kurzweil Cult teaches that, given the rapid and ever-increasing pace of technological progress and change, a fateful crossover point is approaching. He calls this point the “singularity.” After the year 2045 (mark your calendars!), machine intelligence will dominate human intelligence to the extent that men will no longer understand machines any more than potato chips understand mathematical topology. Men will already have begun an orgy of machinification—implanting chips in their bodies and brains, and fine-tuning their own and their children’s genetic material. Kurzweil believes in “transhumanism,” the merging of men and machines. He believes human immortality is just around the corner. He works for Google.
Whether he knows it or not, Kurzweil believes in and longs for the death of mankind. Because if things work out as he predicts, there will still be life on Earth, but no human life. To predict that a man who lives forever and is built mainly of semiconductors is still a man is like predicting that a man with stainless steel skin, a small nuclear reactor for a stomach, and an IQ of 10,000 would still be a man. In fact we have no idea what he would be.
Each change in him might be defended as an improvement, but man as we know him is the top growth on a tall tree in a large forest: His kinship with his parents and ancestors and mankind at large, the experience of seeing his own reflection in human history and his fellow man—those things are the crucial part of who he is. If you make him grossly different, he is lost, with no reflection anywhere he looks. If you make lots of people grossly different, they are all lost together—cut adrift from their forebears, from human history and human experience. Of course we do know that whatever these creatures are, untransformed men will be unable to keep up with them. Their superhuman intelligence and strength will extinguish mankind as we know it, or reduce men to slaves or dogs. To wish for such a development is to play dice with the universe.
Luckily for mankind, there is (of course) no reason to believe that brilliant progress in any field will continue, much less accelerate; imagine predicting the state of space exploration today based on the events of 1960–1972. But the real flaw in the Kurzweil Cult’s sickening predictions is that machines do just what we tell them to. They act as they are built to act. We might in principle, in the future, build an armor-plated robot with a stratospheric IQ that refuses on principle to pay attention to human beings. Or an average dog lover might buy a German shepherd and patiently train it to rip him to shreds. Both deeds are conceivable, but in each case, sane persons are apt to intervene before the plan reaches completion.
Subjectivity is your private experience of the world: your sensations; your mental life and inner landscape; your experiences of sweet and bitter, blue and gold, soft and hard; your beliefs, plans, pains, hopes, fears, theories, imagined vacation trips and gardens and girlfriends and Ferraris; your sense of right and wrong, good and evil. This is your subjective world. It is just as real as the objective physical world.
The idea of objective reality is itself a masterpiece of Western thought—an idea we associate with Galileo and Descartes and other scientific revolutionaries of the 17th century. The only view of the world we can ever have is subjective, from inside our own heads. That we can agree nonetheless on the observable, exactly measurable, and predictable characteristics of objective reality is a remarkable fact. I can’t know that the color I call blue looks to me the same way it looks to you. And yet we both use the word blue to describe this color, and common sense suggests that your experience of blue is probably a lot like mine. Our ability to transcend the subjective and accept the existence of objective reality is the cornerstone of everything modern science has accomplished.
But that is not enough for the philosophers of mind. Many wish to banish subjectivity altogether. “The history of philosophy of mind over the past one hundred years,” the eminent philosopher John Searle has written, “has been in large part an attempt to get rid of the mental”—i.e., the subjective—“by showing that no mental phenomena exist over and above physical phenomena.”
Why bother? Because to present-day philosophers, Searle writes, “the subjectivist ontology of the mental seems intolerable.” That is, your states of mind (your desire for adventure, your fear of icebergs, the ship you imagine, the girl you recall) exist only subjectively, within your mind, and they can be examined and evaluated by you alone. They do not exist objectively. They are strictly internal to your own mind. And yet they do exist. This is intolerable! How in this modern, scientific world can we be forced to accept the existence of things that can’t be weighed or measured, tracked or photographed—that are strictly private, that can be observed by exactly one person each? Ridiculous! Or at least, damned annoying.
And yet your mind is, was, and will always be a room with a view. Your mental states exist inside this room you can never leave and no one else can ever enter. The world you perceive through the window of mind (where you can never go—where no one can ever go) is the objective world. Both worlds, inside and outside, are real.
The ever astonishing Rainer Maria Rilke captured this truth vividly in the opening lines of his eighth Duino Elegy, as translated by Stephen Mitchell: “With all its eyes the natural world looks out/into the Open. Only our eyes are turned backward…. We know what is really out there only from/the animal’s gaze.” We can never forget or disregard the room we are locked into forever.
The Brain as Computer.
The dominant, mainstream view of mind nowadays among philosophers and many scientists is computationalism, also known as cognitivism. This view is inspired by the idea that minds are to brains as software is to computers. “Think of the brain,” writes Daniel Dennett of Tufts University in his influential 1991 book Consciousness Explained, “as a computer.” In some ways this is an apt analogy. In others, it is crazy. At any rate, it is one of the intellectual milestones of modern times.
How did this “master analogy” become so influential?
Consider the mind. The mind has its own structure and laws: It has desires, emotions, imagination; it is conscious. But no mind can exist apart from the brain that “embodies” it. Yet the brain’s structure is different from the mind’s. The brain is a dense tangle of neurons and other cells in which neurons send electrical signals to other neurons downstream via a wash of neurotransmitter chemicals, like beach bums splashing each other with bucketfuls of water.
Two wholly different structures, one embodied by the other—this is also a precise description of computer software as it relates to computer hardware. Software has its own structure and laws (software being what any “program” or “application” is made of—any email program, web search engine, photo album, iPhone app, video game, anything at all). Software consists of lists of instructions that are given to the hardware—to a digital computer. Each instruction specifies one picayune operation on the numbers stored inside the computer. For example: Add two numbers. Move a number from one place to another. Look at some number and do this if it’s 0.
Large lists of tiny instructions become complex mathematical operations, and large bunches of those become even more sophisticated operations. And pretty soon your application is sending spacemen hurtling across your screen firing lasers at your avatar as you pelt the aliens with tennis balls and chat with your friends in Idaho or Algiers while sending notes to your girlfriend and keeping an eye on the comic-book news. You are swimming happily within the rich coral reef of your software “environment,” and the tiny instructions out of which the whole thing is built don’t matter to you at all. You don’t know them, can’t see them, are wholly unaware of them.
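The layering described in the last two paragraphs can be made concrete. Below is a minimal sketch, in Python, of a toy machine whose only instructions are of the picayune kind just mentioned—load a number, add two numbers, look at a number and branch—out of which a larger operation, multiplication, is composed. The machine, its instruction names, and its register design are illustrative inventions, not any real computer’s architecture.

```python
def run(program, registers):
    """Execute a list of tiny instructions against a set of named registers."""
    pc = 0  # program counter: which instruction we are on
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":                 # load a constant into a register
            dst, val = args
            registers[dst] = val
        elif op == "ADD":               # add one register into another
            dst, src = args
            registers[dst] += registers[src]
        elif op == "JUMP_IF_NOT_ZERO":  # look at a register; branch if nonzero
            reg, target = args
            if registers[reg] != 0:
                pc = target
                continue
        pc += 1
    return registers

# Multiplication built out of nothing but SET, ADD, and a conditional jump:
# repeatedly add x into an accumulator while counting n down to zero.
multiply = [
    ("SET", "acc", 0),
    ("SET", "neg1", -1),
    ("ADD", "acc", "x"),           # acc += x
    ("ADD", "n", "neg1"),          # n -= 1
    ("JUMP_IF_NOT_ZERO", "n", 2),  # loop back while n > 0
]

regs = run(multiply, {"x": 7, "n": 6})
print(regs["acc"])  # 42
```

Stack a few such layers—tiny instructions into arithmetic, arithmetic into routines, routines into applications—and you arrive at the coral reef: the video game and the chat window, with the instructions themselves wholly invisible.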
The gorgeously varied reefs called software are a topic of their own—just as the mind is. Software and computers are two different topics, just as the psychological or phenomenal study of mind is different from brain physiology. Even so, software cannot exist without digital computers, just as minds cannot exist without brains.
That is why today’s mainstream view of mind is based on exactly this analogy: Mind is to brain as software is to computer. The mind is the brain’s software—this is the core idea of computationalism.
Of course computationalists don’t all think alike. But they all believe in some version of this guiding analogy. Drew McDermott, my colleague in the computer science department at Yale University, is one of the most brilliant (and in some ways, the most heterodox) of computationalists. “The biological variety of computers differs in many ways from the kinds of computers engineers build,” he writes, “but the differences are superficial.” Note here that by biological computer, McDermott means brain.
McDermott believes that “computers can have minds”—minds built out of software, if the software is correctly conceived. In fact, McDermott writes, “as far as science is concerned, people are just a strange kind of animal that arrived fairly late on the scene….[My] purpose…is to increase the plausibility of the hypothesis that we are machines and to elaborate some of its consequences.”
John Heil of Washington University describes cognitivism this way: “Think about states of mind as something like strings of symbols, sentences.” In other words: a state of mind is like a list of numbers in a computer. And so, he writes, “mental operations are taken to be ‘computations over symbols.’” Thus, in the cognitivist view, when you decide, plan, or believe, you are computing, in the sense that software computes.
But what about consciousness? If the brain is merely a mechanism for thinking or problem-solving, how does it create consciousness?
Most computationalists default to the Origins of Gravy theory set forth by Walter Matthau in the film of Neil Simon’s The Odd Couple. Challenged to account for the emergence of gravy, Matthau explains that, when you cook a roast, “it comes.” That is basically how consciousness arises too, according to computationalists. It just comes.
In Consciousness Explained, Dennett lays out the essence of consciousness as follows: “The concepts of computer science provide the crutches of imagination we need to stumble across the terra incognita between our phenomenology as we know it by ‘introspection’ and our brains as science reveals them to us.” (Note the chuckle-quotes around introspection; for Dennett, introspection is an illusion.) Specifically: “Human consciousness can best be understood as the operation of a ‘von Neumannesque’ virtual machine.” Meaning, it is a software application (a virtual machine) designed to run on any ordinary computer. (Hence von Neumannesque: the great mathematician John von Neumann is associated with the invention of the digital computer as we know it.)
Thus consciousness is the result of running the right sort of program on an organic computer also called the human brain. If you were able to download the right app on your phone or laptop, it would be conscious, too. It wouldn’t merely talk or behave as if it were conscious. It would be conscious, with the same sort of rich mental landscape inside its head (or its processor or maybe hard drive) as you have inside yours: the anxious plans, the fragile fragrant memories, the ability to imagine a baseball game or the crunch of dry leaves underfoot. All that just by virtue of running the right program. That program would be complex and sophisticated, far more clever than anything we have today. But no different fundamentally, say the computationalists, from the latest video game.
But the master analogy—between mind and software, brain and computer—is fatally flawed. It falls apart once you mull these simple facts:
1. You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another.
2. You can run an endless series of different programs on any one computer, but only one “program” runs, or ever can run, on any one human brain.
3. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.
4. Computers can be erased; minds cannot.
5. Computers can be made to operate precisely as we choose; minds cannot.
There are more. Come up with them yourself. It’s easy.
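Point 3 in the list above—the transparency of software—is easy to demonstrate. The class below is an illustrative invention, not any real system: a running program whose entire internal state can be read off from outside at any moment, which is precisely what no observer can do with a mind.

```python
# A hypothetical program with some internal state. Every one of its
# variables is open to inspection from outside; nothing is private in
# the way a mind's contents are private.
class Program:
    def __init__(self):
        self.counter = 0
        self.mood = "idle"
        self.memory = []

    def step(self, item):
        self.counter += 1
        self.memory.append(item)
        self.mood = "busy"

p = Program()
p.step("first input")
p.step("second input")

# The precise state of the entire program, read off at this instant:
print(vars(p))
# {'counter': 2, 'mood': 'busy', 'memory': ['first input', 'second input']}
```

No analogous `vars()` exists for a person: to learn what your friend is thinking, you must ask him, and then trust his answer.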
There is a still deeper problem with computationalism. Mainstream computationalists treat the mind as if its purpose were merely to act and not to be. But the mind is for doing and being. Computers are machines, and idle machines are wasted. That is not true of your mind. Your mind might be wholly quiet, doing (“computing”) nothing; yet you might be feeling miserable or exalted, or awestruck by the beauty of the object in front of you, or inspired or resolute—and such moments might be the center of your mental life. Or you might merely be conscious. “I cannot see what flowers are at my feet,/Nor what soft incense hangs upon the boughs….Darkling I listen….” That was drafted by the computer known as John Keats.
Emotions in particular are not actions but states of being. And emotions are central to your mental life and can shape your behavior by allowing you to compare alternatives to determine which feels best. Jane Austen, Persuasion: “He walked to the window to recollect himself, and feel how he ought to behave.” Henry James, The Ambassadors: The heroine tells the hero, “no one feels so much as you. No—not any one.” She means that no one is so precise, penetrating, and sympathetic an observer.
Computationalists cannot account for emotion. It fits as badly as consciousness into the mind-as-software scheme.
The Body and the Mind.
And there is (at least) one more area of special vulnerability in the computationalist worldview. Computationalists believe that the mind is embodied by the brain, and the brain is simply an organic computer. But in fact, the mind is embodied not by the brain but by the brain and the body, intimately interleaved. Emotions are mental states one feels physically; thus they are states of mind and body simultaneously. (Angry, happy, awestruck, relieved—these are physical as well as mental states.) Sensations are simultaneously mental and physical phenomena. Wordsworth writes about his memories of the River Wye: “I have owed to them/In hours of weariness, sensations sweet,/Felt in the blood, and felt along the heart/And passing even into my purer mind…”
Where does the physical end and the mental begin? The resonance between mental and bodily states is a subtle but important aspect of mind. Bodily sensations bring about mental states that cause those sensations to change and, in turn, the mental states to develop further. You are embarrassed, and blush; feeling yourself blush, your embarrassment increases. Your blush deepens. “A smile of pleasure lit his face. Conscious of that smile, [he] shook his head disapprovingly at his own state.” (Tolstoy.) As Dmitry Merezhkovsky writes brilliantly in his classic Tolstoy study, “Certain feelings impel us to corresponding movements, and, on the other hand, certain habitual movements impel to the corresponding mental states….Tolstoy, with inimitable art, uses this convertible connection between the internal and the external.”
All such mental phenomena depend on something like a brain and something like a body, or an accurate reproduction or simulation of certain aspects of the body. However hard or easy you rate the problem of building such a reproduction, computing has no wisdom to offer regarding the construction of human-like bodies—even supposing that it knows something about human-like minds.
I cite Keats, Rilke, Wordsworth, Tolstoy, and Jane Austen because these “subjective humanists” can tell us, far more accurately than any scientist, what things are like inside the sealed room of the mind. When subjective humanism is recognized (under some name or other) as a school of thought in its own right, one of its characteristics will be looking to great authors for information about what the inside of the mind is like.
To say the same thing differently: Computers are information machines. They transform one batch of information into another. Computationalists often describe the mind as an “information processor.” But feelings are not information! Feelings are states of being. A feeling (mild wistfulness, say, on a warm summer morning) has, ordinarily, no information content at all. Wistful is simply a way to be.
Let’s be more precise: We are conscious, and consciousness has two aspects. To be conscious of a thing is to be aware of it (know about it, have information about it) and to experience it. Taste sweetness; see turquoise; hear an unresolved dissonance—each feels a certain way. To experience is to be some way, not to do some thing.
The whole subjective field of emotions, feelings, and consciousness fits poorly with the ideology of computationalism, and with the project of increasing “the plausibility of the hypothesis that we are machines.”
Thomas Nagel: “All these theories seem insufficient as analyses of the mental because they leave out something essential.” (My italics.) Namely? “The first-person, inner point of view of the conscious subject: for example, the way sugar tastes to you or the way red looks or anger feels.” All other mental states (not just sensations) are left out, too: beliefs and desires, pleasures and pains, whims, suspicions, longings, vague anxieties; the mental sights, sounds, and emotions that accompany your reading a novel or listening to music or daydreaming.
How could such important things be left out? Because functionalism is today’s dominant view among theorists of the mind, and functionalism leaves them out. It leaves these dirty boots on science’s back porch. Functionalism asks, “What does it mean to be, for example, thirsty?” The answer: Certain events (heat, hard work, not drinking) cause the state of mind called thirst. This state of mind, together with others, makes you want to do certain things (like take a drink). Now you understand what “I am thirsty” means. The mental (the state of thirst) has not been written out of the script, but it has been reduced to the merely physical and observable: to the weather, and what you’ve been doing, and what actions (take a drink) you plan to do.
But this explanation is no good, because “thirst” means, above all, that you feel thirsty. It is a way of being. You have a particular sensation. (That feeling, in turn, explains such expressions as “I am thirsty for knowledge,” although this “thirst” has nothing to do with the heat outside.)
And yet you can see the seductive quality of functionalism, and why it grew in prominence along with computers. No one knows how a computer can be made to feel anything, or whether such a thing is even possible. But once feeling and consciousness are eliminated, creating a computer mind becomes much easier. Nagel calls this view “a heroic triumph of ideological theory over common sense.”
Some thinkers insist otherwise. Experiencing sweetness or the fragrance of lavender or the burn of anger is merely a biochemical matter, they say. Certain neurons fire, certain neurotransmitters squirt forth into the inter-neuron gaps, other neurons fire, and the problem is solved: There is your anger, lavender, sweetness.
There are two versions of this idea: Maybe brain activity causes the sensation of anger or sweetness or a belief or desire; maybe, on the other hand, it just is the sensation of anger or sweetness—sweetness is certain brain events in the sense that water is H2O.
But how do those brain events bring about, or translate into, subjective mental states? How is this amazing trick done? What does it even mean, precisely, to cross from the physical to the mental realm?
The Zombie Argument.
Understanding subjective mental states ultimately comes down to understanding consciousness. And consciousness is even trickier than it seems at first, because there is a serious, thought-provoking argument that purports to show us that consciousness is not just mysterious but superfluous. It’s called the Zombie Argument. It’s a thought experiment that goes like this:
Imagine your best friend. You’ve known him for years, have had a million discussions, arguments, and deep conversations with him; you know his opinions, preferences, habits, and characteristic moods. Is it possible to suppose (just suppose) that he is in fact a zombie?
By zombie, philosophers mean a creature who looks and behaves just like a human being, but happens to be unconscious. He does everything an ordinary person does: walks and talks, eats and sleeps, argues, shouts, drives his car, lies on the beach. But there’s no one home: He (meaning it) is actually a robot with a computer for a brain. On the outside he looks like any human being: This robot’s behavior and appearance are wonderfully sophisticated.
No evidence makes you doubt that your best friend is human, but suppose you did ask him: Are you human? Are you conscious? The robot could be programmed to answer no. But it’s designed to seem human, so more likely its software generates an answer such as, “Of course I’m human, of course I’m conscious!—talk about stupid questions. Are you conscious? Are you human, and not half-monkey? Jerk.”
So that’s a robot zombie. Now imagine a “human” zombie, an organic zombie, a freak of nature: It behaves just like you, just like the robot zombie; it’s made of flesh and blood, but it’s unconscious. Can you imagine such a creature? Its brain would in fact be just like a computer: a complex control system that makes this creature speak and act exactly like a man. But it feels nothing and is conscious of nothing.
Many philosophers (on both sides of the argument about software minds) can indeed imagine such a creature. Which leads them to the next question: What is consciousness for? What does it accomplish? Put a real human and the organic zombie side by side. Ask them any questions you like. Follow them over the course of a day or a year. Nothing reveals which one is conscious. (They both claim to be.) Both seem like ordinary humans.
So why should we humans be equipped with consciousness? Darwinian theory explains that nature selects the best creatures on wholly practical grounds, based on survivable design and behavior. If zombies and humans behave the same way all the time, one group would be just as able to survive as the other. So why would nature have taken the trouble to invent an elaborate thing like consciousness, when it could have done without it just as well?
Such questions have led the Australian philosopher of mind David Chalmers to argue that consciousness doesn’t “follow logically” from the design of the universe as we know it scientifically. Nothing stops us from imagining a universe exactly like ours in every respect except that consciousness does not exist.
Nagel believes that “our mental lives, including our subjective experiences” are “strongly connected with and probably strictly dependent on physical events in our brains.” But—and this is the key to understanding why his book posed such a danger to the conventional wisdom in his field—Nagel also believes that explaining subjectivity and our conscious mental lives will take nothing less than a new scientific revolution. Ultimately, “conscious subjects and their mental lives” are “not describable by the physical sciences.” He awaits “major scientific advances,” “the creation of new concepts” before we can understand how consciousness works. Physics and biology as we understand them today don’t seem to have the answers.
On consciousness and subjectivity, science still has elementary work to do. That work will be done correctly only if researchers understand what subjectivity is, and why it shares the cosmos with objective reality.
Of course the deep and difficult problem of why consciousness exists doesn’t hold for Jews and Christians. Just as God anchors morality, God’s is the viewpoint that knows you are conscious. Knows and cares: Good and evil, sanctity and sin, right and wrong presuppose consciousness. When free will is understood, at last, as an aspect of emotion and not behavior—we are free just insofar as we feel free—it will also be seen to depend on consciousness.
The Iron Rod.
In her book Absence of Mind, the novelist and essayist Marilynne Robinson writes that the basic assumption in every variant of “modern thought” is that “the experience and testimony of the individual mind is to be explained away, excluded from consideration.” She tells an anecdote about an anecdote. Several neurobiologists have written about an American railway worker named Phineas Gage. In 1848, when he was 25, an explosion drove an iron rod right through his brain and out the other side. His jaw was shattered and he lost an eye; but he recovered and returned to work, behaving just as he always had—except that now he had occasional rude outbursts of swearing and blaspheming, which (evidently) he had never had before.
Neurobiologists want to show that particular personality traits (such as good manners) emerge from particular regions of the brain. If a region is destroyed, the corresponding piece of personality is destroyed. Your mind is thus the mere product of your genes and your brain. You have nothing to do with it, because there is no subjective, individual you. “You” are what you say and do. Your inner mental world either doesn’t exist or doesn’t matter. In fact you might be a zombie; that wouldn’t matter either.
Robinson asks: But what about the actual man Gage? The neurobiologists say nothing about the fact that “Gage was suddenly disfigured and half blind, that he suffered prolonged infections of the brain,” that his most serious injuries were permanent. He was 25 years old and had no hope of recovery. Isn’t it possible, she asks, that his outbursts of angry swearing meant just what they usually mean—that the man was enraged and suffering? When the brain scientists tell this story, writes Robinson, “there is no sense at all that [Gage] was a human being who thought and felt, a man with a singular and terrible fate.”
Man is only a computer if you ignore everything that distinguishes him from a computer.
The Closing of the Scientific Mind.
That science should face crises in the early 21st century is inevitable. Power corrupts, and science today is the Catholic Church around the start of the 16th century: used to having its own way and dealing with heretics by excommunication, not argument.
Science is caught up, also, in the same educational breakdown that has brought so many other proud fields low. Science needs reasoned argument and constant skepticism and open-mindedness. But our leading universities have dedicated themselves to stamping them out—at least in all political areas. We routinely provide superb technical educations in science, mathematics, and technology to brilliant undergraduates and doctoral students. But if those same students have been taught since kindergarten that you are not permitted to question the doctrine of man-made global warming, or the line that men and women are interchangeable, or the multiculturalist idea that all cultures and nations are equally good (except for Western nations and cultures, which are worse), how will they ever become reasonable, skeptical scientists? They’ve been reared on the idea that questioning official doctrine is wrong, gauche, just unacceptable in polite society. (And if you are president of Harvard, it can get you fired.)
Beset by all this mold and fungus and corruption, science has continued to produce deep and brilliant work. Most scientists are skeptical about their own fields and hold their colleagues to rigorous standards. Recent years have seen remarkable advances in experimental and applied physics, planetary exploration and astronomy, genetics, physiology, synthetic materials, computing, and all sorts of other areas.
But we do have problems, and the struggle of subjective humanism against roboticism is one of the most important.
The moral claims urged on man by Judeo-Christian principles and his other religious and philosophical traditions have nothing to do with Earth’s being the center of the solar system or having been created in six days, or with the real or imagined absence of rational life elsewhere in the universe. The best and deepest moral laws we know tell us to revere human life and, above all, to be human: to treat all creatures, our fellow humans and the world at large, humanely. To behave like a human being (Yiddish: mensch) is to realize our best selves.
No other creature has a best self.
This is the real danger of anti-subjectivism, in an age when the collapse of religious education among Western elites has already made a whole generation morally wobbly. When scientists casually toss our human-centered worldview in the trash with the used coffee cups, they are re-smashing the sacred tablets, not in blind rage as Moses did, but in casual, ignorant indifference to the fate of mankind.
A world that is intimidated by science and bored sick with cynical, empty “postmodernism” desperately needs a new subjectivist, humanist, individualist worldview. We need science and scholarship and art and spiritual life to be fully human. The last three are withering, and almost no one understands the first.
The Kurzweil Cult is attractive enough to require opposition in a positive sense: alternative futures must be made clear. The cults that oppose Kurzweilism are called Judaism and Christianity. But they must and will evolve to meet new dangers in new worlds. The central text of Judeo-Christian religion in the tech-threatened, Googleplectic West of the 21st century might well be Deuteronomy 30:19: “I summon today as your witnesses the heavens and the earth: I have laid life and death before you, the blessing and the curse; choose life and live!—you and your children.”
The sanctity of life is what we must affirm against Kurzweilism and the nightmare of roboticism. Judaism has always preferred the celebration and sanctification of this life in this world to eschatological promises. My guess is that 21st-century Christian thought will move back toward its father and become increasingly Judaized, less focused on death and the afterlife and more on life here today (although my Christian friends will dislike my saying so). Both religions will teach, as they always have, the love of man for man—and that, over his lifetime (as Wordsworth writes at the very end of his masterpiece, The Prelude), “the mind of man becomes/A thousand times more beautiful than the earth/On which he dwells.”
At first, roboticism was just an intellectual school. Today it is a social disease. Some young people want to be robots (I’m serious); they eagerly await the day when electronic chips are implanted in their brains, making them smarter and better informed than anyone else (except for all their friends who have had the same chips implanted). Or they want to see the world through computer glasses that superimpose messages on poor naked nature. They are terrorist hostages in love with the terrorists.
All our striving for what is good and just and beautiful and sacred, for what gives meaning to human life and makes us (as Scripture says) “just a little lower than the angels,” and a little better than rats and cats, is invisible to the roboticist worldview. In the roboticist future, we will become what we believe ourselves to be: dogs with iPhones. The world needs a new subjectivist humanism now—not just scattered protests but a growing movement, a cry from the heart.