The huge cultural authority science has acquired over the past century imposes large duties on every scientist. Scientists have acquired the power to impress and intimidate every time they open their mouths, and it is their responsibility to keep this power in mind no matter what they say or do. Too many have forgotten their obligation to approach with due respect the scholarly, artistic, religious, humanistic work that has always been mankind’s main spiritual support. Scientists are (on average) no more likely to understand this work than the man in the street is to understand quantum physics. But science used to know enough to approach cautiously and admire from outside, and to build its own work on a deep belief in human dignity. No longer.
Today science and the “philosophy of mind”—its thoughtful assistant, which is sometimes smarter than the boss—are threatening Western culture with the exact opposite of humanism. Call it roboticism. Man is the measure of all things, Protagoras said. Today we add, and computers are the measure of all men.
Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo. That is their right. But when scientists use this locker-room braggadocio to belittle the human viewpoint, to belittle human life and values and virtues and civilization and moral, spiritual, and religious discoveries, which is all we human beings possess or ever will, they have outrun their own empiricism. They are abusing their cultural standing. Science has become an international bully.
Nowhere is its bullying more outrageous than in its assault on the phenomenon known as subjectivity.
Your subjective, conscious experience is just as real as the tree outside your window or the photons striking your retina—even though you alone feel it. Many philosophers and scientists today tend to dismiss the subjective and focus wholly on an objective, third-person reality—a reality that would be just the same if men had no minds. They treat subjective reality as a footnote, or they ignore it, or they announce that, actually, it doesn’t even exist.
If scientists were rat-catchers, it wouldn’t matter. But right now, their views are threatening all sorts of intellectual and spiritual fields. The present problem originated at the intersection of artificial intelligence and philosophy of mind—in the question of what consciousness and mental states are all about, how they work, and what it would mean for a robot to have them. It has roots that stretch back to the behaviorism of the early 20th century, but the advent of computing lit the fuse of an intellectual crisis that blasted off in the 1960s and has been gaining altitude ever since.
The modern “mind fields” encompass artificial intelligence, cognitive psychology, and philosophy of mind. Researchers in these fields are profoundly split, and the chaos was on display in the ugliness occasioned by the publication of Thomas Nagel’s Mind & Cosmos in 2012. Nagel is an eminent philosopher and professor at NYU. In Mind & Cosmos, he shows with terse, meticulous thoroughness why mainstream thought on the workings of the mind is intellectually bankrupt. He explains why Darwinian evolution is insufficient to explain the emergence of consciousness—the capacity to feel or experience the world. He then offers his own ideas on consciousness, which are speculative, incomplete, tentative, and provocative—in the tradition of science and philosophy.
Nagel was immediately set on and (symbolically) beaten to death by all the leading punks, bullies, and hangers-on of the philosophical underworld. Attacking Darwin is the sin against the Holy Ghost that pious scientists are taught never to forgive. Even worse, Nagel is an atheist unwilling to express sufficient hatred of religion to satisfy other atheists. There is nothing religious about Nagel’s speculations; he believes that science has not come far enough to explain consciousness and that it must press on. He believes that Darwin is not sufficient.
The intelligentsia was so furious that it formed a lynch mob. In May 2013, the Chronicle of Higher Education ran a piece called “Where Thomas Nagel Went Wrong.” One paragraph was notable:
Whatever the validity of [Nagel’s] stance, its timing was certainly bad. The war between New Atheists and believers has become savage, with Richard Dawkins writing sentences like, “I have described atonement, the central doctrine of Christianity, as vicious, sadomasochistic, and repellent. We should also dismiss it as barking mad….” In that climate, saying anything nice at all about religion is a tactical error.
It’s the cowardice of the Chronicle’s statement that is alarming—as if the only conceivable response to a mass attack by killer hyenas were to run away. Nagel was assailed; almost everyone else ran.
The Kurzweil Cult.
The voice most strongly associated with what I’ve termed roboticism is that of Ray Kurzweil, a leading technologist and inventor. The Kurzweil Cult teaches that, given the strong and ever-increasing pace of technological progress and change, a fateful crossover point is approaching. He calls this point the “singularity.” After the year 2045 (mark your calendars!), machine intelligence will dominate human intelligence to the extent that men will no longer understand machines any more than potato chips understand mathematical topology. Men will already have begun an orgy of machinification—implanting chips in their bodies and brains, and fine-tuning their own and their children’s genetic material. Kurzweil believes in “transhumanism,” the merging of men and machines. He believes human immortality is just around the corner. He works for Google.
Whether he knows it or not, Kurzweil believes in and longs for the death of mankind. Because if things work out as he predicts, there will still be life on Earth, but no human life. To predict that a man who lives forever and is built mainly of semiconductors is still a man is like predicting that a man with stainless steel skin, a small nuclear reactor for a stomach, and an IQ of 10,000 would still be a man. In fact we have no idea what he would be.
Each change in him might be defended as an improvement, but man as we know him is the top growth on a tall tree in a large forest: His kinship with his parents and ancestors and mankind at large, the experience of seeing his own reflection in human history and his fellow man—those things are the crucial part of who he is. If you make him grossly different, he is lost, with no reflection anywhere he looks. If you make lots of people grossly different, they are all lost together—cut adrift from their forebears, from human history and human experience. Of course we do know that whatever these creatures are, untransformed men will be unable to keep up with them. Their superhuman intelligence and strength will extinguish mankind as we know it, or reduce men to slaves or dogs. To wish for such a development is to play dice with the universe.
Luckily for mankind, there is (of course) no reason to believe that brilliant progress in any field will continue, much less accelerate; imagine predicting the state of space exploration today based on the events of 1960–1972. But the real flaw in the Kurzweil Cult’s sickening predictions is that machines do just what we tell them to. They act as they are built to act. We might in principle, in the future, build an armor-plated robot with a stratospheric IQ that refuses on principle to pay attention to human beings. Or an average dog lover might buy a German shepherd and patiently train it to rip him to shreds. Both deeds are conceivable, but in each case, sane persons are apt to intervene before the plan reaches completion.
Subjectivity is your private experience of the world: your sensations; your mental life and inner landscape; your experiences of sweet and bitter, blue and gold, soft and hard; your beliefs, plans, pains, hopes, fears, theories, imagined vacation trips and gardens and girlfriends and Ferraris, your sense of right and wrong, good and evil. This is your subjective world. It is just as real as the objective physical world.
Still, the idea of objective reality is a masterpiece of Western thought—an idea we associate with Galileo and Descartes and other scientific revolutionaries of the 17th century. The only view of the world we can ever have is subjective, from inside our own heads. That we can agree nonetheless on the observable, exactly measurable, and predictable characteristics of objective reality is a remarkable fact. I can’t know that the color I call blue looks to me the same way it looks to you. And yet we both use the word blue to describe this color, and common sense suggests that your experience of blue is probably a lot like mine. Our ability to transcend the subjective and accept the existence of objective reality is the cornerstone of everything modern science has accomplished.
But that is not enough for the philosophers of mind. Many wish to banish subjectivity altogether. “The history of philosophy of mind over the past one hundred years,” the eminent philosopher John Searle has written, “has been in large part an attempt to get rid of the mental”—i.e., the subjective—“by showing that no mental phenomena exist over and above physical phenomena.”
Why bother? Because to present-day philosophers, Searle writes, “the subjectivist ontology of the mental seems intolerable.” That is, your states of mind (your desire for adventure, your fear of icebergs, the ship you imagine, the girl you recall) exist only subjectively, within your mind, and they can be examined and evaluated by you alone. They do not exist objectively. They are strictly internal to your own mind. And yet they do exist. This is intolerable! How in this modern, scientific world can we be forced to accept the existence of things that can’t be weighed or measured, tracked or photographed—that are strictly private, that can be observed by exactly one person each? Ridiculous! Or at least, damned annoying.
And yet your mind is, was, and will always be a room with a view. Your mental states exist inside this room you can never leave and no one else can ever enter. The world you perceive through the window of mind (where you can never go—where no one can ever go) is the objective world. Both worlds, inside and outside, are real.
The ever astonishing Rainer Maria Rilke captured this truth vividly in the opening lines of his eighth Duino Elegy, as translated by Stephen Mitchell: “With all its eyes the natural world looks out/into the Open. Only our eyes are turned backward….We know what is really out there only from/the animal’s gaze.” We can never forget or disregard the room we are locked into forever.
The Brain as Computer.
The dominant, mainstream view of mind nowadays among philosophers and many scientists is computationalism, also known as cognitivism. This view is inspired by the idea that minds are to brains as software is to computers. “Think of the brain,” writes Daniel Dennett of Tufts University in his influential 1991 book Consciousness Explained, “as a computer.” In some ways this is an apt analogy. In others, it is crazy. At any rate, it is one of the intellectual milestones of modern times.
How did this “master analogy” become so influential?
Consider the mind. The mind has its own structure and laws: It has desires, emotions, imagination; it is conscious. But no mind can exist apart from the brain that “embodies” it. Yet the brain’s structure is different from the mind’s. The brain is a dense tangle of neurons and other cells in which neurons send electrical signals to other neurons downstream via a wash of neurotransmitter chemicals, like beach bums splashing each other with bucketfuls of water.
Two wholly different structures, one embodied by the other—this is also a precise description of computer software as it relates to computer hardware. Software has its own structure and laws (software being what any “program” or “application” is made of—any email program, web search engine, photo album, iPhone app, video game, anything at all). Software consists of lists of instructions that are given to the hardware—to a digital computer. Each instruction specifies one picayune operation on the numbers stored inside the computer. For example: Add two numbers. Move a number from one place to another. Look at some number and do this if it’s 0.
Large lists of tiny instructions become complex mathematical operations, and large bunches of those become even more sophisticated operations. And pretty soon your application is sending spacemen hurtling across your screen firing lasers at your avatar as you pelt the aliens with tennis balls and chat with your friends in Idaho or Algiers while sending notes to your girlfriend and keeping an eye on the comic-book news. You are swimming happily within the rich coral reef of your software “environment,” and the tiny instructions out of which the whole thing is built don’t matter to you at all. You don’t know them, can’t see them, are wholly unaware of them.
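The buildup the last two paragraphs describe—picayune instructions stacking into larger operations, which stack into whole applications—can be made concrete with a toy sketch. This is a hypothetical miniature machine written in Python, not any real instruction set; the opcodes (`ADD`, `MOV`, `JZ`, and so on) are invented for illustration, but the principle is the one the text names:

```python
# A toy machine: a handful of tiny instructions, each performing one
# picayune operation on numbers stored in named "registers".
# (A hypothetical sketch; real instruction sets differ in detail,
# but the principle of composition is the same.)

def run(program, registers):
    """Execute a list of (opcode, args...) instructions until HALT."""
    pc = 0  # program counter: index of the instruction to run next
    while True:
        op, *args = program[pc]
        if op == "ADD":                    # add two numbers
            a, b, dest = args
            registers[dest] = registers[a] + registers[b]
        elif op == "MOV":                  # move a number from one place to another
            src, dest = args
            registers[dest] = registers[src]
        elif op == "JZ":                   # look at a number; jump if it's 0
            reg, target = args
            if registers[reg] == 0:
                pc = target
                continue
        elif op == "DEC":                  # subtract one
            (reg,) = args
            registers[reg] -= 1
        elif op == "HALT":
            return registers
        pc += 1

# Out of these tiny steps, a "larger" operation emerges: multiply
# r0 by r1 via repeated addition (acc += r0, performed r1 times).
multiply = [
    ("MOV", "zero", "acc"),        # acc = 0
    ("JZ", "r1", 5),               # while r1 != 0:
    ("ADD", "acc", "r0", "acc"),   #     acc += r0
    ("DEC", "r1"),                 #     r1 -= 1
    ("JZ", "zero", 1),             # jump back to the loop test
    ("HALT",),
]

regs = run(multiply, {"r0": 6, "r1": 7, "acc": 0, "zero": 0})
print(regs["acc"])  # 42
```

Stack such routines a few thousand layers deep and you reach the coral reef: the video game or email program whose user never sees, and never needs to see, the tiny instructions underneath.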
The gorgeously varied reefs called software are a topic of their own—just as the mind is. Software and computers are two different topics, just as the psychological or phenomenal study of mind is different from brain physiology. Even so, software cannot exist without digital computers, just as minds cannot exist without brains.
That is why today’s mainstream view of mind is based on exactly this analogy: Mind is to brain as software is to computer. The mind is the brain’s software—this is the core idea of computationalism.
Of course computationalists don’t all think alike. But they all believe in some version of this guiding analogy. Drew McDermott, my colleague in the computer science department at Yale University, is one of the most brilliant (and in some ways, the most heterodox) of computationalists. “The biological variety of computers differs in many ways from the kinds of computers engineers build,” he writes, “but the differences are superficial.” Note here that by biological computer, McDermott means brain.
McDermott believes that “computers can have minds”—minds built out of software, if the software is correctly conceived. In fact, McDermott writes, “as far as science is concerned, people are just a strange kind of animal that arrived fairly late on the scene….[My] purpose…is to increase the plausibility of the hypothesis that we are machines and to elaborate some of its consequences.”
John Heil of Washington University describes cognitivism this way: “Think about states of mind as something like strings of symbols, sentences.” In other words: a state of mind is like a list of numbers in a computer. And so, he writes, “mental operations are taken to be ‘computations over symbols.’” Thus, in the cognitivist view, when you decide, plan, or believe, you are computing, in the sense that software computes.
But what about consciousness? If the brain is merely a mechanism for thinking or problem-solving, how does it create consciousness?
Most computationalists default to the Origins of Gravy theory set forth by Walter Matthau in the film of Neil Simon’s The Odd Couple. Challenged to account for the emergence of gravy, Matthau explains that, when you cook a roast, “it comes.” That is basically how consciousness arises too, according to computationalists. It just comes.
In Consciousness Explained, Dennett lays out the essence of consciousness as follows: “The concepts of computer science provide the crutches of imagination we need to stumble across the terra incognita between our phenomenology as we know it by ‘introspection’ and our brains as science reveals them to us.” (Note the chuckle-quotes around introspection; for Dennett, introspection is an illusion.) Specifically: “Human consciousness can best be understood as the operation of a ‘von Neumannesque’ virtual machine.” Meaning, it is a software application (a virtual machine) designed to run on any ordinary computer. (Hence von Neumannesque: the great mathematician John von Neumann is associated with the invention of the digital computer as we know it.)
Thus consciousness is the result of running the right sort of program on an organic computer also called the human brain. If you were able to download the right app on your phone or laptop, it would be conscious, too. It wouldn’t merely talk or behave as if it were conscious. It would be conscious, with the same sort of rich mental landscape inside its head (or its processor or maybe hard drive) as you have inside yours: the anxious plans, the fragile fragrant memories, the ability to imagine a baseball game or the crunch of dry leaves underfoot. All that just by virtue of running the right program. That program would be complex and sophisticated, far more clever than anything we have today. But no different fundamentally, say the computationalists, from the latest video game.
But the master analogy—between mind and software, brain and computer—is fatally flawed. It falls apart once you mull these simple facts:
1. You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another.
2. You can run an endless series of different programs on any one computer, but only one “program” runs, or ever can run, on any one human brain.
3. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.
4. Computers can be erased; minds cannot.
5. Computers can be made to operate precisely as we choose; minds cannot.
There are more. Come up with them yourself. It’s easy.
There is a still deeper problem with computationalism. Mainstream computationalists treat the mind as if its purpose were merely to act and not to be. But the mind is for doing and being. Computers are machines, and idle machines are wasted. That is not true of your mind. Your mind might be wholly quiet, doing (“computing”) nothing; yet you might be feeling miserable or exalted, or awestruck by the beauty of the object in front of you, or inspired or resolute—and such moments might be the center of your mental life. Or you might merely be conscious. “I cannot see what flowers are at my feet,/Nor what soft incense hangs upon the boughs….Darkling I listen….” That was drafted by the computer known as John Keats.
Emotions in particular are not actions but states of being. And emotions are central to your mental life and can shape your behavior by allowing you to compare alternatives to determine which feels best. Jane Austen, Persuasion: “He walked to the window to recollect himself, and feel how he ought to behave.” Henry James, The Ambassadors: The heroine tells the hero, “no one feels so much as you. No—not any one.” She means that no one is so precise, penetrating, and sympathetic an observer.
Computationalists cannot account for emotion. It fits as badly as consciousness into the mind-as-software scheme.
The Body and the Mind.
And there is (at least) one more area of special vulnerability in the computationalist worldview. Computationalists believe that the mind is embodied by the brain, and the brain is simply an organic computer. But in fact, the mind is embodied not by the brain but by the brain and the body, intimately interleaved. Emotions are mental states one feels physically; thus they are states of mind and body simultaneously. (Angry, happy, awestruck, relieved—these are physical as well as mental states.) Sensations are simultaneously mental and physical phenomena. Wordsworth writes about his memories of the River Wye: “I have owed to them/In hours of weariness, sensations sweet,/Felt in the blood, and felt along the heart/And passing even into my purer mind…”
Where does the physical end and the mental begin? The resonance between mental and bodily states is a subtle but important aspect of mind. Bodily sensations bring about mental states that cause those sensations to change and, in turn, the mental states to develop further. You are embarrassed, and blush; feeling yourself blush, your embarrassment increases. Your blush deepens. “A smile of pleasure lit his face. Conscious of that smile, [he] shook his head disapprovingly at his own state.” (Tolstoy.) As Dmitry Merezhkovsky writes brilliantly in his classic Tolstoy study, “Certain feelings impel us to corresponding movements, and, on the other hand, certain habitual movements impel to the corresponding mental states….Tolstoy, with inimitable art, uses this convertible connection between the internal and the external.”
All such mental phenomena depend on something like a brain and something like a body, or an accurate reproduction or simulation of certain aspects of the body. However hard or easy you rate the problem of building such a reproduction, computing has no wisdom to offer regarding the construction of human-like bodies—even supposing that it knows something about human-like minds.
I cite Keats or Rilke, Wordsworth, Tolstoy, Jane Austen because these “subjective humanists” can tell us, far more accurately than any scientist, what things are like inside the sealed room of the mind. When subjective humanism is recognized (under some name or other) as a school of thought in its own right, one of its characteristics will be looking to great authors for information about what the inside of the mind is like.
To say the same thing differently: Computers are information machines. They transform one batch of information into another. Computationalists often describe the mind as an “information processor.” But feelings are not information! Feelings are states of being. A feeling (mild wistfulness, say, on a warm summer morning) has, ordinarily, no information content at all. Wistful is simply a way to be.
Let’s be more precise: We are conscious, and consciousness has two aspects. To be conscious of a thing is to be aware of it (know about it, have information about it) and to experience it. Taste sweetness; see turquoise; hear an unresolved dissonance—each feels a certain way. To experience is to be some way, not to do some thing.
The whole subjective field of emotions, feelings, and consciousness fits poorly with the ideology of computationalism, and with the project of increasing “the plausibility of the hypothesis that we are machines.”
Thomas Nagel: “All these theories seem insufficient as analyses of the mental because they leave out something essential.” (My italics.) Namely? “The first-person, inner point of view of the conscious subject: for example, the way sugar tastes to you or the way red looks or anger feels.” All other mental states (not just sensations) are left out, too: beliefs and desires, pleasures and pains, whims, suspicions, longings, vague anxieties; the mental sights, sounds, and emotions that accompany your reading a novel or listening to music or daydreaming.
How could such important things be left out? Because functionalism is today’s dominant view among theorists of the mind, and functionalism leaves them out. It leaves these dirty boots on science’s back porch. Functionalism asks, “What does it mean to be, for example, thirsty?” The answer: Certain events (heat, hard work, not drinking) cause the state of mind called thirst. This state of mind, together with others, makes you want to do certain things (like take a drink). Now you understand what “I am thirsty” means. The mental (the state of thirst) has not been written out of the script, but it has been reduced to the merely physical and observable: to the weather, and what you’ve been doing, and what actions (take a drink) you plan to do.
But this explanation is no good, because “thirst” means, above all, that you feel thirsty. It is a way of being. You have a particular sensation. (That feeling, in turn, explains such expressions as “I am thirsty for knowledge,” although this “thirst” has nothing to do with the heat outside.)
And yet you can see the seductive quality of functionalism, and why it grew in prominence along with computers. No one knows how a computer can be made to feel anything, or whether such a thing is even possible. But once feeling and consciousness are eliminated, creating a computer mind becomes much easier. Nagel calls this view “a heroic triumph of ideological theory over common sense.”
Some thinkers insist otherwise. Experiencing sweetness or the fragrance of lavender or the burn of anger is merely a biochemical matter, they say. Certain neurons fire, certain neurotransmitters squirt forth into the inter-neuron gaps, other neurons fire and the problem is solved: There is your anger, lavender, sweetness.
There are two versions of this idea: Maybe brain activity causes the sensation of anger or sweetness or a belief or desire; maybe, on the other hand, it just is the sensation of anger or sweetness—sweetness is certain brain events in the sense that water is H2O.
But how do those brain events bring about, or translate into, subjective mental states? How is this amazing trick done? What does it even mean, precisely, to cross from the physical to the mental realm?
The Zombie Argument.
Understanding subjective mental states ultimately comes down to understanding consciousness. And consciousness is even trickier than it seems at first, because there is a serious, thought-provoking argument that purports to show us that consciousness is not just mysterious but superfluous. It’s called the Zombie Argument. It’s a thought experiment that goes like this:
Imagine your best friend. You’ve known him for years, have had a million discussions, arguments, and deep conversations with him; you know his opinions, preferences, habits, and characteristic moods. Is it possible to suppose (just suppose) that he is in fact a zombie?
By zombie, philosophers mean a creature who looks and behaves just like a human being, but happens to be unconscious. He does everything an ordinary person does: walks and talks, eats and sleeps, argues, shouts, drives his car, lies on the beach. But there’s no one home: He (meaning it) is actually a robot with a computer for a brain. On the outside he looks like any human being: This robot’s behavior and appearance are wonderfully sophisticated.
No evidence makes you doubt that your best friend is human, but suppose you did ask him: Are you human? Are you conscious? The robot could be programmed to answer no. But it’s designed to seem human, so more likely its software produces an answer such as, “Of course I’m human, of course I’m conscious!—talk about stupid questions. Are you conscious? Are you human, and not half-monkey? Jerk.”
So that’s a robot zombie. Now imagine a “human” zombie, an organic zombie, a freak of nature: It behaves just like you, just like the robot zombie; it’s made of flesh and blood, but it’s unconscious. Can you imagine such a creature? Its brain would in fact be just like a computer: a complex control system that makes this creature speak and act exactly like a man. But it feels nothing and is conscious of nothing.
Many philosophers (on both sides of the argument about software minds) can indeed imagine such a creature. Which leads them to the next question: What is consciousness for? What does it accomplish? Put a real human and the organic zombie side by side. Ask them any questions you like. Follow them over the course of a day or a year. Nothing reveals which one is conscious. (They both claim to be.) Both seem like ordinary humans.
So why should we humans be equipped with consciousness? Darwinian theory explains that nature selects the best creatures on wholly practical grounds, based on survivable design and behavior. If zombies and humans behave the same way all the time, one group would be just as able to survive as the other. So why would nature have taken the trouble to invent an elaborate thing like consciousness, when it could have gotten along without it just as well?
Such questions have led the Australian philosopher of mind David Chalmers to argue that consciousness doesn’t “follow logically” from the design of the universe as we know it scientifically. Nothing stops us from imagining a universe exactly like ours in every respect except that consciousness does not exist.
Nagel believes that “our mental lives, including our subjective experiences” are “strongly connected with and probably strictly dependent on physical events in our brains.” But—and this is the key to understanding why his book posed such a danger to the conventional wisdom in his field—Nagel also believes that explaining subjectivity and our conscious mental lives will take nothing less than a new scientific revolution. Ultimately, “conscious subjects and their mental lives” are “not describable by the physical sciences.” He awaits “major scientific advances,” “the creation of new concepts” before we can understand how consciousness works. Physics and biology as we understand them today don’t seem to have the answers.
On consciousness and subjectivity, science still has elementary work to do. That work will be done correctly only if researchers understand what subjectivity is, and why it shares the cosmos with objective reality.
Of course the deep and difficult problem of why consciousness exists doesn’t hold for Jews and Christians. Just as God anchors morality, God’s is the viewpoint that knows you are conscious. Knows and cares: Good and evil, sanctity and sin, right and wrong presuppose consciousness. When free will is understood, at last, as an aspect of emotion and not behavior—we are free just insofar as we feel free—it will also be seen to depend on consciousness.
The Iron Rod.
In her book Absence of Mind, the novelist and essayist Marilynne Robinson writes that the basic assumption in every variant of “modern thought” is that “the experience and testimony of the individual mind is to be explained away, excluded from consideration.” She tells an anecdote about an anecdote. Several neurobiologists have written about an American railway worker named Phineas Gage. In 1848, when he was 25, an explosion drove an iron rod right through his brain and out the other side. His jaw was shattered and he lost an eye; but he recovered and returned to work, behaving just as he always had—except that now he had occasional rude outbursts of swearing and blaspheming, which (evidently) he had never had before.
Neurobiologists want to show that particular personality traits (such as good manners) emerge from particular regions of the brain. If a region is destroyed, the corresponding piece of personality is destroyed. Your mind is thus the mere product of your genes and your brain. You have nothing to do with it, because there is no subjective, individual you. “You” are what you say and do. Your inner mental world either doesn’t exist or doesn’t matter. In fact you might be a zombie; that wouldn’t matter either.
Robinson asks: But what about the actual man Gage? The neurobiologists say nothing about the fact that “Gage was suddenly disfigured and half blind, that he suffered prolonged infections of the brain,” that his most serious injuries were permanent. He was 25 years old and had no hope of recovery. Isn’t it possible, she asks, that his outbursts of angry swearing meant just what they usually mean—that the man was enraged and suffering? When the brain scientists tell this story, writes Robinson, “there is no sense at all that [Gage] was a human being who thought and felt, a man with a singular and terrible fate.”
Man is only a computer if you ignore everything that distinguishes him from a computer.
The Closing of the Scientific Mind.
That science should face crises in the early 21st century is inevitable. Power corrupts, and science today is the Catholic Church around the start of the 16th century: used to having its own way and dealing with heretics by excommunication, not argument.
Science is caught up, also, in the same educational breakdown that has brought so many other proud fields low. Science needs reasoned argument and constant skepticism and open-mindedness. But our leading universities have dedicated themselves to stamping them out—at least in all political areas. We routinely provide superb technical educations in science, mathematics, and technology to brilliant undergraduates and doctoral students. But if those same students have been taught since kindergarten that you are not permitted to question the doctrine of man-made global warming, or the line that men and women are interchangeable, or the multiculturalist idea that all cultures and nations are equally good (except for Western nations and cultures, which are worse), how will they ever become reasonable, skeptical scientists? They’ve been reared on the idea that questioning official doctrine is wrong, gauche, just unacceptable in polite society. (And if you are president of Harvard, it can get you fired.)
Beset by all this mold and fungus and corruption, science has continued to produce deep and brilliant work. Most scientists are skeptical about their own fields and hold their colleagues to rigorous standards. Recent years have seen remarkable advances in experimental and applied physics, planetary exploration and astronomy, genetics, physiology, synthetic materials, computing, and all sorts of other areas.
But we do have problems, and the struggle of subjective humanism against roboticism is one of the most important.
The moral claims urged on man by Judeo-Christian principles and his other religious and philosophical traditions have nothing to do with Earth’s being the center of the solar system or having been created in six days, or with the real or imagined absence of rational life elsewhere in the universe. The best and deepest moral laws we know tell us to revere human life and, above all, to be human: to treat all creatures, our fellow humans and the world at large, humanely. To behave like a human being (Yiddish: mensch) is to realize our best selves.
No other creature has a best self.
This is the real danger of anti-subjectivism, in an age where the collapse of religious education among Western elites has already made a whole generation morally wobbly. When scientists casually toss our human-centered worldview in the trash with the used coffee cups, they are re-smashing the sacred tablets, not in blind rage as Moses did, but in casual, ignorant indifference to the fate of mankind.
A world that is intimidated by science and bored sick with cynical, empty “postmodernism” desperately needs a new subjectivist, humanist, individualist worldview. We need science and scholarship and art and spiritual life to be fully human. The last three are withering, and almost no one understands the first.
The Kurzweil Cult is attractive enough to require opposition in a positive sense; alternative futures must be clear. The cults that oppose Kurzweilism are called Judaism and Christianity. But they must and will evolve to meet new dangers in new worlds. The central text of Judeo-Christian religions in the tech-threatened, Googleplectic West of the 21st century might well be Deuteronomy 30:19: “I summon today as your witnesses the heavens and the earth: I have laid life and death before you, the blessing and the curse; choose life and live, you and your children!”
The sanctity of life is what we must affirm against Kurzweilism and the nightmare of roboticism. Judaism has always preferred the celebration and sanctification of this life in this world to eschatological promises. My guess is that 21st-century Christian thought will move back toward its father and become increasingly Judaized, less focused on death and the afterlife and more on life here today (although my Christian friends will dislike my saying so). Both religions will teach, as they always have, the love of man for man—and that, over his lifetime (as Wordsworth writes at the very end of his masterpiece, The Prelude), “the mind of man becomes/A thousand times more beautiful than the earth/On which he dwells.”
At first, roboticism was just an intellectual school. Today it is a social disease. Some young people want to be robots (I’m serious); they eagerly await electronic chips to be implanted in their brains so they will be smarter and better informed than anyone else (except for all their friends who have had the same chips implanted). Or they want to see the world through computer glasses that superimpose messages on poor naked nature. They are terrorist hostages in love with the terrorists.
All our striving for what is good and just and beautiful and sacred, for what gives meaning to human life and makes us (as Scripture says) “just a little lower than the angels,” and a little better than rats and cats, is invisible to the roboticist worldview. In the roboticist future, we will become what we believe ourselves to be: dogs with iPhones. The world needs a new subjectivist humanism now—not just scattered protests but a growing movement, a cry from the heart.
Their coming-and-going polka—now you see ’im, now you don’t—consumed the first 10 days of March. One week Cohn was in the driver’s seat of U.S. economic policy, steering his boss into a comprehensive overhaul of the tax code and preparing him for a huge disgorgement of taxpayer money to repair some nebulous entity called “our crumbling infrastructure.” The next week Cohn had disappeared and in his place at the president’s side Navarro suddenly materialized. With Navarro’s encouragement, the president unexpectedly announced hefty, world-wobbling tariffs on steel and aluminum imports. At first the financial markets tumbled, and nobody in Washington, including the president’s friends, seemed happy. Nobody, that is, except Navarro, whose Cheshire-cat grin quickly became unavoidable on the alphabet-soup channels of cable news. It’s the perfect place for him, front and center, trying to disentangle the conflicting strands of the president’s economic policy. Far more than Cohn, the president’s newest and most powerful economic adviser is a suitable poster boy for Trumpism, whatever that might be.
So where, the capital wondered, did this Navarro fellow come from? (The question So where did this Cohn guy go? barely lasted a news cycle.) Insiders and political obsessives dimly remembered Navarro from Trump’s presidential campaign. With Wilbur Ross, now the secretary of commerce, Navarro wrote the most articulate brief for the Trump economic plan in the months before the election, which by my reckoning occurred roughly 277 years ago. (Ross is also Navarro’s co-conspirator in pushing the steel tariffs. They’re an Odd Couple indeed: Navarro is well-coiffed and tidy and as smooth as a California anchorman, while Ross is what Barney Fife might have looked like if he’d given up his job as Mayberry’s deputy sheriff and gotten a degree in mortuary science.) The Navarro-Ross paper drew predictable skepticism from mainstream economists and their proxies in the press, particularly its eye-popping claim that Trump’s “trade policy reforms” would generate an additional $1.7 trillion in government revenue over the next 10 years.
Navarro is nominally a professor at the University of California, Irvine. His ideological pedigree, like the president’s, is that of a mongrel. After a decade securing tenure by writing academic papers (“A Critical Comparison of Utility-type Ratemaking Methodologies in Oil Pipeline Regulation”), he turned his attention to politics. In the 1990s, he earned the distinction of losing four political races in six years, all in San Diego or its surrounding suburbs—one for mayor, another for county supervisor, another for city council. He was a Democrat in those days, as Trump was; he campaigned against sprawl and for heavy environmental regulation. In 1996, he ran for Congress as “The Democrat Newt Gingrich Fears Most.” The TV actor Ed Asner filmed a commercial for him. This proved less helpful than hoped when his Republican opponent reminded voters that a few years earlier, Asner had been a chief fundraiser for the Communist guerrillas in El Salvador.
After that defeat, Navarro got the message and retired from politics. He returned to teaching, became an off-and-on-again Republican, and set about writing financial potboilers, mostly on investment strategies for a world increasingly unreceptive to American leadership. One of them, Death by China (2011), purported to describe the slow but inexorable sapping of American wealth and spirit through Chinese devilry. As it happened, this was Donald Trump’s favorite theme as well. From the beginning of his 40-year public career, Trump has stuck to his insistence that someone, in geo-economic terms, is bullying this great country of his. The identity of the bully has varied over time: In the 1980s, it was the Soviets who, following their cataclysmic implosion, gave way to Japan, which was replaced, after its own economic collapse, by America’s neighbors to the north and south, who have been joined, since the end of the last decade, by China. In Death by China, the man, the moment, and the message came together with perfect timing. Trump loved it.
It’s not clear that he read it, however. Trump is a visual learner, as the educational theorists used to say. He will retain more from Fox and Friends as he constructs his hair in the morning than from a half day buried in a stack of white papers from the Department of Labor. When Navarro decided to make a movie of the book, directed by himself, Trump attended a screening and lustily endorsed it. You can see why. Navarro’s use of animation is spare but compelling; the most vivid image shows a dagger of Asiatic design plunging (up to the hilt and beyond!) into the heart of a two-dimensional map of the U.S., causing the country’s blood to spray wildly across the screen, then seep in rivulets around the world. It’s Wes Cravenomics.
Most of the movie, however, is taken up by talking heads. Nearly every one of these heads is attached to a left-wing Democrat, a socialist, or, in a couple of instances, an anarchist from the Occupy movement. Watched today, Death by China is a reminder of how lonely—how marginal—the anti-China obsession has been. This is not to its discredit; yesterday’s fringe often becomes today’s mainstream, just as today’s consensus is often disproved by the events of tomorrow. Not so long ago, for instance, the establishment catechism declared that economic liberalization and the prosperity it created led inexorably to political liberalization; from free markets, we were told, came free societies. In the last generation, China has put this fantasy to rest. Only the willfully ignorant would deny that the behavior of the Chinese government, at home and abroad, is the work of swine. Even so, the past three presidents have seen China only as a subject for scolding, never retaliation.
And this brings us to another mystery of Trumpism, as Navarro embodies it. Retaliation against China and its bullying trade practices is exactly what Trump has promised as both candidate and president. More than a year into his presidency, with his tariffs on steel and aluminum, he has struck against the bullies at last, just as he vowed to do. And the bullies, we discover, are mostly our friends—Germans, Brazilians, South Koreans, and other partners who sell us their aluminum and steel for less than we can make it ourselves. Accounting for 2 percent of U.S. steel imports, the Chinese are barely scratched in the president’s first great foray in protectionism.
In announcing the tariffs, Trump cited Chinese “dumping,” as if out of habit. Yet Navarro himself seems at a loss to explain why he and his boss have chosen to go after our friends instead of our preeminent adversary in world trade. “China is in many ways the root of the problem for all countries of the world in aluminum and steel,” he told CNN the day after the tariffs were announced. Really? How’s that? “The bigger picture is, China has tremendous overcapacity in both aluminum and steel. So what they do is, they flood the world market, and this trickles down to our shores, and to other countries.”
If that wasn’t confusing enough, we had only to wait three days. By then Navarro was telling other interviewers, “This has nothing to do with China, directly or indirectly.”
This is not the first time Trumpism has shown signs of incoherence. With Peter Navarro at the president’s side, and with Gary Cohn a fading memory, it is unlikely to be the last.
Review of 'Political Tribes' by Amy Chua
Amy Chua has an explanation for what ails us at home and abroad: Elites keep ignoring the primacy of tribalism both in the United States and elsewhere and so are blindsided every time people act in accordance with their group instinct. In Political Tribes, she offers a survey of tribal dynamics around the globe and renders judgments about the ways in which the United States has serially misread us-and-them conflicts. In the book’s final chapters, Chua, a Yale University law professor best known for her parenting polemic Battle Hymn of the Tiger Mother, focuses on the clashing group instincts that now threaten to sunder the American body politic.
As Chua sees it, “our blindness to political tribalism abroad reflects America at both its best and worst.” Because the United States is a nation made up of diverse immigrant populations—a “supergroup”—Americans can sometimes underestimate how hard it is for people in other countries to set aside their religious or ethnic ties and find common national purpose. That’s American ignorance in its most optimistic and benevolent form. But then there’s the more noxious variety: “In some cases, like Vietnam,” she writes, “ethnically blind racism has been part of our obliviousness.”
During the Vietnam War, Chua notes, the United States failed to distinguish between the ethnically homogeneous Vietnamese majority and the Chinese minority who were targets of mass resentment. In Vietnam, national identity was built largely on historical accounts of the courageous heroes who had been repelling Chinese invaders since 111 b.c.e., when China first conquered its neighbor to the south. This defining antipathy toward the Chinese was exacerbated by the fact that Vietnam’s Chinese minority was on average far wealthier and more politically powerful than the ethnic Vietnamese masses. “Yet astonishingly,” writes Chua, “U.S. foreign policy makers during the Cold War were so oblivious to Vietnamese history that they thought Vietnam was China’s pawn—merely ‘a stalking horse for Beijing in Southeast Asia.’”
Throughout the book, Chua captures tribal conflicts in clear and engrossing prose. But one gets the sense that, as a guide to foreign policy, her emphasis on tribal ties might not be able to do all the work she expects of it. The first hint comes in her Vietnam analysis. If American ignorance of Chinese–Vietnamese tensions is to blame for our having fought and lost the war, what would a better understanding of such things have yielded? She gets to that, sort of. “Could we have supported Ho [Chi Minh] against the French, capitalizing on Vietnam’s historical hostility toward China to keep the Vietnamese within our sphere of influence?” Chua asks. “We’ll never know. Somehow we never saw or took seriously the enmity between Vietnam and China.” It’s hard to see the U.S.’s backing a mass-murdering Communist against a putatively democratic ally as anything but a surreal thought experiment, let alone a lost opportunity.
On Afghanistan, Chua is correct about a number of things. There are indeed long-simmering tensions between Pashtuns, Punjabs, and other tribes in the region. The U.S. did pay insufficient attention to Afghanistan in the decade leading up to 9/11. The Taliban did play on Pashtun aspirations to fuel their rise. But how, exactly, are we to understand our failures in Afghanistan as resulting from ignorance of tribal relations? The Taliban went on to forge a protective agreement with al-Qaeda that had little if anything to do with tribal ties. And it was that relationship that had tragic consequences for the United States.
Not only was Osama bin Laden not Pashtun; he was an Arab millionaire, and his terrorist organization was made up of jihadists from all around the world. If anything, it was Bin Laden’s trans-tribal movement that the U.S. should have been focused on. The Taliban-al-Qaeda alliance was based on pooling resources against perceived common threats, compatible (but not identical) religious notions, and large cash payments from Bin Laden. No American understanding of tribal relations could have interfered with that.
And while an ambitious tribe-savvy counterinsurgency strategy might have gone a long way in helping the U.S.’s war effort, there has never been broad public support for such a commitment. Ultimately, our problems in Afghanistan have less to do with neglecting tribal politics and more to do with general neglect.
In Chua’s chapter on the Iraq War, however, her paradigm aligns more closely with the facts. “Could we have done better if we hadn’t been so blind to tribal politics in Iraq?” she asks. “There’s very good evidence that the answer is yes.” Here Chua offers a concise account of the U.S.’s successful 2007 troop surge. “While the additional U.S. soldiers—sent primarily to Baghdad and Al Anbar Province—were of course a critical factor,” she writes, “the surge succeeded only because it was accompanied by a 180-degree shift in our approach to the local population.”
Chua goes into colorful detail about then colonel H.R. McMaster’s efforts to educate American troops in local Iraqi customs and his decision to position them among the local population in Tal Afar. This won the trust of Iraqis who were forthcoming with critical intelligence. She also covers the work of Col. Sean MacFarland who forged relationships with Sunni sheikhs. Those sheikhs, in turn, convinced their tribespeople to work with U.S. forces and function as a local police force. Finally, Chua explains how Gen. David Petraeus combined the work of McMaster and MacFarland and achieved the miraculous in pacifying Baghdad. In spite of U.S. gains—and the successful navigation of tribes—there was little American popular will to keep Iraq on course and, over the next few years, the country inevitably unraveled.
In writing about life in the United States, Chua is on firmer ground altogether, and her diagnostic powers are impressive. “It turns out that in America, there’s a chasm between the tribal identities of the country’s haves and have-nots,” she writes, “a chasm of the same kind wreaking political havoc in many developing and non-Western countries.” In the U.S., however, there’s a crucial difference to this dynamic, and Chua puts her finger right on it: “In America, it’s the progressive elites who have taken it upon themselves to expose the American Dream as false. This is their form of tribalism.”
She backs up this contention with statistics. Some of the most interesting revelations have to do with the Occupy movement. In actual fact, those who gathered in cities across the country to protest systemic inequality in 2012 were “disproportionately affluent.” Indeed, “more than half had incomes of $75,000 or more.” Occupy faded away, as she notes, because it “attracted so few members from the many disadvantaged groups it purported to be fighting for.” Chua puts things in perspective: “Imagine if the suffragette movement hadn’t included large numbers of women, or if the civil-rights movement included very few African Americans, or if the gay-rights movement included very few gays.” America’s poorer classes, for their part, are “deeply patriotic, even if they feel they’re losing the country to distant elites who know nothing about them.”
Chua is perceptive on both the inhabitants of Trump Country and the elites who disdain them. She takes American attitudes toward professional wrestling as emblematic of the split between those who support Donald Trump and those who detest him. Trump is a bona fide hero in the world of pro wrestling; he has participated in “bouts” and was actually inducted into the WWE Hall of Fame in 2013. What WWE fans get from watching wrestling they also get from watching Trump—“showmanship and symbols,” a world held together by enticing false storylines, and, ultimately, “something playfully spectacular.” Those on the academic left, on the other hand, “are fascinated, even obsessed in a horrified way, with the ‘phenomenology’ of watching professional wrestling.” In the book’s most arresting line, Chua writes that “there is now so little interaction, commonality, and intermarriage between rural/heartland/working-class whites and urban/coastal whites that the difference between them is practically what social scientists would consider an ‘ethnic difference.’”
Of course, there’s much today dividing America along racial lines as well. While Americans of color still contend with the legacy of institutional intolerance, “it is simply a fact that ‘diversity’ policies at the most select American universities and in some sectors of the economy have had a disparate adverse impact on whites.” So, both blacks and whites (and most everyone else) feel threatened to some degree. This has sharpened the edge of identity politics on the left and right. In Chua’s reading, these tribal differences will not actually break the country apart. But, she believes, they could fundamentally and irreversibly change “who we are.”
Political Tribes, however, is no doomsday prediction. Despite our clannish resentments, Chua sees, in her daily interactions, people’s willingness to form bonds beyond those of their in-group and a relaxing of tribal ties. What’s needed is for haves and have-nots, whites and blacks, liberals and conservatives to enjoy more meaningful exposure to one another. This pat prescription would come across as criminally sappy if not for the genuinely loving and patriotic way in which Chua writes about our responsibilities as a “supergroup.” “It’s not enough that we view one another as fellow human beings,” she says, “we need to view one another as fellow Americans.” Americans as a higher ontological category than human beings—there’s poetry in that. And a healthy bit of tribalism, too.
Then again, you know what happens when you assume.
“Here is my prediction,” Kristof wrote. “The new paramount leader, Xi Jinping, will spearhead a resurgence of economic reform, and probably some political easing as well. Mao’s body will be hauled out of Tiananmen Square on his watch, and Liu Xiaobo, the Nobel Peace Prize–winning writer, will be released from prison.”
True, Kristof conceded, “I may be wrong entirely.” But, he went on, “my hunch on this return to China, my old home, is that change is coming.”
Five years later, the Chinese economy, while large, is saddled with debt. Analysts and government officials are worried about its real-estate bubble. Despite harsh controls, capital continues to flee China. Nor has there been “some political easing.” On the contrary, repression has worsened. The Great Firewall blocks freedom of speech and inquiry, human-rights advocates are jailed, and the provinces resemble surveillance states out of a Philip K. Dick novel. Mao rests comfortably in his mausoleum. Not only did Liu Xiaobo remain a prisoner, he was also denied medical treatment when he contracted cancer, and he died in captivity in 2017.
As for Xi Jinping, he turned out not to be a reformer but a dictator. Steadily, under the guise of anti-corruption campaigns, Xi decimated alternative centers of power within the Communist Party. He built up a cult of personality around “Xi Jinping thought” and his “Chinese dream” of economic, cultural, and military strength. His preeminence was highlighted in October 2017 when the Politburo declined to name his successor. Then, in March of this year, the Chinese abolished the term limits that have guaranteed rotation in office since the death of Mao. Xi reigns supreme.
Bizarrely, this latest development seems to have come as a surprise to the American press. The headline of Emily Rauhala’s Washington Post article read: “China proposes removal of two-term limit, potentially paving way for President Xi Jinping to stay on.” Potentially? Xi’s accession to emperor-like status, wrote Julie Bogen of Vox, “could destabilize decades of progress toward democracy and instead move China even further toward authoritarianism.” Could? Bogen did not specify which “decades of progress toward democracy” she was talking about, but that is probably because, since 1989, there haven’t been any.
Xi’s assumption of dictatorial powers should not have shocked anyone who has paid the slightest bit of attention to recent Chinese history. The Chinese government, until last month a collective dictatorship, has exercised despotic control over its people since the very founding of the state in 1949. And yet the insatiable desire among media to incorporate news events into a preestablished storyline led reporters to cover the party announcement as a sudden reversal. Why? Because only then would the latest decision of an increasingly embattled and belligerent Chinese leadership fit into the prefabricated narrative that says we are living in an authoritarian moment.
For example, one article in the February 26, 2018, New York Times was headlined, “With Xi’s Power Grab, China Joins New Era of Strongmen.” CNN’s James Griffiths wrote, “While Chinese politics is not remotely democratic in the traditional sense, there are certain checks and balances within the Party system itself, with reformers and conservatives seeing their power and influence waxing and waning over time.” Checks and balances, reformers and conservatives—why, they are just like us, only within the context of a one-party state that ruthlessly brooks no dissent.
Now, we do happen to live in an era when democracy and autocracy are at odds. But China is not joining the “authoritarian trend.” It helped create and promote the trend. Next year, China’s “era of strongmen” will enter its seventh decade. The fundamental nature of the Communist regime in Beijing has not changed during this time.
My suspicion is that journalists were taken aback by Xi’s revelation of his true nature because they, like most Western elites, have bought into the myth of China’s “peaceful rise.” For decades, Americans have been told that China’s economic development and participation in international organizations and markets would lead inevitably to its political liberalization. What James Mann calls “the China fantasy” manifested itself in the leadership of both major political parties and in the pronouncements of the chattering class across the ideological spectrum.
Indeed, not only was the soothing scenario of China as a “responsible stakeholder” on the glide path to democracy widespread, but media figures also admonished Americans for not living up to Chinese standards. “One-party autocracy certainly has its drawbacks,” Tom Friedman conceded in an infamous 2009 column. “But when it is led by a reasonably enlightened group of people, as China is today, it can also have great advantages.” For instance, Friedman went on, “it is not an accident that China is committed to overtaking us in electric cars, solar power, energy efficiency, batteries, nuclear power, and wind power.” The following year, during an episode of Meet the Press, Friedman admitted, “I have fantasized—don’t get me wrong—but what if we could just be China for a day?” Just think of all the electric cars the government could force us to buy.
This attitude toward Chinese Communism as a public-policy exemplar became still more pronounced after Donald Trump was elected president on an “America First” agenda. China’s theft of intellectual property, industrial espionage, harassment and exploitation of Western companies, currency manipulation, mercantilist subsidies and tariffs, chronic pollution, military buildup, and interference in democratic politics and university life did not prevent it from proclaiming itself the defender of globalization and environmentalism.
When Xi visited the Davos World Economic Forum last year, the Economist noted the “fawning reception” that greeted him. The speech he delivered, pledging to uphold the international order that had facilitated his nation’s rise as well as his own, received excellent reviews. On January 15, 2017, Fareed Zakaria said, “In an America-first world, China is filling the vacuum.” A few days later, Charlie Rose told his CBS audience, “It’s almost like China is saying, ‘we are the champions of globalization, not the United States.’” And on January 30, 2017, the New York Times quoted a “Berlin-based private equity fund manager,” who said, “We heard a Chinese president becoming leader of the free world.”
The chorus of praise for China grew louder last spring when Trump announced American withdrawal from an international climate accord. In April 2017, Rick Stengel said on cable television that China is becoming “the global leader on the environment.” On June 8, a CBS reporter said that Xi is “now viewed as the world’s leader on climate change.” On June 19, 2017, on Bloomberg news, Dana Hull said, “China is the leader on climate change, especially when it comes to autos.” Also that month, one NBC anchor asked Senator Mike Lee of Utah, “Are you concerned at all that China may be seen as sort of the global leader when it comes to bringing countries together, more so than the United States?”
Last I checked, Xi Jinping’s China has not excelled at “bringing countries together,” unless—like Australia, Japan, South Korea, and Vietnam—those countries are allying with the United States to balance against China. What instead should concern Senator Lee, and all of us, is an American media filled with people suckered by foreign propaganda that happens to coincide with their political preferences, and who are unable to make elementary distinctions between tyrannical governments and consensual ones.
Marx didn’t supplant old ideas about money and commerce; he intensified them
From the time of antiquity until the Enlightenment, trade and the pursuit of wealth were considered sinful. “In the city that is most finely governed,” Aristotle wrote, “the citizens should not live a vulgar or a merchant’s way of life, for this sort of way of life is ignoble and contrary to virtue.” In Plato’s vision of an ideal society (the Republic) the ruling “guardians” would own no property to avoid tearing “the city in pieces by differing about ‘mine’ and ‘not mine.’” He added that “all that relates to retail trade, and merchandise, and the keeping of taverns, is denounced and numbered among dishonourable things.” Only noncitizens would be allowed to indulge in commerce. A citizen who defies the natural order and becomes a merchant should be thrown in jail for “shaming his family.”
At his website humanprogress.org, Marian L. Tupy quotes D.C. Earl of the University of Leeds, who wrote that in Ancient Rome, “all trade was stigmatized as undignified … the word mercator [merchant] appears as almost a term of abuse.” Cicero noted in the first century B.C.E. that retail commerce is sordidus (vile) because merchants “would not make any profit unless they lied constantly.”
Early Christianity expanded this point of view. Jesus himself was clearly hostile to the pursuit of riches. “For where your treasure is,” he proclaimed in his Sermon on the Mount, “there will your heart be also.” And of course he insisted that “it is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God.”
The Catholic Church incorporated this view into its teachings for centuries, holding that economics was zero-sum. “The Fathers of the Church adhered to the classical assumption that since the material wealth of humanity was more or less fixed, the gain of some could only come at a loss to others,” the economic historian Jerry Muller explains in his book The Mind and the Market: Capitalism in Western Thought. As St. Augustine put it, “Si unus non perdit, alter non acquirit”—“If one does not lose, the other does not gain.”
The most evil form of wealth accumulation was the use of money to make money—usury. Lending money at interest was unnatural, in this view, and therefore invidious. “While expertise in exchange is justly blamed since it is not according to nature but involves taking from others,” Aristotle insisted, “usury is most reasonably hated because one’s possessions derive from money itself and not from that for which it was supplied.” In the Christian tradition, the only noble labor was physical labor, and so earning wealth from the manipulation of money was seen as inherently ignoble.
In the somewhat more prosperous and market-driven medieval period, Thomas Aquinas helped make private property and commerce more acceptable, but he did not fundamentally break with the Aristotelian view that trade was suspect and the pursuit of wealth was sinful. The merchant’s life was in conflict with the teachings of Christianity if it led to pride or avarice. “Echoing Aristotle,” Muller writes, “Aquinas reasserted that justice in the distribution of material goods was fulfilled when someone received in proportion to his status, office, and function within the institutions of an existing, structured community. Hence Aquinas decried as covetousness the accumulation of wealth to improve one’s place in the social order.”
In the medieval mind, Jews were seen as a kind of stand-in for mercantile and usurious sinfulness. Living outside the Christian community, but within the borders of Christendom, they were allowed to commit the sin of usury on the grounds that their souls were already forfeit. Pope Nicholas V insisted that it is much better that “this people should perpetrate usury than that Christians should engage in it with one another.”2 The Jews were used as a commercial caste the way the untouchables of India were used as a sanitation caste. As Montesquieu would later observe in the 18th century, “whenever one prohibits a thing that is naturally permitted or necessary, the people who engage in it are regarded as dishonest.” Thus, as Muller has argued, anti-Semitism has its roots in a kind of primitive anti-capitalism.
Early Protestantism did not reject these views. It amplified them.3 Martin Luther despised commerce. “There is on earth no greater enemy of man, after the Devil, than a gripe-money and usurer, for he wants to be God over all men…. Usury is a great, huge monster, like a werewolf …. And since we break on the wheel and behead highwaymen, murderers, and housebreakers, how much more ought we to break on the wheel and kill … hunt down, curse, and behead all usurers!”4
It should therefore come as no surprise that Luther’s views of Jews, the living manifestation of usury in the medieval mind, were just as immoderate. In his 1543 treatise On the Jews and Their Lies, he offers a seven-point plan on how to deal with them:
- “First, to set fire to their synagogues or schools…. This is to be done in honor of our Lord and of Christendom, so that God might see that we are Christians …”
- “Second, I advise that their houses also be razed and destroyed.”
- “Third, I advise that all their prayer books and Talmudic writings, in which such idolatry, lies, cursing, and blasphemy are taught, be taken from them.”
- “Fourth, I advise that their rabbis be forbidden to teach henceforth on pain of loss of life and limb… ”
- “Fifth, I advise that safe-conduct on the highways be abolished completely for the Jews. For they have no business in the countryside … ”
- “Sixth, I advise that usury be prohibited to them, and that all cash and treasure of silver and gold be taken from them … ”
- “Seventh, I recommend putting a flail, an ax, a hoe, a spade, a distaff, or a spindle into the hands of young, strong Jews and Jewesses and letting them earn their bread in the sweat of their brow.… But if we are afraid that they might harm us or our wives, children, servants, cattle, etc., … then let us emulate the common sense of other nations such as France, Spain, Bohemia, etc., … then eject them forever from the country … ”
Luther agitated against the Jews throughout Europe, condemning local officials for insufficient anti-Semitism (a word that did not exist at the time and a sentiment that was not necessarily linked to more modern biological racism). His demonization of the Jews was derived from more than anti-capitalism. But his belief that the Jewish spirit of commerce was corrupting of Christianity was nonetheless central to his indictment. He sermonized again and again that it must be cleansed from Christendom, either through conversion, annihilation, or expulsion.
Three centuries later, Karl Marx would blend these ideas together in a noxious stew.
The idea at the center of virtually all of Marx’s economic writing is the labor theory of value. It holds that all of the value of any product can be determined by the number of hours it took for a laborer or laborers to produce it. From the viewpoint of conventional economics—and elementary logic—this is ludicrous. For example, ingenuity, which may not be time-consuming, is nonetheless a major source of value. Surely it cannot be true that someone who works intelligently, and therefore efficiently, provides less value than someone who works stupidly and slowly. (Marx anticipates some of these kinds of critiques with a lot of verbiage about the costs of training and skills.) But the more relevant point is simply this: The determinant of value in an economic sense is not the labor that went into a product but the price the consumer is willing to pay for it. Whether it took an hour or a week to build a mousetrap, the value of the two products is the same to the consumer if the quality is the same.
Marx had philosophical, metaphysical, and tactical reasons for holding fast to the labor theory of value. It was essential to his argument that capitalism—or what we would now call “commerce” plain and simple—was exploitative by its very nature. In Marx, the term “exploitation” takes a number of forms. It is not merely evocative of child laborers working in horrid conditions; it covers virtually all profits. If all value is captured by labor, any “surplus value” collected by the owners of capital is by definition exploitative. The businessman who risks his own money to build and staff an innovative factory is not adding value; rather, he is subtracting value from the workers. Indeed, the money he used to buy the land and the materials is really just “dead labor.” For Marx, there was an essentially fixed amount of “labor-power” in society, and extracting profit from it was akin to strip-mining a natural resource. Slavery and wage-labor were different forms of the same exploitation because both involved extracting the common resource. In fact, while Marx despised slavery, he thought wage-labor was only a tiny improvement because wage-labor reduced costs for capitalists in that they were not required to feed or clothe wage laborers.
Because Marx preached revolution, we are inclined to consider him a revolutionary. He was not. None of this was a radical step forward in economic or political thinking. It was, rather, a reaffirmation of the disdain for commerce that started with Plato and Aristotle and found new footing in Christianity. As Jerry Muller (to whom I am obviously very indebted) writes:
To a degree rarely appreciated, [Marx] merely recast the traditional Christian stigmatization of moneymaking into a new vocabulary and reiterated the ancient suspicion against those who used money to make money. In his concept of capitalism as “exploitation” Marx returned to the very old idea that money is fundamentally unproductive, that only those who live by the sweat of their brow truly produce, and that therefore not only interest, but profit itself, is always ill-gotten.
In his book Karl Marx: A Nineteenth-Century Life, Jonathan Sperber suggests that “Marx is more usefully understood as a backward-looking figure, who took the circumstances of the first half of the nineteenth century and projected them into the future, than as a surefooted and foresighted interpreter of historical trends.”5
Marx was a classic bohemian who resented the fact that he spent his whole life living off the generosity of, first, his parents and then his collaborator Friedrich Engels. He loathed the way “the system” required selling out to the demands of the market and a career. The frustrated poet turned to the embryonic language of social science to express his angry barbaric yawp at The Man. “His critique of the stultifying effects of labor in a capitalist society,” Muller writes, “is a direct continuation of the Romantic conception of the self and its place in society.”
In other words, Marx was a romantic, not a scientist. Romanticism emerged as a rebellion against the Enlightenment, taking many forms—from romantic poetry to romantic nationalism. But central to all its forms was the belief that modern, commercial, rational life is inauthentic and alienating, and cuts us off from our true natures.
As Rousseau, widely seen as the first romantic, explained in his Discourse on the Moral Effects of the Arts and Sciences, modernity—specifically the culture of commerce and science—was oppressive. The baubles of the Enlightenment were mere “garlands of flowers” that concealed “the chains which weigh [men] down” and led people to “love their own slavery.”
This is a better context for understanding Marx’s and Engels’s hatred of the division of labor and the division of rights and duties. Their baseline assumption, like Rousseau’s, is that primitive man lived a freer and more authentic life before the rise of private property and capitalism. “Within the tribe there is as yet no difference between rights and duties,” Engels writes in Origins of the Family, Private Property, and the State. “The question whether participation in public affairs, in blood revenge or atonement, is a right or a duty, does not exist for the Indian; it would seem to him just as absurd as the question whether it was a right or a duty to sleep, eat, or hunt. A division of the tribe or of the gens into different classes was equally impossible.”
For Marx, then, the Jew might as well be the real culprit who told Eve to bite the apple. For the triumph of the Jew and the triumph of money led to the alienation of man. And in truth, the term “alienation” is little more than modern-sounding shorthand for exile from Eden. The division of labor encourages individuality, alienates us from the collective, fosters specialization and egoism, and dethrones the sanctity of the tribe. “Money is the jealous god of Israel, in face of which no other god may exist,” Marx writes. “Money degrades all the gods of man—and turns them into commodities. Money is the universal self-established value of all things. It has, therefore, robbed the whole world—both the world of men and nature—of its specific value. Money is the estranged essence of man’s work and man’s existence, and this alien essence dominates him, and he worships it.”
Marx’s muse was not analytical reason, but resentment. That is what fueled his false consciousness. To understand this fully, we should look at how that most ancient and eternal resentment—Jew-hatred—informed his worldview.
The atheist son of a Jewish convert to Lutheranism and the grandson of a rabbi, Karl Marx hated capitalism in no small part because he hated Jews. According to Marx and Engels, Jewish values placed the acquisition of money above everything else. Marx writes in his infamous essay “On the Jewish Question”:
Let us consider the actual, worldly Jew—not the Sabbath Jew … but the everyday Jew.
Let us not look for the secret of the Jew in his religion, but let us look for the secret of his religion in the real Jew.
What is the secular basis of Judaism? Practical need, self-interest. What is the worldly religion of the Jew? Huckstering. What is his worldly God? Money. [Emphasis in original]
The spread of capitalism, therefore, represented a kind of conquest for Jewish values. The Jew—at least the one who set up shop in Marx’s head—makes his money from money. He adds no value. Worse, the Jews considered themselves to be outside the organic social order, Marx complained, but then again that is what capitalism encourages—individual independence from the body politic and the selfish (in Marx’s mind) pursuit of individual success or happiness. For Marx, individualism was a kind of heresy because it meant violating the sacred bond of the community. Private property empowered individuals to live as individuals “without regard to other men,” as Marx put it.
This is the essence of Marx’s view of alienation. Marx believed that people were free, creative beings but were chained to their role as laborers in the industrial machine. The division of labor inherent to capitalist society was alienating and inauthentic, pulling us out of the communitarian natural General Will. The Jew was both an emblem of this alienation and a primary author of it:
The Jew has emancipated himself in a Jewish manner, not only because he has acquired financial power, but also because, through him and also apart from him, money has become a world power and the practical Jewish spirit has become the practical spirit of the Christian nations. The Jews have emancipated themselves insofar as the Christians have become Jews. [Emphasis in original]
He adds, “The god of the Jews has become secularized and has become the god of the world. The bill of exchange is the real god of the Jew. His god is only an illusory bill of exchange.” And he concludes: “In the final analysis, the emancipation of the Jews is the emancipation of mankind from Judaism.” [Emphasis in original]
In The Holy Family, written with Engels, he argues that the most pressing imperative is to transcend “the Jewishness of bourgeois society, the inhumanity of present existence, which finds its highest embodiment in the system of money.” [Emphasis in original]
In his “Theories of Surplus Value,” he praises Luther’s indictment of usury. Luther “has really caught the character of old-fashioned usury, and that of capital as a whole.” Marx and Engels insist that the capitalist ruling classes, whether or not they claim to be Jewish, are nonetheless Jewish in spirit. “In their description of the confrontation of capital and labor, Marx and Engels resurrected the traditional critique of usury,” Muller observes. Or, as Deirdre McCloskey notes, “the history that Marx thought he perceived went with his erroneous logic that capitalism—drawing on an anticommercial theme as old as commerce—just is the same thing as greed.”6 Paul Johnson is pithier: Marx’s “explanation of what was wrong with the world was a combination of student-café anti-Semitism and Rousseau.”7
For Marx, capital and the Jew are different faces of the same monster: “The capitalist knows that all commodities—however shabby they may look or bad they may smell—are in faith and in fact money, internally circumcised Jews, and in addition magical means by which to make more money out of money.”
Marx’s writing, particularly on surplus value, is drenched with references to capital as parasitic and vampiric: “Capital is dead labor which, vampire-like, lives only by sucking living labor, and lives the more, the more labor it sucks. The time during which the worker works is the time during which the capitalist consumes the labor-power he has bought from him.” The constant allusions to the eternal wickedness of the Jew combined with his constant references to blood make it hard to avoid concluding that Marx had simply updated the blood libel and applied it to his own atheistic doctrine. His writing is replete with references to the “bloodsucking” nature of capitalism. He likens both Jews and capitalists (the same thing in his mind) to life-draining exploiters of the proletariat.
Marx writes how the extension of the workday into the night “only slightly quenches the vampire thirst for the living blood of labor,” resulting in the fact that “the vampire will not let go ‘while there remains a single muscle, sinew or drop of blood to be exploited.’” As Mark Neocleous of Brunel University documents in his brilliant essay, “The Political Economy of the Dead: Marx’s Vampires,” the images of blood and bloodsucking capital in Das Kapital are even more prominent motifs: “Capital ‘sucks up the worker’s value-creating power’ and is dripping with blood. Lacemaking institutions exploiting children are described as ‘blood-sucking,’ while U.S. capital is said to be financed by the ‘capitalized blood of children.’ The appropriation of labor is described as the ‘life-blood of capitalism,’ while the state is said to have here and there interposed itself ‘as a barrier to the transformation of children’s blood into capital.’”
Marx’s vision of exploitative, Jewish, bloodsucking capital was an expression of romantic superstition and tribal hatred. Borrowing from the medieval tradition of both Catholics as well as Luther himself, not to mention a certain folkloric poetic tradition, Marx invented a modern-sounding “scientific” theory that was in fact reactionary in every sense of the word. “If Marx’s vision was forward-looking, its premises were curiously archaic,” Muller writes. “As in the civic republican and Christian traditions, self-interest is the enemy of social cohesion and of morality. In that sense, Marx’s thought is a reversion to the time before Hegel, Smith, or Voltaire.”
In fairness to Marx, he does not claim that he wants to return to a feudal society marked by inherited social status and aristocracy. He is more reactionary than that. The Marxist final fantasy holds that at the end of history, when the state “withers away,” man is liberated from all exploitation and returns to the tribal state in which there is no division of labor, no dichotomy of rights and duties.
Marx’s “social science” was swept into history’s dustbin long ago. What endured was the romantic appeal of Marxism, because that appeal speaks to our tribal minds in ways we struggle to recognize, even though it never stops whispering in our ears.
It is an old conservative habit—one I’ve been guilty of myself—of looking around society and politics, finding things we don’t like or disagree with, and then running through an old trunk of Marxist bric-a-brac to spruce up our objections. It is undeniably true that the influence of Marx, particularly in the academy, remains staggering. Moreover, his indirect influence is as hard to measure as it is extensive. How many novels, plays, and movies have been shaped by Marx or informed by people shaped by Marx? It’s unknowable.
And yet, this is overdone. The truth is that Marx’s ideas were sticky for several reasons. First, they conformed to older, traditional ways of seeing the world—far more than Marxist zealots have ever realized. The idea that there are malevolent forces above and around us, manipulating our lives and exploiting the fruits of our labors, was hardly invented by him. In a sense, it wasn’t invented by anybody. Conspiracy theories are as old as mankind, stretching back to prehistory.
There’s ample reason—with ample research to back it up—to believe that there is a natural and universal human appetite for conspiracy theories. It is a by-product of our adapted ability to detect patterns, particularly patterns that may help us anticipate a threat—and, as Mark van Vugt has written, “the biggest threat facing humans throughout history has been other people, particularly when they teamed up against you.”8
To a very large extent, this is what Marxism is—an extravagant conspiracy theory in which the ruling classes, the industrialists, and/or the Jews arrange affairs for their own benefit and against the interests of the masses. Marx himself was an avid conspiracy theorist, as so many brilliant bohemian misfits tend to be, believing that the English deliberately orchestrated the Irish potato famine to “carry out the agricultural revolution and to thin the population of Ireland down to the proportion satisfactory to the landlords.” He even argued that the Crimean War was a kind of false-flag operation to hide the true nature of Russian-English collusion.
Contemporary political figures on the left and the right routinely employ the language of exploitation and conspiracy. They do so not because they’ve internalized Marx, but because of their own internal psychological architecture. In Rolling Stone, Matt Taibbi, the talented left-wing writer, describes Goldman Sachs (the subject of quite a few conspiracy theories) thus:
The first thing you need to know about Goldman Sachs is that it’s everywhere. The world’s most powerful investment bank is a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money. In fact, the history of the recent financial crisis, which doubles as a history of the rapid decline and fall of the suddenly swindled dry American empire, reads like a Who’s Who of Goldman Sachs graduates.
Marx would be jealous that he didn’t think of the phrase “the great vampire squid.”
Meanwhile, Donald Trump has occasionally traded in the same kind of language, even evoking some ancient anti-Semitic tropes. “Hillary Clinton meets in secret with international banks to plot the destruction of U.S. sovereignty in order to enrich these global financial powers, her special-interest friends, and her donors,” Trump said in one campaign speech. “This election will determine if we are a free nation or whether we have only the illusion of democracy, but are in fact controlled by a small handful of global special interests rigging the system, and our system is rigged.” He added: “Our corrupt political establishment, that is the greatest power behind the efforts at radical globalization and the disenfranchisement of working people. Their financial resources are virtually unlimited, their political resources are unlimited, their media resources are unmatched.”
A second reason Marxism is so successful at fixing itself to the human mind is that it offers—to some—a palatable substitute for the lost certainty of religious faith. Marxism helped to restore certainty and meaning for huge numbers of people who, having lost traditional religion, had not lost their religious instinct. One can see evidence of this in the rhetoric used by Marxist and other socialist revolutionaries who promised to deliver a “Kingdom of Heaven on Earth.”
The 20th-century philosopher Eric Voegelin argued that Enlightenment thinkers like Voltaire had stripped the transcendent from its central place in human affairs. God had been dethroned and “We the People”—and our things—had taken His place. “When God is invisible behind the world,” Voegelin writes, “the contents of the world will become new gods; when the symbols of transcendent religiosity are banned, new symbols develop from the inner-worldly language of science to take their place.”9
The religious views of the Romantic writers and artists Marx was raised on (and whom he had once hoped to emulate) ran the gamut from atheism to heartfelt devotion, but they shared an anger and frustration with the way the new order had banished the richness of faith from the land. “Now we have got the freedom of believing in public nothing but what can be rationally demonstrated,” the writer Johann Heinrich Merck complained. “They have deprived religion of all its sensuous elements, that is, of all its relish. They have carved it up into its parts and reduced it to a skeleton without color and light…. And now it’s put in a jar and nobody wants to taste it.”10
When God became sidelined as the source of ultimate meaning, “the people” became both the new deity and the new messianic force of the new order. In other words, instead of worshipping some unseen force residing in Heaven, people started worshipping themselves. This is what gave nationalism its spiritual power, as the Volksgeist, the people’s spirit, replaced the Holy Spirit. The tribal instinct to belong to a sacralized group took over. In this light, we can see how romantic nationalism and “globalist” Marxism are closely related. They are both “re-enchantment creeds,” as the philosopher-historian Ernest Gellner put it. They fill up the holes in our souls and give us a sense of belonging and meaning.
For Marx, the inevitable victory of Communism would arrive when the people, collectively, seized their rightful place on the Throne of History.11 The cult of unity found a new home in countless ideologies, each of which determined, in accord with their own dogma, to, in Voegelin’s words, “build the corpus mysticum of the collectivity and bind the members to form the oneness of the body.” Or, to borrow a phrase from Barack Obama, “we are the ones we’ve been waiting for.”
In practice, Marxist doctrine is more alienating and dehumanizing than capitalism will ever be. But in theory, it conforms to the way our minds wish to see the world. There’s a reason why so many populist movements have been so easily herded into Marxism. It’s not that the mobs in Venezuela or Cuba started reading The Eighteenth Brumaire and suddenly became Marxists. The peasants of North Vietnam did not need to read the Critique of the Gotha Program to become convinced that they were being exploited. The angry populace is always already convinced. The people have usually reached the conclusion long ago. They have the faith; what they need is the dogma. They need experts and authority figures—priests!—with ready-made theories about why the masses’ gut feelings were right all along. They don’t need Marx or anybody else to tell them they feel ripped off, disrespected, exploited. They know that already. The story Marxists tell doesn’t have to be true. It has to be affirming. And it has to have a villain. The villain, then and now, is the Jew.
1 Muller, Jerry Z. The Mind and the Market: Capitalism in Western Thought (p. 5). Knopf Doubleday Publishing Group. Kindle Edition.
2 Muller, Jerry Z. Capitalism and the Jews (pp. 23-24). Princeton University Press. Kindle Edition.
3 Luther’s economic thought, reflected in his “Long Sermon on Usury of 1520” and his tract On Trade and Usury of 1524, was hostile to commerce in general and to international trade in particular, and stricter than the canonists in its condemnation of moneylending. Muller, Jerry Z. Capitalism and the Jews (p. 26). Princeton University Press. Kindle Edition.
4 Quoted approvingly in Marx, Karl and Engels, Friedrich. “Capitalist Production.” Capital: A Critical Analysis of Capitalist Production, Volume II. Samuel Moore and Edward Aveling, trans. London: Swan Sonnenschein, Lowrey, & Co. 1887. p. 604.
5 Sperber, Jonathan. “Introduction.” Karl Marx: A Nineteenth-Century Life. New York: Liveright Publishing Corporation. 2013. xiii.
6 McCloskey, Deirdre. Bourgeois Dignity: Why Economics Can’t Explain the Modern World. Chicago: University of Chicago Press. p. 142
7 Johnson, Paul. Intellectuals (Kindle Locations 1325-1326). HarperCollins. Kindle Edition.
8 See also: Sunstein, Cass R. and Vermeule, Adrian. “Symposium on Conspiracy Theories: Causes and Cures.” The Journal of Political Philosophy: Volume 17, Number 2, 2009, pp. 202-227. http://www.ask-force.org/web/Discourse/Sunstein-Conspiracy-Theories-2009.pdf
9 Think of the story of the Golden Calf. Moses departs for Mt. Sinai to talk with God and receive the Ten Commandments. No sooner had he left than the Israelites switched their allegiance to a false idol, the Golden Calf, treating a worldly inanimate object as their deity. So it is with modern man. Hence, Voegelin’s quip that for the Marxist “Christ the Redeemer is replaced by the steam engine as the promise of the realm to come.”
10 Blanning, Tim. The Romantic Revolution: A History (Modern Library Chronicles Series Book 34) (Kindle Locations 445-450). Random House Publishing Group. Kindle Edition.
11 Marx: “Along with the constant decrease in the number of capitalist magnates, who usurp and monopolize all the advantages of this process of transformation, the mass of misery, oppression, slavery, degradation and exploitation grows; but with this there also grows the revolt of the working class, a class constantly increasing in numbers, and trained, united and organized by the very mechanism of the capitalist process of production.”
Review of ‘Realism and Democracy,’ by Elliott Abrams
Then, in 1966, Syrian Baathists—believers in a different transnational unite-all-the-Arabs ideology—overthrew the government in Damascus and lent their support to Palestinian guerrillas in the Jordanian-controlled West Bank to attack Israel. Later that year, a Jordanian-linked counter-coup in Syria failed, and the key figures behind it fled to Jordan. Finally, on the eve of the Six-Day War in May 1967, Jordan’s King Hussein signed a mutual-defense pact with Egypt, agreeing to deploy Iraqi troops on Jordanian soil and effectively giving Nasser command and control over Jordan’s own armed forces.
This is just a snapshot of the havoc wreaked on the Middle East by the conceit of pan-Arabism. This history is worth recalling when reading Elliott Abrams’s idealistic yet clearheaded Realism and Democracy: American Foreign Policy After the Arab Spring. One of the book’s key insights is the importance of legitimacy for regimes that rule “not nation-states” but rather “Sykes-Picot states”—the colonial heirlooms of Britain and France created in the wake of the First World War. At times, these states barely seem to acknowledge, let alone respect, their own sovereignty.
When the spirit of revolution hit the Arab world in 2010, the states with external legitimacy—monarchies such as Saudi Arabia, Jordan, Morocco, Kuwait—survived. Regimes that ruled merely by brute force—Egypt, Yemen, Libya—didn’t. The Bashar al-Assad regime in Syria has only held on thanks to the intervention of Iran and Russia, and it is difficult to argue that there is any such thing as “Syria” anymore. What this all proved was that the “stability” of Arab dictatorships, a central conceit of U.S. foreign policy, was in many cases an illusion.
That is the first hard lesson in pan-Arabism from Abrams, now a senior fellow at the Council on Foreign Relations. The second is this: The extremists who filled the power vacuums in Egypt, Libya, Syria, and other countries led Western analysts to believe that there was an “Islamic exceptionalism” at play that demonstrated Islam’s incompatibility with democracy. Abrams effectively debunks this by showing that the real culprit stymieing the spread of liberty in the Middle East was not Islam but pan-Arabism, which stems from secular roots. He notes one study showing that, in the 30 years between 1973 and 2003, “a non-Arab Muslim-majority country was almost 20 times more likely to be ‘electorally competitive’ than an Arab-majority Muslim country.”
Abrams is thus an optimist on the subject of Islam and democracy—which is heartening, considering his experience and expertise. He worked for legendary cold-warrior Senator Henry “Scoop” Jackson and served as an assistant secretary of state for human rights under Ronald Reagan and later as George W. Bush’s deputy national-security adviser for global democracy strategy. Realism and Democracy is about U.S. policy and the Arab world—but it is also about the nature of participatory politics itself. Its theme is: Ideas have consequences. And what sets Abrams’s book apart is its concrete policy recommendations to put flesh on the bones of those ideas, and bring them to life.
The dreary disintegration of the Arab Spring saw Hosni Mubarak’s regime in Egypt replaced by the Muslim Brotherhood, which after a year was displaced in a military coup. Syria’s civil war has seen about 400,000 killed and millions displaced. Into the vacuum stepped numerous Islamist terror groups. The fall of Muammar Qaddafi in Libya has resulted in total state collapse. Yemen’s civil war bleeds on.
Stability in authoritarian states with little or no legitimacy is a fiction. Like the Communist police states before them, these regimes were bound to fall, and the longer they took to do so, the longer the opposition stewed in a balled-up rage. That, Abrams notes, is precisely what happened in Egypt. Mubarak’s repression gave the Muslim Brotherhood an advantage once the playing field opened up: The group had decades of organizing under its belt, a coherent raison d’être, and a track record of providing health and education services where the state lagged. No other parties or opposition groups had anything resembling this kind of coordination.
Abrams trenchantly concludes from this that “tyranny in the Arab world is dangerous and should itself be viewed as a form of political extremism that is likely to feed other forms.” Yet even this extremism can be tempered by power, he suggests. In a democracy, Islamist parties will have to compromise and moderate or be voted out. In Tunisia, electorally successful Islamists chose the former, and it stands as a rare success story.
Mohamed Morsi’s Muslim Brotherhood took a different path in Egypt, with parlous results. Its government began pulling up the ladder behind it, closing avenues of political resistance and civic participation. Hamas did the same after winning Palestinian elections in 2006. Abrams thinks that the odds of such a bait-and-switch can be reduced. He quotes the academic Stephen R. Grand, who calls for all political parties “to take an oath of allegiance to the state, to respect the outcome of democratic elections, to abide by the rules of the constitution, and to forswear violence.” If they keep their word, they will open up the political space for non-Islamist parties to get in the game. If they don’t—well, let the Egyptian coup stand as a warning.
Abrams, to his credit, does not avoid the Mesopotamian elephant in the room. The Iraq War has become Exhibit A in the dangers of democracy promotion. This is understandable, but it is misguided. The Bush administration made the decision to decapitate the regime of Saddam Hussein based on national-security calculations, mainly the fear of weapons of mass destruction. Once the decapitation had occurred, the administration could hardly have been expected to replace Saddam with another strongman whose depravities would this time be on America’s conscience. Critics of the war reverse the order here and paint a false portrait.
Here is where Abrams’s book stands out: He provides, in the last two chapters, an accounting of the weaknesses in U.S. policy, including mistakes made by the administration he served, and a series of concrete proposals to show that democracy promotion can be effective without the use of force.
One mistake, according to Abrams, is America’s favoring of civil-society groups over political parties. These groups do much good, generally have strong English-language skills, and are less likely to be tied to the government or ancien régime. But those are also strikes against them. Abrams relates a story told by former U.S. diplomat Princeton Lyman about Nelson Mandela. Nigerian activists asked the South African freedom fighter to support an oil embargo against their own government. Mandela declined because, Lyman says, there was as yet no serious, organized political opposition party: “What Mandela was saying to the Nigerian activists is that, in the absence of political movements dedicated not just to democracy but also to governing when the opportunity arises, social, civic, and economic pressures against tyranny will not suffice.” Without properly focused democracy promotion, other tools to punish repressive regimes will be off the table.
Egypt offers a good example of another principle: Backsliding must be punished. The Bush administration’s pressure on Mubarak over his treatment of opposition figures changed regime behavior in 2005. Yet by the end of Bush’s second term, the pressure had let up and Mubarak’s misbehavior continued, with no consequences from either Bush or his successor, Barack Obama, until it was too late.
That, in turn, leads to another of Abrams’s recommendations: “American diplomacy can be effective only when it is clear that the president and secretary of state are behind whatever diplomatic moves or statements an official in Washington or a U.S. ambassador is making.” This is good advice for the current Oval Office occupant and his advisers. President Trump’s supporters advise critics of his dismissive attitude toward human-rights violations to focus on what the president does, not what he says. But Trump’s refusal to take a hard line against Vladimir Putin and his recent praise of Chinese President Xi Jinping’s move to become president for life undermine lower-level officials’ attempts to encourage reform.
There won’t be democracy without democrats. Pro-democracy education, Abrams advises, can teach freedom-seekers to speak the ennobling language of liberty, which is the crucial first step toward building a culture that prizes it. And in the process, we might do some ennobling ourselves.