The Study of Man: Desegregation, Law, and Social Science

How did modern social science influence the Supreme Court's decision declaring segregation in public schools unconstitutional?
When, in May 1954, the Supreme Court held that segregation in public education was unconstitutional, it brought to a head another question that is less immediately explosive but perhaps equally important for the future of our legal system. The nine Supreme Court justices unanimously found that segregation in the public schools implies the inferior status of Negro children and retards their "educational and mental development." And the Court added that "this finding is amply supported by modern authority." In what has now achieved a certain fame as "footnote eleven," Chief Justice Warren cited the works of eight social scientists as "modern authority." Here, then, is the crux of the question. To what extent did the Court base its decision on the findings of social science? And if the Court relied on social science to a significant degree, how does this tendency affect the future of both American jurisprudence and American social science?
Among those who deplored as well as among those who welcomed the end of segregation’s legality, there are many who agree that the Court relied mainly on social science in coming to its decision. But not all agree that such reliance is either beneficial or justifiable. Some wonder whether social science is sufficiently “scientific” to provide a firm basis for legal decisions; others ask whether social science may not itself be harmed by having to bear such a heavy burden of responsibility in “practical affairs.”
A little social science can of course be a dangerous thing. Those lawyers (and a very few social scientists) who are disturbed by the Court's apparent confidence in the findings of social research may have a point—for judges may not be adept at using these findings. It is one thing for a judge to consider the testimony of social scientists called in as experts, just as he may consider the testimony of doctors, ballistics experts, and engineers. But it is quite another thing for a judge himself to play the part of social scientist by trying to decide not only whether social science findings are relevant to a given problem, but also what those findings actually are. It is also easy for a judge to overestimate the validity of what he takes to be "findings."
The absurd extent to which such misconceptions can go may be seen in two instances that occurred about a decade ago, when the social sciences in their present form were beginning to exercise an influence on judicial proceedings and rulings. In a case before the New York State Supreme Court, a woman sought to deny her husband, from whom she was legally separated, the right to visit a daughter born to her by artificial insemination at a time when the couple were still united. The judge told her counsel: “If you are successful here, the child will be established as illegitimate. How will that help the child? This court will not lend itself to making any child illegitimate. It would be inhumane, inhuman and contrary to the highest precepts of sociology.” To this judge the “precepts of sociology” were apparently moral judgments, rather than statements about human behavior that all of us interpret according to our own moral conceptions. For him, sociology was a kind of scientific (therefore higher) ethic.
A more serious misapplication of social science in the courts occurred after World War II, when the New York City police, in an effort to restrain juvenile delinquency, launched a campaign to hold parents responsible for their children's misdeeds. In this move to visit the sins of the children upon the parents, the police were encouraged by many amateur social scientists who drew what seemed to them the "obvious" conclusion from "sociological" evidence: that juvenile offenders came from "bad" homes. When the community was faced with the consequences of this brand of "enlightened" law enforcement, it drew back in horror. The confrontation took place in 1947. A judge in the Domestic Relations Court sentenced the mother of a fourteen-year-old delinquent boy to a year in jail. "By your own acts," he told the mother, "you encouraged delinquency in your child. How could it be different with your way of living? Drink after drink—living in one apartment after another with various men." The director of the Society for the Prevention of Crime (Edwin Lukas, now head of the National Affairs Department of the American Jewish Committee) pointed out that the mother had come to this country when she was eight years old, was married off at nineteen to a much older man, had two children by the time she was twenty, and had been abandoned by her husband and left to support the children as best she could. Applying the judge's own reasoning, Mr. Lukas observed that "just as children are seen as the products and victims of their environment, so are parents products and victims, too." If the mother of the boy was the guilty one, what about her mother? The most eloquent verdict on this kind of "sociological" approach came after the mother had served several months of her jail sentence: she was committed to a hospital for the criminally insane. Her attorney commented: "The court sentenced a woman to jail for failing to be responsible for her son's behavior when actually she was not even legally responsible for her own."
The usefulness of social science in the courts is only one aspect of the broader question of its role in the solution of what are called "practical" problems, that is, in public affairs. The rapid multiplication of social problems (or the increase in the public awareness of them) beginning with the depression of the 1930's was accompanied by new tendencies in social science that have convinced potential consumers of research that it would be of help to them. Civic agencies, corporations, and government at all levels have been spending increasing amounts of money on social research. The Federal government leads in all this, disbursing over $50 million annually.1
Just as lawyers and constitutional experts are uncertain about the relevance and value of social science for the law, so is the government divided on the usefulness of social research for administration. Opinions in the legislature range from enthusiastic and hopeful support to impatient dismissal of it as a waste of money. Senator McClellan, for example, in 1953 opposed appropriations for studies of Russian and satellite vulnerability: “What do you get, just a lot of professor theories and all that stuff? Is that what you get out of it? To me, that is simply throwing money away, nothing else. If we have not sense enough in the Army and the Navy and the Defense Department and as American citizens to know how to counteract Soviet propaganda without hiring a bunch of college professors to write out a lot of theories, this Defense Department is in one darn bad shape in my opinion.”
This rejection of research in favor of “common sense” probably expresses the private feelings of most Congressmen. But there is an influential group of legislators who doubt the adequacy of the “common sense” approach. In 1952 the Cox committee investigating the foundations remarked: “Few individuals feel themselves qualified to express an expert opinion on nuclear fission or the value of isotopes but most of us will not hesitate to express our opinions on such homely subjects as divorce, the causes for the increase in the cost of living, the psychological effects of segregation, the increase in juvenile delinquency, or the impact of television on the study habits of children. But these and other subjects within the orbit of the social sciences are proper subjects for objective study and analysis under conditions of control which give promise of revealing scientific facts.”
The introduction of social scientific evidence in judicial proceedings goes back further than the recent controversy over its place in the 1954 segregation cases would indicate. For half a century now, law has known the “Brandeis brief,” an early use of social research in the courts. This is one of those innovations which are hailed as revolutionary by specialists but seem tame, if not obvious, to non-specialists.
In 1908 Louis D. Brandeis, not yet on the bench, represented the State of Oregon before the Supreme Court. The constitutionality of Oregon’s law limiting women to an eight-hour work day in factories was being challenged on the ground that it restricted freedom of contract unreasonably and hence violated the “due process clause” of the Fourteenth Amendment. For twenty years the Supreme Court had been declaring social legislation enacted by the states unconstitutional on the ground that, according to the social philosophy of the judges, it was an “unreasonable” restriction of freedom. To save the Oregon law, therefore, Brandeis had to show that the restriction of the working day for women in factories to a maximum of eight hours was a “reasonable” (hence not unconstitutional) means to protect “public health, safety, morals, or welfare.” He submitted a long brief that made only a cursory legal argument, but offered about a hundred pages reviewing American and foreign legislation restricting the working day for women and quoting many authorities on the ill effects of long hours of work.
Although the Court received the Brandeis brief favorably and upheld the Oregon law, Brandeis's biographer, Alpheus T. Mason, has suggested that the elaborate brief was not the basis of its decision. Instead, says Professor Mason, the Court apparently relied on "common knowledge" about women's physical capacity and the need to protect them through law, "rather than on the knowledge gained from Mr. Brandeis's brief." Yet Professor Mason's point does not tell the whole story. It is true that the Court rested its decision on the ground that the physical vulnerability of women and the importance of their health to the "well-being of the race" were "matters of general knowledge"; therefore the Court did not rely on the social research contained in the Brandeis brief. But it is clear from the decision that the brief did convince the Court about something: that such views about the physical capacity of women were indeed "matters of general knowledge" of which it was entitled to take "judicial cognizance." Although the Brandeis brief presented facts and opinions which, according to the Court, might not be "technically speaking, authorities," yet, the Court added, "they are significant of a widespread belief that woman's physical structure, and the functions she performs in consequence thereof, justify special legislation. . . ." In other words, the Court learned from the Brandeis brief just how widespread was the belief, both among medical experts and the general community, about women's physical capacity upon which it rested its decision upholding the Oregon law.
Did the testimony of social scientists play a similar role in the 1954 segregation cases nearly fifty years after the introduction of the Brandeis brief? The answer is in dispute, for Chief Justice Warren, in his opinion for the unanimous Court, did not, as I have said, refer directly to the testimony given by social scientists in the trial courts, but mentioned “modern authority” on the effects of segregating Negro children and in a footnote cited eight such authorities.
The segregation cases presented the Court with a new issue growing out of an old problem. The old problem was the admission of Negroes to state-supported schools from which they had previously been barred. Since 1938 the Court had held that a Negro was entitled to admission to state institutions of higher learning for whites if "equal facilities" were not provided in colleges and universities reserved for Negroes. In a series of cases, the Court had given an increasingly strict interpretation of what constituted "equal facilities," until in 1950 it declared unanimously that a Negro had to be admitted to the University of Texas Law School because the state's law school for Negroes was inferior in "those qualities which are incapable of objective measurement but which make for greatness in a law school"—that is, in reputation of the faculty, school tradition and prestige, and influence of the alumni. The new issue before the Court was double-barreled: Would the Court require that Negroes be admitted to white schools below the university level? Would the Court accept the argument advanced by the lawyers for the National Association for the Advancement of Colored People (backed by an impressive number of other organizations) that segregated facilities can never be equal facilities?
In the Federal and state courts in which the cases were argued before they reached the Supreme Court, social scientists were called to the witness stand to testify as experts about the ill effects of segregation on the Negro children's personalities—the "intangible" factors. (Other educators were called upon for traditional testimony concerning what the Supreme Court called the "tangible" factors of equal school facilities: plant and equipment, teaching staff, library, and services.) Herbert Hill and Jack Greenberg, associated with the NAACP in these cases, have pointed out the advantages of the oral testimony in their Citizen's Guide to Desegregation (Beacon, 1955): "Much of the same information" given by the social scientists on the stand, they remark, "could have been culled from books and articles and placed in the briefs. But the live witnesses produced a different effect. They could talk about the specific children and schools in the cases. Educators inspected the schools; social scientists examined some of the children who were plaintiffs. The experts were cross-examined, and their testimony was subject to rebuttal; this gave the defendants [arguing for the legality of segregation] a certain opportunity but it enhanced the persuasiveness of the testimony if it could not be shaken."
When the cases reached the Supreme Court in 1952, the NAACP appended to its brief a statement signed by thirty-two sociologists, anthropologists, psychologists, and psychiatrists in which two conclusions were emphasized: (1) segregation adversely affects both white and Negro children; (2) desegregation (which presents problems of execution that firmness will overcome) will lead to “more favorable attitudes and friendlier relations between races.” The social scientists directed their comments not to the “moral and legal issues” but to the “factual issues” of the “consequences of segregation” and the “problems of changing from segregated to unsegregated practices.” They were frank to acknowledge that some of these questions are not finally resolved but lie “on the frontiers of scientific knowledge.” They added, however, that all of them were “in agreement that this statement is substantially correct and justified by the evidence, and the differences among us, if any, are of a relatively minor order and would not materially influence the preceding conclusions.”
In his opinion for the unanimous Court, Chief Justice Warren was cautious and brief but clear. At the outset, he stated that the issue was not whether the facilities in the Negro schools were equal to those in the white schools “with respect to buildings, curricula, qualifications and salaries of teachers, and other ‘tangible’ factors.” What was at issue was whether the doctrine of the Plessy case of 1896, that the provision of “separate but equal” facilities in public accommodations did not violate the Fourteenth Amendment, was applicable to public education in 1954. “We must look,” he said, “to the effect of segregation itself on public education.” He then reviewed briefly the importance of public education today in the community and to the individual; in this he relied on common knowledge, without referring to special studies or social science data.
He next turned to the real issue: does segregation mean inferior educational opportunity for Negro children even if the "tangible" facilities are equal? "We believe," the Court said, "that it does." In coming to this conclusion, the opinion cited the "intangible considerations" upon which the 1950 law school decision was made. "Such considerations," it went on, "apply with added force to children in grade and high schools. To separate them from others of similar age and qualifications solely because of their race generates a feeling of inferiority as to their status in the community that may affect their hearts and minds in a way unlikely ever to be undone." But here Chief Justice Warren felt less sure of general agreement. On the importance of education to the nation and the individual, he could rely confidently on what all of us know and none would deny. On the effects of segregation, however, the community is divided, as evidenced by the segregation laws themselves, the legal arguments in their defense, and the attitudes of many whites in the North as well as in the South. So the Court turned to social science to support its belief about the harmful effects of segregation: "Whatever may have been the extent of the psychological knowledge at the time of Plessy vs. Ferguson [in 1896] this finding is amply supported by modern authority." Whereupon Chief Justice Warren cited some social scientific studies. In overturning the separate-but-equal doctrine that had prevailed for more than half a century, the Chief Justice was careful to go only as far as necessary: "We conclude that in the field of public education the doctrine of 'separate but equal' has no place. Separate educational facilities are inherently unequal." The ground on which the decision rests is made quite clear in the very next sentence: Negroes are, "by reason of the segregation complained of, deprived of the equal protection of the laws guaranteed by the Fourteenth Amendment."
The Court seems to have reasoned somewhat as follows: (1) Public education is vital to the nation and the individual—no one would dispute this. (2) Segregation of Negro children impairs their education. Not everyone agrees that this is so, nor among those who do believe it is there agreement on the extent of the harm segregation causes; but those qualified to judge these matters believe that segregated facilities do mean unequal facilities. The Court, too, finds that they do. (3) Inequality in public educational facilities violates the Fourteenth Amendment.
Where does this line of reasoning base itself upon the findings of social science? In the second step. The Court took the position that segregation is inherently unequal—that is, the very act of enforced segregation implies, as the lone dissenter in the Plessy case put it in 1896, that "colored citizens are so inferior and degraded that they cannot be allowed" to commingle with whites in public facilities. But, aware that the nation was not unanimous on this point, the Court sought to buttress its position by referring to what it must have considered the best knowledge on the subject. The Court was not compelled to mention the "modern authority" it did in footnote eleven; it could simply have stated, quite baldly, that it took "judicial notice" of the fact that segregation means inequality, and relied, as it did, on the equal-protection clause of the Fourteenth Amendment. But the Justices chose to take account of research findings in exactly the same way that their predecessors took account of the data in Brandeis's brief: to buttress their "judicial notice" of the effects of a social situation at the moment of the decision rather than at some earlier time. To outlaw the practice of segregation, the Court relied on a series of precedents for interpreting the Fourteenth Amendment, not on sociology and social psychology. The Court made its own assessment of the meaning of public education and of segregation in our time (not, as Chief Justice Warren plainly said, in "1868 when the Amendment was adopted, or even . . . 1896 when Plessy vs. Ferguson was written"), though it was probably aided in this assessment by the testimony social scientists gave in the lower courts, to which the Supreme Court obliquely referred but which it could have ignored completely without changing its decision in the slightest.
Although we can only guess the effect the testimony of the social scientists had upon the Court, we can feel more certain about evaluating that testimony itself. Both the oral testimony given in the lower courts and the studies cited as "modern authority" in Chief Justice Warren's footnote eleven are very weak indeed and inspire no great confidence. Very little research, good or bad, has been done on just the point that the social scientists were trying to establish: the harmful effect of enforced segregation upon the personalities of Negro children. Of the seven books and articles mentioned in the footnote, only two deal directly with this issue, and they only review the meager data on it. One of these, Personality in the Making (edited by H. L. Witmer and Ruth Kotinsky, 1952), plainly admits: "There has not been much scientific research on the effects of prejudice and discrimination on personality formation." Another study cited by the Court reports the opinions of 517 social scientists as to the effects of segregation on personality; 90 per cent said its effects were harmful. Asked what they based their opinion on—whatever it might be—29 per cent cited their own research. It was odd, as Professor Isidor Chein, who conducted the survey in 1947, pointed out, that so many social scientists should have claimed to have done research on the subject when only a "negligible" amount of material had appeared in print. A survey of the literature just completed by Professor Melvin Tumin (Segregation and Desegregation: A Digest of Recent Research, Anti-Defamation League, 1957) turns up no more than a piece or two of recent research.
It is fortunate that the Supreme Court, although it seems to have been influenced to some degree by the social scientists, did not rest its protection of the rights of minorities on the largely irrelevant books and articles cited in footnote eleven or on the need to establish as a fact that segregation has a harmful personality effect. As Professor Edmond Cahn of the New York University Law School has pointed out, until now we have been entitled to equality under law even if inequality was not harmful. If later decisions hold that the Supreme Court in 1954 rested its opinion on the social scientists’ evidence of damage to personality, then we may reach a point where we shall be entitled to equality under law only when we can show that inequality has been or would be harmful. The point is that segregation itself, irrespective of its consequences, signifies inequality, an illegal, immoral, and scientifically unwarranted proclamation of inferiority. The danger of resting the right to equality under the law upon the ability to prove damage is perversely illustrated in an argument recently advanced by two Southern lawyers (one of them the Attorney General of Georgia). Reminding their readers that the Supreme Court considered that segregation of Negro school children “may affect their hearts and minds in a way unlikely ever to be undone,” the authors comment: “Absent from the opinion was reference to the effect on the hearts and minds of white children and their parents because of enforced commingling with Negro children.” This is the garden path down which the argument about damage to personality leads one. Our right to equality must be protected even if, in someone else’s judgment, it does us no good, or even harms us.
Nonetheless, despite its weak role in the segregation cases, social science can play a useful part in the courts, both to enable judges to interpret legislation in the light of current social needs and to establish facts in dispute. Surprisingly, there has been little cogent analysis by lawyers or social scientists of its part in the segregation cases or in others. Among the more able legal commentators were the late Judge Jerome N. Frank and Professor Cahn. Both have deflated the notion that the Court’s opinion was based on the findings of social science. Both approved the Court’s reliance upon a combination of constitutional provisions and judicial precedent, feeling that social science was far from ready to provide the kind of certainty about human behavior which they considered the law requires of its auxiliary disciplines. “The basic trouble” with social science, Judge Frank said, is that its “generalizations relate to the customs and group beliefs (the mores, the folkways), matters which, especially in a changing modern society, are not readily predictable, because of the numerous elusive and accidental factors. . . .” Professor Cahn asserts: “Among the major impediments continually confronting this science2 are (1) the recurrent lack of agreement on substantive premises, and (2) the recurrent lack of extrinsic, empirical means for checking and verifying inferred results.”
Let us concede these strictures and admit that social science generalizations based upon study of some human beings and groups are not easily applied to others. A simple question remains: do we have more reliable knowledge of human behavior than social science offers? I do not think so.
When they point to the weaknesses of social science, some legal scholars display a misconception about science in general. They seem to want a body of data that is error-proof. There is no such body of data even in the physical sciences. All the sciences make tentative statements of relationships that are frequently upset by new findings or are encompassed by new generalizations. The fact that scientists (social scientists among them) offer their findings modestly, and with the contrary evidence carefully noted, does not detract from the logical superiority of their methods and conclusions. Lawyers, judges, and journalists, for example, are more addicted to pontification and bald assertion, but this does not make their reasoning or their methodology any more reliable. Scientific humility is humility in the face of what remains to be learned and understood; it is not humility before other kinds of approaches or claims. Users of science (especially of social science) are often misled by its humility into believing that there is a better way to learn about the external world or about human behavior. But as Professor Cahn concludes, the issue about social science in the courts is not whether it shall be used but how. “We ought,” he says, “no longer debate the general admissibility of testimony from authentic social-science sources; on the contrary, we ought to welcome and encourage evidence of this kind. Our studies and criticisms should be addressed rather to considerations of weight and materiality.”
Not all lawyers welcome social science in the manner of Judge Frank or Professor Cahn. As a profession, they have been no less protective of their own domain than the other professions. As Morris Ernst recently pointed out: "Other experts are often better qualified than we—or at least as well—for the human problems which come before us. Yet the legal profession jealously guards its traditional prerogatives against 'intruders' from other fields." Lawyers are still skeptical of testimony from physicians even though the courts have long admitted it to a place of importance. Recently, for example, New York City courts experimented with the appointment, by the judge in personal injury trials, of a physician from a panel established by the New York Academy of Medicine and the New York County Medical Society. Reporting the success of the innovation, a special committee of the New York City Bar Association remarks that, even in the case of so venerable an art and science as medicine, "A few judges and lawyers have been disturbed by the special status occupied by an impartial expert. They feel that . . . he may usurp the functions of judge and jury. Medicine, they argue, is not an exact science. . . ." Nor, we might add, is law. So long as judges must be learned only in law, they cannot help but rely on experts when the law, as it does to an increasing degree nowadays, touches human relations in all their ramifications and complexities, from mental states to the nature of the physical world in which we live.
Yet there is no doubt that the social sciences will be called on more and more to testify in judicial proceedings. Are they up to it? This is hardly the question. No science or art is "up" to all that may be demanded of it. Two fundamental points are clear. First, law shows increasing need of knowledge of human behavior. Second, social science is our best—even if it is not a perfect or even reasonably satisfactory—means of acquiring such knowledge. Not all social scientists are happy about such a prospect. The profession as a whole feels a pressure toward objectivity which many fear would be compromised by a role in "practical" affairs. Doubtless, too, some social scientists fear the possibility of disclosing to public view the inadequacies of their discipline more than they fear a compromise of their "objectivity." These groups are very likely a minority. Even if they are not, they will have to stand the test. Social scientists must have the courage of their conviction that they are dealing more reliably than anybody else can with human behavior. Without equating "pure" with "applied" science, I suspect that a social science with no relevance or usefulness in "practical" affairs is not likely to be of much use as a "pure" science either. At the turn of the century Justice Holmes pointed the moral both for law and social science when he wrote: ". . . in the law we only occasionally can reach an absolutely final and quantitative determination, because the worth of competing social ends which respectively solicit a judgment for the plaintiff or the defendant cannot be reduced to number and accurately fixed. . . . But it is of the essence of improvement that we should be as accurate as we can."
1 This amount is three times that spent for similar purposes fifteen or twenty years ago, according to a Brookings Institution report. Social research, however, now represents a much smaller proportion of the government’s total scientific research budget—2 per cent, compared with 24 per cent in 1937. Funds for research in the physical sciences—physics, chemistry, biology, and so on—have increased at a much faster pace. Most of this research, in both the social and the physical sciences, is related to defense needs.
2 Professor Cahn is referring to social psychology, but I am sure he would make the same judgment of all social science.
Marx didn’t supplant old ideas about money and commerce; he intensified them
From the time of antiquity until the Enlightenment, trade and the pursuit of wealth were considered sinful. “In the city that is most finely governed,” Aristotle wrote, “the citizens should not live a vulgar or a merchant’s way of life, for this sort of way of life is ignoble and contrary to virtue.”1 In Plato’s vision of an ideal society (the Republic), the ruling “guardians” would own no property to avoid tearing “the city in pieces by differing about ‘mine’ and ‘not mine.’” In the Laws, he added that “all that relates to retail trade, and merchandise, and the keeping of taverns, is denounced and numbered among dishonourable things.” Only noncitizens would be allowed to indulge in commerce. A citizen who defied the natural order and became a merchant was to be thrown in jail for “shaming his family.”
At his website humanprogress.org, Marian L. Tupy quotes D.C. Earl of the University of Leeds, who wrote that in Ancient Rome, “all trade was stigmatized as undignified … the word mercator [merchant] appears as almost a term of abuse.” Cicero noted in the first century b.c.e. that retail commerce is sordidus (vile) because merchants “would not make any profit unless they lied constantly.”
Early Christianity expanded this point of view. Jesus himself was clearly hostile to the pursuit of riches. “For where your treasure is,” he proclaimed in his Sermon on the Mount, “there will your heart be also.” And of course he insisted that “it is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God.”
The Catholic Church incorporated this view into its teachings for centuries, holding that economics was zero-sum. “The Fathers of the Church adhered to the classical assumption that since the material wealth of humanity was more or less fixed, the gain of some could only come at a loss to others,” the economic historian Jerry Muller explains in his book The Mind and the Market: Capitalism in Western Thought. As St. Augustine put it, “Si unus non perdit, alter non acquirit”—“If one does not lose, the other does not gain.”
The most evil form of wealth accumulation was the use of money to make money—usury. Lending money at interest was unnatural, in this view, and therefore invidious. “While expertise in exchange is justly blamed since it is not according to nature but involves taking from others,” Aristotle insisted, “usury is most reasonably hated because one’s possessions derive from money itself and not from that for which it was supplied.” In the Christian tradition, the only noble labor was physical labor, and so earning wealth from the manipulation of money was seen as inherently ignoble.
In the somewhat more prosperous and market-driven medieval period, Thomas Aquinas helped make private property and commerce more acceptable, but he did not fundamentally break with the Aristotelian view that trade was suspect and the pursuit of wealth was sinful. The merchant’s life was in conflict with the teachings of Christianity if it led to pride or avarice. “Echoing Aristotle,” Muller writes, “Aquinas reasserted that justice in the distribution of material goods was fulfilled when someone received in proportion to his status, office, and function within the institutions of an existing, structured community. Hence Aquinas decried as covetousness the accumulation of wealth to improve one’s place in the social order.”
In the medieval mind, Jews were seen as a kind of stand-in for mercantile and usurious sinfulness. Living outside the Christian community, but within the borders of Christendom, they were allowed to commit the sin of usury on the grounds that their souls were already forfeit. Pope Nicholas V insisted that it is much better that “this people should perpetrate usury than that Christians should engage in it with one another.”2 The Jews were used as a commercial caste the way the untouchables of India were used as a sanitation caste. As Montesquieu would later observe in the 18th century, “whenever one prohibits a thing that is naturally permitted or necessary, the people who engage in it are regarded as dishonest.” Thus, as Muller has argued, anti-Semitism has its roots in a kind of primitive anti-capitalism.
Early Protestantism did not reject these views. It amplified them.3 Martin Luther despised commerce. “There is on earth no greater enemy of man, after the Devil, than a gripe-money and usurer, for he wants to be God over all men…. Usury is a great, huge monster, like a werewolf …. And since we break on the wheel and behead highwaymen, murderers, and housebreakers, how much more ought we to break on the wheel and kill … hunt down, curse, and behead all usurers!”4
It should therefore come as no surprise that Luther’s views of Jews, the living manifestation of usury in the medieval mind, were just as immoderate. In his 1543 treatise On the Jews and Their Lies, he offers a seven-point plan on how to deal with them:
- “First, to set fire to their synagogues or schools.… This is to be done in honor of our Lord and of Christendom, so that God might see that we are Christians …”
- “Second, I advise that their houses also be razed and destroyed.”
- “Third, I advise that all their prayer books and Talmudic writings, in which such idolatry, lies, cursing, and blasphemy are taught, be taken from them.”
- “Fourth, I advise that their rabbis be forbidden to teach henceforth on pain of loss of life and limb… ”
- “Fifth, I advise that safe-conduct on the highways be abolished completely for the Jews. For they have no business in the countryside … ”
- “Sixth, I advise that usury be prohibited to them, and that all cash and treasure of silver and gold be taken from them … ”
- “Seventh, I recommend putting a flail, an ax, a hoe, a spade, a distaff, or a spindle into the hands of young, strong Jews and Jewesses and letting them earn their bread in the sweat of their brow.… But if we are afraid that they might harm us or our wives, children, servants, cattle, etc., … then let us emulate the common sense of other nations such as France, Spain, Bohemia, etc., … then eject them forever from the country … ”
Luther agitated against the Jews throughout Europe, condemning local officials for insufficient anti-Semitism (a word that did not exist at the time, and a sentiment not yet linked to modern biological racism). His demonization of the Jews was derived from more than anti-capitalism. But his belief that the Jewish spirit of commerce was corrupting of Christianity was nonetheless central to his indictment. He sermonized again and again that Christendom must be cleansed of that spirit, whether through the Jews’ conversion, annihilation, or expulsion.
Three centuries later, Karl Marx would blend these ideas together in a noxious stew.
The idea at the center of virtually all of Marx’s economic writing is the labor theory of value. It holds that all of the value of any product can be determined by the number of hours it took for a laborer or laborers to produce it. From the viewpoint of conventional economics—and elementary logic—this is ludicrous. For example, ingenuity, which may not be time-consuming, is nonetheless a major source of value. Surely it cannot be true that someone who works intelligently, and therefore efficiently, provides less value than someone who works stupidly and slowly. (Marx anticipates some of these critiques with a lot of verbiage about the costs of training and skills.) But the more relevant point is simply this: The determinant of value in an economic sense is not the labor that went into a product but the price the consumer is willing to pay for it. Whether it took an hour or a week to build a mousetrap, its value to the consumer is the same if the quality is the same.
Marx had philosophical, metaphysical, and tactical reasons for holding fast to the labor theory of value. It was essential to his argument that capitalism—or what we would now call “commerce” plain and simple—was exploitative by its very nature. In Marx, the term “exploitation” takes a number of forms. It is not merely evocative of child laborers working in horrid conditions; it covers virtually all profits. If all value is captured by labor, any “surplus value” collected by the owners of capital is by definition exploitative. The businessman who risks his own money to build and staff an innovative factory is not adding value; rather, he is subtracting value from the workers. Indeed, the money he used to buy the land and the materials is really just “dead labor.” For Marx, there was an essentially fixed amount of “labor-power” in society, and extracting profit from it was akin to strip-mining a natural resource. Slavery and wage-labor were different forms of the same exploitation because both involved extracting the common resource. In fact, while Marx despised slavery, he thought wage-labor was only a tiny improvement because wage-labor reduced costs for capitalists in that they were not required to feed or clothe wage laborers.
Because Marx preached revolution, we are inclined to consider him a revolutionary. He was not. None of this was a radical step forward in economic or political thinking. It was, rather, a reaffirmation of the disdain for commerce that began with Plato and Aristotle and found new footing in Christianity. As Jerry Muller (to whom I am obviously very indebted) writes:
To a degree rarely appreciated, [Marx] merely recast the traditional Christian stigmatization of moneymaking into a new vocabulary and reiterated the ancient suspicion against those who used money to make money. In his concept of capitalism as “exploitation” Marx returned to the very old idea that money is fundamentally unproductive, that only those who live by the sweat of their brow truly produce, and that therefore not only interest, but profit itself, is always ill-gotten.
In his book Karl Marx: A Nineteenth-Century Life, Jonathan Sperber suggests that “Marx is more usefully understood as a backward-looking figure, who took the circumstances of the first half of the nineteenth century and projected them into the future, than as a surefooted and foresighted interpreter of historical trends.”5
Marx was a classic bohemian who resented the fact that he spent his whole life living off the generosity of, first, his parents and then his collaborator Friedrich Engels. He loathed the way “the system” required selling out to the demands of the market and a career. The frustrated poet turned to the embryonic language of social science to express his angry barbaric yawp at The Man. “His critique of the stultifying effects of labor in a capitalist society,” Muller writes, “is a direct continuation of the Romantic conception of the self and its place in society.”
In other words, Marx was a romantic, not a scientist. Romanticism emerged as a rebellion against the Enlightenment, taking many forms—from romantic poetry to romantic nationalism. But central to all its forms was the belief that modern, commercial, rational life is inauthentic and alienating, and cuts us off from our true natures.
As Rousseau, widely seen as the first romantic, explained in his Discourse on the Moral Effects of the Arts and Sciences, modernity—specifically the culture of commerce and science—was oppressive. The baubles of the Enlightenment were mere “garlands of flowers” that concealed “the chains which weigh [men] down” and led people to “love their own slavery.”
This is a better context for understanding Marx’s and Engels’s hatred of the division of labor and the division of rights and duties. Their baseline assumption, like Rousseau’s, is that primitive man lived a freer and more authentic life before the rise of private property and capitalism. “Within the tribe there is as yet no difference between rights and duties,” Engels writes in The Origin of the Family, Private Property, and the State. “The question whether participation in public affairs, in blood revenge or atonement, is a right or a duty, does not exist for the Indian; it would seem to him just as absurd as the question whether it was a right or a duty to sleep, eat, or hunt. A division of the tribe or of the gens into different classes was equally impossible.”
For Marx, then, the Jew might as well be the real culprit who told Eve to bite the apple. For the triumph of the Jew and the triumph of money led to the alienation of man. And in truth, the term “alienation” is little more than modern-sounding shorthand for exile from Eden. The division of labor encourages individuality, alienates us from the collective, fosters specialization and egoism, and dethrones the sanctity of the tribe. “Money is the jealous god of Israel, in face of which no other god may exist,” Marx writes. “Money degrades all the gods of man—and turns them into commodities. Money is the universal self-established value of all things. It has, therefore, robbed the whole world—both the world of men and nature—of its specific value. Money is the estranged essence of man’s work and man’s existence, and this alien essence dominates him, and he worships it.”
Marx’s muse was not analytical reason, but resentment. That is what fueled his false consciousness. To understand this fully, we should look at how that most ancient and eternal resentment—Jew-hatred—informed his worldview.
The atheist son of a Jewish convert to Lutheranism and the grandson of a rabbi, Karl Marx hated capitalism in no small part because he hated Jews. According to Marx and Engels, Jewish values placed the acquisition of money above everything else. Marx writes in his infamous essay “On the Jewish Question”:
Let us consider the actual, worldly Jew—not the Sabbath Jew … but the everyday Jew.
Let us not look for the secret of the Jew in his religion, but let us look for the secret of his religion in the real Jew.
What is the secular basis of Judaism? Practical need, self-interest. What is the worldly religion of the Jew? Huckstering. What is his worldly God? Money. [Emphasis in original]
The spread of capitalism, therefore, represented a kind of conquest for Jewish values. The Jew—at least the one who set up shop in Marx’s head—makes his money from money. He adds no value. Worse, the Jews considered themselves to be outside the organic social order, Marx complained, but then again that is what capitalism encourages—individual independence from the body politic and the selfish (in Marx’s mind) pursuit of individual success or happiness. For Marx, individualism was a kind of heresy because it meant violating the sacred bond of the community. Private property empowered individuals to live as individuals “without regard to other men,” as Marx put it.
This is the essence of Marx’s view of alienation. Marx believed that people were free, creative beings but were chained to their role as laborers in the industrial machine. The division of labor inherent to capitalist society was alienating and inauthentic, pulling us out of the communitarian natural General Will. The Jew was both an emblem of this alienation and a primary author of it:
The Jew has emancipated himself in a Jewish manner, not only because he has acquired financial power, but also because, through him and also apart from him, money has become a world power and the practical Jewish spirit has become the practical spirit of the Christian nations. The Jews have emancipated themselves insofar as the Christians have become Jews. [Emphasis in original]
He adds, “The god of the Jews has become secularized and has become the god of the world. The bill of exchange is the real god of the Jew. His god is only an illusory bill of exchange.” And he concludes: “In the final analysis, the emancipation of the Jews is the emancipation of mankind from Judaism.” [Emphasis in original]
In The Holy Family, written with Engels, he argues that the most pressing imperative is to transcend “the Jewishness of bourgeois society, the inhumanity of present existence, which finds its highest embodiment in the system of money.” [Emphasis in original]
In his “Theories of Surplus Value,” he praises Luther’s indictment of usury. Luther “has really caught the character of old-fashioned usury, and that of capital as a whole.” Marx and Engels insist that the capitalist ruling classes, whether or not they claim to be Jewish, are nonetheless Jewish in spirit. “In their description of the confrontation of capital and labor, Marx and Engels resurrected the traditional critique of usury,” Muller observes. Or, as Deirdre McCloskey notes, “the history that Marx thought he perceived went with his erroneous logic that capitalism—drawing on an anticommercial theme as old as commerce—just is the same thing as greed.”6 Paul Johnson is pithier: Marx’s “explanation of what was wrong with the world was a combination of student-café anti-Semitism and Rousseau.”7
For Marx, capital and the Jew are different faces of the same monster: “The capitalist knows that all commodities—however shabby they may look or bad they may smell—are in faith and in fact money, internally circumcised Jews, and in addition magical means by which to make more money out of money.”
Marx’s writing, particularly on surplus value, is drenched with references to capital as parasitic and vampiric: “Capital is dead labor which, vampire-like, lives only by sucking living labor, and lives the more, the more labor it sucks. The time during which the worker works is the time during which the capitalist consumes the labor-power he has bought from him.” The constant allusions to the eternal wickedness of the Jew combined with his constant references to blood make it hard to avoid concluding that Marx had simply updated the blood libel and applied it to his own atheistic doctrine. His writing is replete with references to the “bloodsucking” nature of capitalism. He likens both Jews and capitalists (the same thing in his mind) to life-draining exploiters of the proletariat.
Marx writes that the extension of the workday into the night “only slightly quenches the vampire thirst for the living blood of labor,” with the result that “the vampire will not let go ‘while there remains a single muscle, sinew or drop of blood to be exploited.’” As Mark Neocleous of Brunel University documents in his brilliant essay, “The Political Economy of the Dead: Marx’s Vampires,” the images of blood and bloodsucking capital in Das Kapital are even more prominent motifs: “Capital ‘sucks up the worker’s value-creating power’ and is dripping with blood. Lacemaking institutions exploiting children are described as ‘blood-sucking,’ while U.S. capital is said to be financed by the ‘capitalized blood of children.’ The appropriation of labor is described as the ‘life-blood of capitalism,’ while the state is said to have here and there interposed itself ‘as a barrier to the transformation of children’s blood into capital.’”
Marx’s vision of exploitative, Jewish, bloodsucking capital was an expression of romantic superstition and tribal hatred. Borrowing from the medieval tradition of the Catholics as well as from Luther himself, not to mention a certain folkloric poetic tradition, Marx invented a modern-sounding “scientific” theory that was in fact reactionary in every sense of the word. “If Marx’s vision was forward-looking, its premises were curiously archaic,” Muller writes. “As in the civic republican and Christian traditions, self-interest is the enemy of social cohesion and of morality. In that sense, Marx’s thought is a reversion to the time before Hegel, Smith, or Voltaire.”
In fairness to Marx, he does not claim that he wants to return to a feudal society marked by inherited social status and aristocracy. He is more reactionary than that. The Marxist final fantasy holds that at the end of history, when the state “withers away,” man is liberated from all exploitation and returns to the tribal state in which there is no division of labor, no dichotomy of rights and duties.
Marx’s “social science” was swept into history’s dustbin long ago. What endured was the romantic appeal of Marxism, because that appeal speaks to our tribal minds in ways we struggle to recognize, even though it never stops whispering in our ears.
It is an old conservative habit—one I’ve been guilty of myself—of looking around society and politics, finding things we don’t like or disagree with, and then running through an old trunk of Marxist bric-a-brac to spruce up our objections. It is undeniably true that the influence of Marx, particularly in the academy, remains staggering. Moreover, his indirect influence is as hard to measure as it is extensive. How many novels, plays, and movies have been shaped by Marx or informed by people shaped by Marx? It’s unknowable.
And yet, this is overdone. The truth is that Marx’s ideas were sticky for several reasons. First, they conformed to older, traditional ways of seeing the world—far more than Marxist zealots have ever realized. The idea that there are malevolent forces above and around us, manipulating our lives and exploiting the fruits of our labors, was hardly invented by him. In a sense, it wasn’t invented by anybody. Conspiracy theories are as old as mankind, stretching back to prehistory.
There’s ample reason—with ample research to back it up—to believe that there is a natural and universal human appetite for conspiracy theories. It is a by-product of our adapted ability to detect patterns, particularly patterns that may help us anticipate a threat—and, as Mark van Vugt has written, “the biggest threat facing humans throughout history has been other people, particularly when they teamed up against you.”8
To a very large extent, this is what Marxism is: an extravagant conspiracy theory in which the ruling classes, the industrialists, and/or the Jews arrange affairs for their own benefit and against the interests of the masses. Marx himself was an avid conspiracy theorist, as so many brilliant bohemian misfits tend to be, believing that the English deliberately orchestrated the Irish potato famine to “carry out the agricultural revolution and to thin the population of Ireland down to the proportion satisfactory to the landlords.” He even argued that the Crimean War was a kind of false-flag operation to hide the true nature of Russian-English collusion.
Contemporary political figures on the left and the right routinely employ the language of exploitation and conspiracy. They do so not because they’ve internalized Marx, but because of their own internal psychological architecture. In Rolling Stone, Matt Taibbi, the talented left-wing writer, describes Goldman Sachs (the subject of quite a few conspiracy theories) thus:
The first thing you need to know about Goldman Sachs is that it’s everywhere. The world’s most powerful investment bank is a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money. In fact, the history of the recent financial crisis, which doubles as a history of the rapid decline and fall of the suddenly swindled-dry American empire, reads like a Who’s Who of Goldman Sachs graduates.
Marx would be jealous that he didn’t think of the phrase “the great vampire squid.”
Meanwhile, Donald Trump has occasionally traded in the same kind of language, even evoking some ancient anti-Semitic tropes. “Hillary Clinton meets in secret with international banks to plot the destruction of U.S. sovereignty in order to enrich these global financial powers, her special-interest friends, and her donors,” Trump said in one campaign speech. “This election will determine if we are a free nation or whether we have only the illusion of democracy, but are in fact controlled by a small handful of global special interests rigging the system, and our system is rigged.” He added: “Our corrupt political establishment, that is the greatest power behind the efforts at radical globalization and the disenfranchisement of working people. Their financial resources are virtually unlimited, their political resources are unlimited, their media resources are unmatched.”
A second reason Marxism is so successful at fixing itself to the human mind is that it offers—to some—a palatable substitute for the lost certainty of religious faith. Marxism helped to restore certainty and meaning for huge numbers of people who, having lost traditional religion, had not lost their religious instinct. One can see evidence of this in the rhetoric used by Marxist and other socialist revolutionaries who promised to deliver a “Kingdom of Heaven on Earth.”
The 20th-century philosopher Eric Voegelin argued that Enlightenment thinkers like Voltaire had stripped the transcendent from its central place in human affairs. God had been dethroned and “We the People”—and our things—had taken His place. “When God is invisible behind the world,” Voegelin writes, “the contents of the world will become new gods; when the symbols of transcendent religiosity are banned, new symbols develop from the inner-worldly language of science to take their place.”9
The religious views of the Romantic writers and artists Marx was raised on (and whom he had once hoped to emulate) ran the gamut from atheism to heartfelt devotion, but they shared an anger and frustration with the way the new order had banished the richness of faith from the land. “Now we have got the freedom of believing in public nothing but what can be rationally demonstrated,” the writer Johann Heinrich Merck complained. “They have deprived religion of all its sensuous elements, that is, of all its relish. They have carved it up into its parts and reduced it to a skeleton without color and light…. And now it’s put in a jar and nobody wants to taste it.”10
When God became sidelined as the source of ultimate meaning, “the people” became both the new deity and the new messianic force of the new order. In other words, instead of worshipping some unseen force residing in Heaven, people started worshipping themselves. This is what gave nationalism its spiritual power, as the volksgeist, the people’s spirit, replaced the Holy Spirit. The tribal instinct to belong to a sacralized group took over. In this light, we can see how romantic nationalism and “globalist” Marxism are closely related. They are both “re-enchantment creeds,” as the philosopher-historian Ernest Gellner put it. They fill up the holes in our souls and give us a sense of belonging and meaning.
For Marx, the inevitable victory of Communism would arrive when the people, collectively, seized their rightful place on the Throne of History.11 The cult of unity found a new home in countless ideologies, each of which determined, in accord with their own dogma, to, in Voegelin’s words, “build the corpus mysticum of the collectivity and bind the members to form the oneness of the body.” Or, to borrow a phrase from Barack Obama, “we are the ones we’ve been waiting for.”
In practice, Marxist doctrine is more alienating and dehumanizing than capitalism will ever be. But in theory, it conforms to the way our minds wish to see the world. There’s a reason why so many populist movements have been so easily herded into Marxism. It’s not that the mobs in Venezuela or Cuba started reading The Eighteenth Brumaire and suddenly became Marxists. The peasants of North Vietnam did not need to read the Critique of the Gotha Program to become convinced that they were being exploited. The angry populace is always already convinced; it reached its conclusions long ago. The people have the faith; what they need is the dogma. They need experts and authority figures—priests!—with ready-made theories about why the masses’ gut feelings were right all along. They don’t need Marx or anybody else to tell them they feel ripped off, disrespected, exploited. They know that already. The story Marxists tell doesn’t have to be true. It has to be affirming. And it has to have a villain. The villain, then and now, is the Jew.
1 Muller, Jerry Z. The Mind and the Market: Capitalism in Western Thought (p. 5). Knopf Doubleday Publishing Group. Kindle Edition.
2 Muller, Jerry Z. Capitalism and the Jews (pp. 23-24). Princeton University Press. Kindle Edition.
3 Luther’s economic thought, reflected in his “Long Sermon on Usury” of 1520 and his tract On Trade and Usury of 1524, was hostile to commerce in general and to international trade in particular, and stricter than the canonists in its condemnation of moneylending. Muller, Jerry Z. Capitalism and the Jews (p. 26). Princeton University Press. Kindle Edition.
4 Quoted approvingly in Marx, Karl, and Engels, Friedrich. “Capitalist Production.” Capital: A Critical Analysis of Capitalist Production, Volume II. Samuel Moore and Edward Aveling, trans. London: Swan Sonnenschein, Lowrey, & Co., 1887. p. 604.
5 Sperber, Jonathan. “Introduction.” Karl Marx: A Nineteenth-Century Life. New York: Liveright Publishing Corporation, 2013. p. xiii.
6 McCloskey, Deirdre. Bourgeois Dignity: Why Economics Can’t Explain the Modern World. Chicago: University of Chicago Press. p. 142
7 Johnson, Paul. Intellectuals (Kindle Locations 1325-1326). HarperCollins. Kindle Edition.
8 See also: Sunstein, Cass R., and Vermeule, Adrian. “Symposium on Conspiracy Theories: Causes and Cures.” The Journal of Political Philosophy: Volume 17, Number 2, 2009, pp. 202-227. http://www.ask-force.org/web/Discourse/Sunstein-Conspiracy-Theories-2009.pdf
9 Think of the story of the Golden Calf. Moses departs for Mt. Sinai to talk with God and receive the Ten Commandments. No sooner has he left than the Israelites switch their allegiance to a false idol, the Golden Calf, treating a worldly inanimate object as their deity. So it is with modern man. Hence Voegelin’s quip that for the Marxist, “Christ the Redeemer is replaced by the steam engine as the promise of the realm to come.”
10 Blanning, Tim. The Romantic Revolution: A History (Modern Library Chronicles Series Book 34) (Kindle Locations 445-450). Random House Publishing Group. Kindle Edition.
11 Marx: “Along with the constant decrease in the number of capitalist magnates, who usurp and monopolize all the advantages of this process of transformation, the mass of misery, oppression, slavery, degradation and exploitation grows; but with this there also grows the revolt of the working class, a class constantly increasing in numbers, and trained, united and organized by the very mechanism of the capitalist process of production.”
Review of 'Realism and Democracy' by Elliott Abrams
Then, in 1966, Syrian Baathists—believers in a different transnational unite-all-the-Arabs ideology—overthrew the government in Damascus and lent their support to Palestinian guerrillas in the Jordanian-controlled West Bank to attack Israel. Later that year, a Jordanian-linked counter-coup in Syria failed, and the key figures behind it fled to Jordan. Then, on the eve of the Six-Day War in May 1967, Jordan’s King Hussein signed a mutual-defense pact with Egypt, agreeing to deploy Iraqi troops on Jordanian soil and effectively giving Nasser command and control over Jordan’s own armed forces.
This is just a snapshot of the havoc wreaked on the Middle East by the conceit of pan-Arabism. This history is worth recalling when reading Elliott Abrams’s idealistic yet clearheaded Realism and Democracy: American Foreign Policy After the Arab Spring. One of the book’s key insights is the importance of legitimacy for regimes that rule “not nation-states” but rather “Sykes-Picot states”—the colonial heirlooms of Britain and France created in the wake of the two world wars. At times, these states barely seem to acknowledge, let alone respect, their own sovereignty.
When the spirit of revolution hit the Arab world in 2010, the states with external legitimacy—monarchies such as Saudi Arabia, Jordan, Morocco, Kuwait—survived. Regimes that ruled merely by brute force—Egypt, Yemen, Libya—didn’t. The Bashar al-Assad regime in Syria has only held on thanks to the intervention of Iran and Russia, and it is difficult to argue that there is any such thing as “Syria” anymore. What this all proved was that the “stability” of Arab dictatorships, a central conceit of U.S. foreign policy, was in many cases an illusion.
That is the first hard lesson in pan-Arabism from Abrams, now a senior fellow at the Council on Foreign Relations. The second is this: The extremists who filled the power vacuums in Egypt, Libya, Syria, and other countries led Western analysts to believe that there was an “Islamic exceptionalism” at play that demonstrated Islam’s incompatibility with democracy. Abrams effectively debunks this by showing that the real culprit stymieing the spread of liberty in the Middle East was not Islam but pan-Arabism, which stems from secular roots. He notes one study showing that, in the 30 years between 1973 and 2003, “a non-Arab Muslim-majority country was almost 20 times more likely to be ‘electorally competitive’ than an Arab-majority Muslim country.”
Abrams is thus an optimist on the subject of Islam and democracy—which is heartening, considering his experience and expertise. He worked for legendary cold-warrior Senator Henry “Scoop” Jackson and served as an assistant secretary of state for human rights under Ronald Reagan and later as George W. Bush’s deputy national-security adviser for global democracy strategy. Realism and Democracy is about U.S. policy and the Arab world—but it is also about the nature of participatory politics itself. Its theme is: Ideas have consequences. And what sets Abrams’s book apart is its concrete policy recommendations to put flesh on the bones of those ideas, and bring them to life.
The dreary disintegration of the Arab Spring saw Hosni Mubarak’s regime in Egypt replaced by the Muslim Brotherhood, which after a year was displaced in a military coup. Syria’s civil war has seen about 400,000 killed and millions displaced. Into the vacuum stepped numerous Islamist terror groups. The fall of Muammar Qaddafi in Libya has resulted in total state collapse. Yemen’s civil war bleeds on.
Stability in authoritarian states with little or no legitimacy is a fiction. Communist police states were likewise destined to fall, and the longer they took to do so, the longer the opposition stewed in balled-up rage. That, Abrams notes, is precisely what happened in Egypt. Mubarak’s repression gave the Muslim Brotherhood an advantage once the playing field opened up: The group had decades of organizing under its belt, a coherent raison d’être, and a track record of providing health and education services where the state lagged. No other parties or opposition groups had anything resembling this kind of coordination.
Abrams trenchantly concludes from this that “tyranny in the Arab world is dangerous and should itself be viewed as a form of political extremism that is likely to feed other forms.” Yet even this extremism can be tempered by power, he suggests. In a democracy, Islamist parties will have to compromise and moderate or be voted out. In Tunisia, electorally successful Islamists chose the former, and it stands as a rare success story.
Mohamed Morsi’s Muslim Brotherhood took a different path in Egypt, with parlous results. Its government began pulling up the ladder behind it, closing avenues of political resistance and civic participation. Hamas did the same after winning Palestinian elections in 2006. Abrams thinks that the odds of such a bait-and-switch can be reduced. He quotes the academic Stephen R. Grand, who calls for all political parties “to take an oath of allegiance to the state, to respect the outcome of democratic elections, to abide by the rules of the constitution, and to forswear violence.” If they keep their word, they will open up the political space for non-Islamist parties to get in the game. If they don’t—well, let the Egyptian coup stand as a warning.
Abrams, to his credit, does not avoid the Mesopotamian elephant in the room. The Iraq War has become Exhibit A in the dangers of democracy promotion. This is understandable, but it is misguided. The Bush administration made the decision to decapitate the regime of Saddam Hussein based on national-security calculations, mainly the fear of weapons of mass destruction. Once the decapitation had occurred, the administration could hardly have been expected to replace Saddam with another strongman whose depravities would this time be on America’s conscience. Critics of the war reverse the order here and paint a false portrait.
Here is where Abrams’s book stands out: He provides, in the last two chapters, an accounting of the weaknesses in U.S. policy, including mistakes made by the administration he served, and a series of concrete proposals to show that democracy promotion can be effective without the use of force.
One mistake, according to Abrams, is America’s favoring of civil-society groups over political parties. These groups do much good, generally have strong English-language skills, and are less likely to be tied to the government or ancien régime. But those are also strikes against them. Abrams relates a story told by former U.S. diplomat Princeton Lyman about Nelson Mandela. Nigerian activists asked the South African freedom fighter to support an oil embargo against their own government. Mandela declined because, Lyman says, there was as yet no serious, organized political opposition party: “What Mandela was saying to the Nigerian activists is that, in the absence of political movements dedicated not just to democracy but also to governing when the opportunity arises, social, civic, and economic pressures against tyranny will not suffice.” Without properly focused democracy promotion, other tools to punish repressive regimes will be off the table.
Egypt offers a good example of another principle: Backsliding must be punished. The Bush administration’s pressure on Mubarak over his treatment of opposition figures changed regime behavior in 2005. Yet by the end of Bush’s second term, the pressure had let up and Mubarak’s misbehavior continued, with no consequences from either Bush or his successor, Barack Obama, until it was too late.
That, in turn, leads to another of Abrams’s recommendations: “American diplomacy can be effective only when it is clear that the president and secretary of state are behind whatever diplomatic moves or statements an official in Washington or a U.S. ambassador is making.” This is good advice for the current Oval Office occupant and his advisers. President Trump’s supporters advise critics of his dismissive attitude toward human-rights violations to focus on what the president does, not what he says. But Trump’s refusal to take a hard line against Vladimir Putin and his recent praise of Chinese President Xi Jinping’s move to become president for life undermine lower-level officials’ attempts to encourage reform.
There won’t be democracy without democrats. Pro-democracy education, Abrams advises, can teach freedom-seekers to speak the ennobling language of liberty, which is the crucial first step toward building a culture that prizes it. And in the process, we might do some ennobling ourselves.