In November 1989, Richard Herrnstein and I agreed to collaborate on a book that, five years later, became The Bell Curve. It is a book about events at the two ends of the distribution of intelligence that are profoundly affecting American life. At one extreme, transformations in higher education, occupations, and federal power are creating a cognitive elite of growing wealth and influence. At the other extreme, transformations in occupations and social norms are creating a cognitive underclass. “Pressures from these contrasting movements at the opposite ends of society put terrific stress on the entire structure,” we write in the preface, and we spend another 550 pages of main text and 300 pages of supplementary material explaining what we mean, and what we see as the implications for America’s future.
The Bell Curve was released by the Free Press early in October 1994, a few weeks after Richard Herrnstein’s death. The initial reaction was encouraging. Acting on Herrnstein’s suggestion, the American Enterprise Institute (AEI) held a small conference of academics and journalists from various points on the political spectrum soon after the book’s publication. The conference went well, with brisk exchanges about a book on which people had differing opinions but which they discussed over the course of two days as a serious and careful work of scholarship. Two weeks after the conference, Malcolm Browne’s thoughtful review appeared in the New York Times Book Review, as did Peter Brimelow’s long and favorable article in Forbes—still the best published synopsis of The Bell Curve.
Then came the avalanche. It seems likely that The Bell Curve will be one of the most written-about and talked-about works of social science since the Kinsey Report 50 years ago. Most of the comment has been virulently hostile. The book is said to be the flimsiest kind of pseudo-science. Designed to promote a radical political agenda. A racist screed. Methodologically pathetic. Tainted by the work of neo-Nazis.
“Never,” my AEI colleague Michael Ledeen observes, “has such a moderate book attracted such an immoderate response.” This is the central irony connected with the reaction to The Bell Curve. For if any one generalization can be made about a work as long and diverse as The Bell Curve, it is that the book is relentlessly moderate—in its language, its claims, its science. It is filled with “on the one hand, . . . on the other hand” discussions of the evidence, presentations of competing explanations, cautions that certain issues are still under debate, and encouragement of other scholars to explore unanswered questions that go beyond the scope of our own work. The statistical analysis is standard and straightforward.
Why then the hysteria? The obvious answer is race, the looming backdrop to all discussion of social policy in the United States. Ever since the first wave of attacks on the book, I have had an image of The Bell Curve as a sort of literary Rorschach test. I do not know how to explain the extraordinary discrepancy between what The Bell Curve actually says about race and what most commentators have said that the book says, except as the result of some sort of psychological projection onto our text.
Other factors are at work as well. Michael Novak (who has written favorably about The Bell Curve) and Thomas Sowell (who has his criticisms of the book) have pointed out in similar terms that the Left has invested everything in a few core beliefs about society as the cause of problems, government as the solution, and the manipulability of the environment for reaching the goal of equality. For the Left, as Novak puts it, The Bell Curve's
message cannot be true, because much more is at stake than a particular set of arguments from psychological science. A this-worldly eschatological hope is at stake. The sin attributed to Herrnstein and Murray is theological: they destroy hope.
I am sure Novak and Sowell are on the right track. The underlying reasons for the reaction to The Bell Curve will turn out to be significant in their own right, revealing much about the intellectual temper of our era. But perspective on those reasons must wait for some years. Let me make a more limited prediction: when the Sturm und Drang has subsided, nothing important in The Bell Curve will have been overturned. I say this not because Herrnstein and I were especially far-sighted, but because our conclusions are so cautiously phrased and our findings anchored so securely in the middle of the scientific road.
In the meantime I want to present my own assessment of where the debate stands. The problem is how to do it within a reasonable space and how to avoid being overtaken by events. A first wave of reviews and commentaries in the major media appeared between October 1994 and January of this year. A second wave, consisting of reviews in the academic journals, is on the way. I have already seen manuscript copies of some of these reviews, often highly technical, that will be published over the course of the next year.
The volume of all this material reaches many hundreds of pages. To comment in detail on even a single one of the major reviews would require an article the length of this one. I will use this space instead to present a general proposition about The Bell Curve, and to illustrate it with examples.
My proposition is that the critics of The Bell Curve are going to produce the very effects that their attacks have been intended to avert. I am not here referring to the book’s popularity with the reading public (it spent fifteen weeks on the New York Times bestseller list), although it seems true that the descriptions of The Bell Curve as an angry, racist polemic have led people in bookstores to pick it up to see what the fuss is about. The pages to which they turn are nothing like what they expect, their curiosity is piqued, and some of them buy it.
But the unintended consequences I have in mind go far beyond the sales that the attacks have stimulated. The attacks are also likely to affect intellectual trends. I foresee a three-stage process.
In the first stage, a critic approaches The Bell Curve absolutely certain that it is wrong. He feels no need to be judicious or to explore our evidence in good faith. He seizes upon the arguments that come to hand to make his point and publishes them, with the invective and dismissiveness that seem to be obligatory for a Bell Curve critic.
In the second stage, the attack draws other scholars to look at the issue. Many of them share the critic’s initial assumption that The Bell Curve is wrong. But they nonetheless start to look at evidence they would not have looked at otherwise. They discover that the data are interesting. Some of them back off nervously, but others are curious. They look farther. And it turns out that there is much more out there than Herrnstein and I try to claim.
In stage three, these scholars start to produce new material on the topics that had come under attack in the first place. I doubt that many will choose to defend The Bell Curve, but they will build on its foundation and ultimately do far more damage to the critics’ “eschatological hope” than The Bell Curve itself did.
I will give four examples of these unintended outcomes, drawing from the attacks on the “pseudo-science” of a general-intelligence factor; on the link between genes and race differences in IQ; on the power of the statistical evidence; and on our pessimistic assessment of society’s current attempts to raise IQ through outside interventions.
Much of the attack on The Bell Curve's science has been mounted not against anything in the book itself but against the psychometric tradition on which it is based. Specifically, Herrnstein and I accept that there is such a thing as a general factor of cognitive ability on which human beings differ: the famous g.
Ever since the late 1960’s, when IQ became a pariah in the world of ideas, this has been a politically incorrect position to take. In the early 1980’s, a book by Stephen Jay Gould, The Mismeasure of Man, cemented the discrediting of g among liberals outside the scientific community. His portrait of psychometrics as a pseudo-science pursued by charlatans was swallowed uncritically and enthusiastically by the elite media, as documented by Mark Snyderman and Stanley Rothman in The IQ Controversy: The Media and Public Policy (1988).
A central thesis of The Mismeasure of Man was that g is nothing more than a statistical artifact. Gould based his denial of a general mental factor on a series of claims about factor analysis, the statistical method for identifying g.
In a review of The Bell Curve in the New Yorker, Gould resurrects the same arguments. Echoing The Mismeasure of Man, he writes: “g cannot have inherent reality . . . for it emerges in one form of mathematical representation for correlations among tests and disappears (or greatly attenuates) in other forms, which are entirely equivalent in amount of information explained.” He continues: “The fact that Herrnstein and Murray barely mention the factor-analytic argument forms a central indictment of The Bell Curve and is an illustration of its vacuousness.” Where, Gould asks, is the evidence that g “captures a real property in the head”?
The reason that we “barely mention the factor-analytic argument” against the existence of g is that it has little scholarly standing. Gould’s statistical indictment of g was refuted in various scientific quarters soon after the appearance of The Mismeasure of Man, and research into g proceeded without a noticeable blip.1
To see what this particular fight is about, a little background is essential. One of the earliest findings about mental tests was that the results of different tests of apparently different mental skills were positively correlated. Charles Spearman, the British founding father of modern psychometrics, was the first to hypothesize that they were correlated because each was tapping into a common construct—the general mental ability he then labeled g. Factor analysis was the method he used to extract this general factor that accounted for the inter-correlations among subtests.
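Spearman’s logic can be illustrated with simulated data. In the sketch below, all numbers are invented for illustration: six subtest scores share a single common factor, and the leading factor of their correlation matrix, an estimate of g, accounts for the bulk of the shared variance, just as the text describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical model: each of six subtest scores equals a shared factor g
# times a loading, plus an independent test-specific component.
g = rng.standard_normal(n)
loadings = np.array([0.8, 0.7, 0.75, 0.6, 0.65, 0.7])
specifics = rng.standard_normal((n, 6)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + specifics

# All subtests correlate positively; the first eigenvalue of the
# correlation matrix (the "general factor") dominates the rest.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # largest first
print(eigvals[0] / eigvals.sum())
```

Rotating the factors differently redistributes this variance across several factors, which is Thurstone’s alternative; but the shared variance itself does not disappear.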
Another pioneering psychometrician, L.L. Thurstone, who in the 1930’s became Spearman’s great antagonist by demonstrating how factor analysis need not yield a dominant general factor, is the hero of Gould’s story. Gould is correct in stating that there are alternative methods with the same overall power to account for the correlations among the tests. But he is wrong when he implies that by using an alternative method, an analyst can get rid of g. As Richard Herrnstein liked to say, “You can make g hide, but you can’t make it go away.”2
Hence the frustration among psychometricians who have tried to make it go away. After applying the particular factor-analytic method that prevented g from emerging, they had nowhere to take the results. If they labeled their independent factors as distinct mental skills and developed a research agenda based on them, they got crushed by critics who could demonstrate that their results were more elegantly explained by g. Indeed, g not only explained more variance than any other factor, it typically explained three times as much variance as all other factors combined.
But one need not rely only on statistical validation of g. By now there is also a growing body of evidence that links g (and IQ scores more generally) with neurophysiological functioning.3 An even larger body of evidence, covered in The Bell Curve, demonstrates g’s value for predicting academic achievement and job performance.
Gould’s position, then, has been thoroughly discredited among scholars, however dominant it remains in the media. Had he kept quiet about The Bell Curve or attacked it on other grounds, his view might have continued to hold sway there. But when he repeated the same arguments in his New Yorker review—which I am told has been triumphantly circulated by nonpsychologists as the canonical refutation of The Bell Curve—he accomplished something that Herrnstein and I could not have done: he made scholars who know what the evidence shows angry enough to go public.
By and large, scholars in the field of intelligence are reclusive—the experiences of people like Arthur Jensen, Hans Eysenck, and Richard Herrnstein himself taught them that the consequences of being visible can be extremely punishing—and many of them were additionally disinclined to jump to the defense of a book co-authored by someone with my reputation as a right-winger. But Gould and, less visibly, his Harvard colleague Howard Gardner in a review of The Bell Curve in American Prospect, were saying things that were palpably wrong about a topic of deep importance to professionals in the field.
Some of these professionals responded with outraged letters to the New Yorker (none was printed). Then came a statement signed by 52 scholars and published in the Wall Street Journal in which all the main scientific findings of The Bell Curve were endorsed (without any explicit mention of the book or its critics). I also hear second-hand of incidents in which reporters have called scholars about “this pseudo-science g business” and received an answer they did not expect. The effects of the backlash are still taking shape, but the media may finally be getting the message. The big unreported story about the study of intelligence in the last decade is the remarkable resilience and importance of g.
I come now to the second example of how the attacks on The Bell Curve are likely to have unintended consequences: the determination of the critics to focus on race and genes, even though The Bell Curve does not.
The Bell Curve draws three important conclusions about intelligence and race: (1) All races are represented across the range of intelligence, from lowest to highest. (2) American blacks and whites continue to have different mean scores on mental tests, with the difference varying from test to test but usually about one standard deviation in magnitude—about fifteen IQ points. “One standard deviation” means roughly that the average black American scores at the sixteenth percentile of the white distribution. (3) Mental-test scores are generally as predictive of academic and job performance for blacks as for other ethnic groups. Insofar as the tests are biased at all, they tend to overpredict, not underpredict, black performance.
These facts are useful in the quest to understand why (for example) occupational and wage differences separate blacks and whites, or why aggressive affirmative action has produced academic apartheid in our universities. More generally, Herrnstein and I write that a broad range of American social issues cannot be interpreted without understanding the ways in which intelligence plays a role that is often, and wrongly, conflated with the role of race. When it comes to government policy, there was in our minds just one authentic implication: return as quickly as possible to the cornerstone of the American ideal that people are to be treated as individuals, not as members of groups.
The furor over The Bell Curve and race has barely touched on these core points. Instead, the critics have been obsessed—no hyperbole here—with genes, trying to stamp out any consideration of the possibility that race differences have a genetic component.
For the record, what we said about genes, IQ, and race in the book is that a legitimate scientific debate is under way about the relationship of genes to race differences in intelligence; that it is scientifically prudent at this point to assume that both environment and genes are involved, in unknown proportions4; and, most importantly, that people are getting far too excited about the whole issue. Genetically caused differences are not as fearful, nor environmentally caused differences as benign, as many think. What matters is not the source but the existence of group differences, and their intractability (for whatever reasons).
Six months into my post-Bell Curve life, I have concluded that Herrnstein and I were prematurely right on this point. Certainly we were right empirically when we observed that the public at large is fascinated by the possibility of genetic differences, and that the intellectual elites have been “almost hysterically in denial about that possibility,” as we put it in the book. I think we were also right in trying to dampen that fascination. But listening to some of my most loyal friends who insist that I must be disingenuous when I continue to say that the genetic question is not a big deal, I have to conclude that we failed to make our case persuasively (on pp. 311-15 of The Bell Curve).
Yet the critics, in insisting that the issue of genes really is a big deal, are once again going to produce the very effect they want to avert. In this instance, they have based their attacks on the premise that a full, fair look at the data will make the issue go away. None appears to have recognized that Herrnstein and I did not make nearly as aggressive a case for genetic differences as the evidence permits.
The most abundant source of data that we downplayed is in the work of J. Philippe Rushton, a Canadian psychologist who since 1985 has been publishing increasingly detailed material to support his theory that the three races he labels Negroid, Caucasoid, and Mongoloid vary not just in intelligence but in a wide variety of characteristics. We put our brief discussion of Rushton in an appendix. The critics of The Bell Curve are putting him on the front page, often outrageously caricaturing his work.5 The trouble with this strategy is that Rushton is a serious scholar who has assembled serious data. The attacks on The Bell Curve ensure that those data will get attention.
A related example is the charge that The Bell Curve is based on “tainted sources.” Charles Lane introduced this theme with an article in the New Republic and then a much longer one in the New York Review of Books. In the latter piece, he proclaimed that “No fewer than seventeen researchers cited in the bibliography of The Bell Curve have contributed to Mankind Quarterly, a notorious journal of ‘racial history’ founded, and funded, by men who believe in the genetic superiority of the white race.” Lane also discovered that we cited thirteen scholars who had received grants from the Pioneer Fund, established and run (he alleged) by men who were Nazi sympathizers, eugenicists, and advocates of white racial superiority. Leon Kamin, a vociferous critic of IQ in all its manifestations, took up the same argument at length in his review of The Bell Curve in Scientific American.
Never mind that The Bell Curve draws its evidence from more than 1,000 sources. Never mind that among the scholars in Lane’s short list are some of the most respected psychologists of our time, and that the “tainted sources” consist overwhelmingly of articles that were published in respected and refereed journals. Never mind that the relationship between the founder of the Pioneer Fund and today’s Pioneer Fund is roughly analogous to the relationship between Henry Ford and today’s Ford Foundation. The real effect of Lane and Kamin’s work will be to focus academic attention on the main substantive issue they discuss relative to our “tainted sources,” African IQ.
The topic of African IQ is a tiny piece of The Bell Curve: a three-paragraph section in chapter 13 intended to address a hypothesis Herrnstein and I heard frequently, that the test scores of American blacks have been depressed by the experience of slavery. We briefly summarize the literature indicating that African blacks in fact have lower test scores than American blacks.
Lane and Kamin assault this conclusion ferociously. We make a soft target—since we say so little about African IQ, it is easy for Lane and Kamin to point to the many technical difficulties of knowing exactly what is going on. But The Bell Curve also omits many details from a literature making a strong case that African blacks do have extraordinarily low scores on standardized mental tests, including ones especially designed for illiterate non-Western subjects. Lane and Kamin want this literature to be weak and racist. It is not, and it bears importantly, if inconclusively, on possible racial genetic differences.
When the story of African IQ is eventually untangled, the safest bet is that the roles of nutrition, education, culture, and genes in the development of cognitive functioning will turn out to be complex and intertwined. In other words, I still think Herrnstein and I were right, if prematurely: it is possible to live with the truth about genes and race, whatever it may be, without changing one’s mind about how a liberal society should function. But whether we were right or wrong, the violent reaction is making sure that the full range of data will be brought to public attention.
The third line of attack on The Bell Curve that I predict will have an unintended outcome is the attempt to dismiss the statistical power of the book’s results.
Perhaps the most important section of The Bell Curve is Part II, “Cognitive Classes and Social Behavior.” It describes the relationship of IQ to poverty, school-dropout rates, unemployment, divorce, illegitimacy, welfare, parenting, crime, and citizenship. To avoid the complications associated with race, it does all this for a sample of whites, using the National Longitudinal Survey of Youth.
The eight chapters in Part II deal with questions like: “What role does IQ play in determining whether a woman has a baby out of wedlock?” Or: “What are the comparative roles of socioeconomic disadvantage and IQ in determining whether a youngster grows up to be poor as an adult?” These are fascinating questions. But you will have a hard time figuring out from the published COMMENTARY on The Bell Curve that such questions were even asked, let alone what the answers were.
Instead, the main line of attack has been that there is really no need to pay any attention to those chapters, because Herrnstein and Murray confuse correlation with causation; because IQ really does not explain much of the variance anyway; and because the authors’ measure of socioeconomic background is in any case deficient. On all three counts, the critics are setting up a reexamination of the existing technical literature on social problems that will be intellectually embarrassing to them in the end.
First, regarding correlation and causation, here, boiled down, is what we say in the introduction to Part II: The nonexperimental social sciences cannot demonstrate unequivocal causality. In trying to explain such things as poverty, illegitimacy, and crime, we will use statistics to show what independent role is left for IQ after taking a person’s age, socioeconomic background, and education into account. When there are other obvious explanations—family structure, say—we will take them into account as well. Apart from the statistics, we will describe in common-sense terms what the nature of the causal link might be—why, for example, a poor young woman of low intelligence might be more likely to have a baby out of wedlock than a poor young woman of high intelligence. At the end of this exercise, repeated in similar form for each of the eight chapters in Part II, there will still be unanswered questions, and we will point to many of those unanswered questions ourselves. But the reader will know more than he knew before, and the way will be opened for further explorations by our colleagues.
The statistical method we use throughout is the basic technique for discussing causation in nonexperimental situations: regression analysis, usually with only three independent variables. We interpret the results according to accepted practice. To enable readers to check for themselves, we include the printout of all the results in Appendix 4.
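For readers unfamiliar with the technique, a regression of this kind can be sketched in a few lines. The data below are synthetic stand-ins, not the NLSY; the point is only the mechanics of estimating IQ's coefficient net of other variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Invented, standardized variables: IQ and SES are made to correlate,
# and the outcome is constructed to depend mostly on IQ.
iq = rng.standard_normal(n)
ses = 0.4 * iq + rng.standard_normal(n) * 0.9
age = rng.standard_normal(n)
outcome = 0.5 * iq + 0.1 * ses + 0.05 * age + rng.standard_normal(n)

# Ordinary least squares: outcome ~ intercept + IQ + SES + age.
# The IQ coefficient is its effect after the other variables are
# taken into account.
X = np.column_stack([np.ones(n), iq, ses, age])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(round(coefs[1], 2))
```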
The assault on this modest analysis has been led by Leon Kamin in Scientific American. There he argues that the role of IQ cannot be disentangled from socioeconomic background; he suggests that in our database the children of laborers have such uniformly low IQ scores that no one can possibly tell whether the low IQ or the disadvantaged background is to blame for the higher rates of crime, unemployment, and illegitimacy that afflict such youngsters. “The significant question,” Kamin writes, “is, why don’t the children of laborers acquire the skills that are tapped by IQ tests?”
My answer to his significant question is: “Often, they do acquire such skills,” which is what makes the data so interesting. In America, bright children of laborers tend to do quite well in life, despite their humble origins. Conversely, dull children from privileged homes tend to do poorly, despite all the help their parents lavish on them.
Herrnstein and I contend that such patterns point to causation. This is indeed an inference—a sensible inference.
We approached the correlation/causation tangle in other sensible ways as well. Consider the vexing case of education. People with high IQ’s tend to spend many years in school; people with low IQ’s tend to leave. Does the IQ cause the years of education, or the years of education the IQ?
For various technical reasons, simply entering education as an additional independent variable is unwise. So instead we defined two subsamples, each with the same amount of education—one of adults who had completed exactly twelve years of school and obtained a high-school diploma, no more and no less; the other of adults who had completed exactly sixteen years of school and obtained a bachelor’s degree, no more and no less. For each topic, we accompanied the analysis of the entire sample with separate analyses of the high-school and college samples. Thus the reader could take a look at the independent effect of IQ for people with identical education.
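The subsampling logic can be sketched as follows, again with invented data: estimate IQ's coefficient separately within groups whose years of schooling are identical by construction, so education cannot be doing the work.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3000

# Illustrative sample (assumed fields, not the NLSY): schooling, IQ,
# and an outcome influenced by both.
years_school = rng.choice([10, 12, 14, 16], size=n)
iq = rng.standard_normal(n) + 0.3 * (years_school - 12)
outcome = 0.5 * iq + 0.1 * (years_school - 12) + rng.standard_normal(n)

# Within a subsample holding education fixed, any remaining IQ effect
# is independent of education by construction.
results = {}
for yrs in (12, 16):
    mask = years_school == yrs
    X = np.column_stack([np.ones(mask.sum()), iq[mask]])
    b, *_ = np.linalg.lstsq(X, outcome[mask], rcond=None)
    results[yrs] = b[1]
    print(yrs, round(b[1], 2))
```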
Our procedure has irritated a number of academic critics (notably James Heckman and Arthur Goldberger) who grumble that the state of the art permits much more. Yes, it does, and in the book we say how much we look forward to watching our colleagues apply those more sophisticated techniques to the unanswered questions. But more sophisticated modeling techniques would also have opened a wide variety of technical problems that we wanted to avoid. The procedure we chose gave an excellent means of bounding the independent effects of education, and that was our purpose.
But let us say a critic grants the existence of independent relationships between IQ and social outcomes after holding other plausible causes constant. How important are these “independent relationships”? Hardly at all, says Stephen Jay Gould: The Bell Curve can safely be dismissed because IQ explains so little about the social outcomes in question—just a few percent of the variance, in the statistician’s jargon.
Here is the truth: the relationships between IQ and social behaviors that we present in The Bell Curve are not only “significant” in the standard statistical sense of that phrase, they are powerful in a substantive sense, often much more powerful than the relationships linking social behaviors with the usual suspects (education, social status, affluence, ethnicity). In fact, Herrnstein and I actually understate the strength of the statistical record in The Bell Curve. The story is complex, but worth recounting because it tells so much about the academic response to The Bell Curve.
In ordinary multiple-regression analysis, “independent variables,” the hypothesized causes, are related to a “dependent variable,” the hypothesized effect. Two statistics are of special interest. The first is the set of regression coefficients, one for each independent variable, which tell you the magnitude of the effect each independent variable has on the dependent variable after taking the role of all the other independent variables into account. Each coefficient has a standard error, which may be used to determine whether the coefficient is statistically significant (i.e., unlikely to have been produced by chance). The second statistic of special interest is the square of the multiple correlation, written as R² (pronounced “r square”), which tells you how much of the variance in the dependent variable is explained by all the independent variables taken together.
One of the first things graduate students learn about multiple regression is the different uses of regression coefficients and R². If you have a coefficient with a large value and small standard error, it is typically the statistic of main interpretive importance, while R² is of secondary and sometimes trivial importance. Such is the case with the kind of analysis in The Bell Curve, for reasons we explain in Appendix 4.
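The distinction matters because a large, precisely estimated coefficient can coexist with a small R². A synthetic example, with numbers invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000

# The predictor genuinely matters (true coefficient 0.3), but individual
# outcomes remain noisy, so the share of variance explained stays small.
x = rng.standard_normal(n)
y = 0.3 * x + rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])
b, res, *_ = np.linalg.lstsq(X, y, rcond=None)
r_squared = 1 - res[0] / ((y - y.mean())**2).sum()
print(round(b[1], 2), round(r_squared, 3))
```

The coefficient is estimated with great precision, yet R² is under ten percent; dismissing the relationship on the strength of the R² alone would be a mistake.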
In all this, we treat our data as our colleagues around the country treat regression results every day. There is nothing controversial here—as evidenced by the fact that none of the quantitative social scientists who reviewed this part of our manuscript before publication raised a question about our methods.
But that is not the end of the story. Herrnstein and I make reference to the R²s in Appendix 4 as if they represent “explained variance”—and thereby we commit a technical error, falsely understating the overall explanatory power of our statistics. In logistic regression analysis—the particular type we use throughout Part II—the statistic labeled R² is an ersatz and unsatisfactory attempt to express the model’s goodness-of-fit. Most statisticians to whom I have talked since say we should have ignored it altogether. Stephen Jay Gould, and others who are making the same criticism he does, have fallen into the same error.
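The technical point can be made concrete. In the sketch below (synthetic data and a from-scratch Newton fit, not any particular statistical package), the quantity usually reported as a logistic model's "R²" is McFadden's pseudo-R², a goodness-of-fit index rather than a share of explained variance:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Invented binary outcome: its probability rises with a single predictor.
x = rng.standard_normal(n)
p_true = 1 / (1 + np.exp(-(0.8 * x - 1.0)))
y = (rng.random(n) < p_true).astype(float)

# Fit a logistic regression by Newton's method (iteratively
# reweighted least squares).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)
    hess = X.T @ (X * (mu * (1 - mu))[:, None])
    beta += np.linalg.solve(hess, grad)

# McFadden's pseudo-R²: one minus the ratio of the fitted model's
# log-likelihood to that of an intercept-only model. Its scale is not
# comparable to an ordinary-least-squares R².
mu = 1 / (1 + np.exp(-X @ beta))
ll_model = np.sum(y * np.log(mu) + (1 - y) * np.log(1 - mu))
p_bar = y.mean()
ll_null = n * (p_bar * np.log(p_bar) + (1 - p_bar) * np.log(1 - p_bar))
pseudo_r2 = 1 - ll_model / ll_null
print(round(beta[1], 2), round(pseudo_r2, 3))
```

Even with a strong, accurately recovered coefficient, the pseudo-R² is small; treating it as "variance explained" understates the model, which is the error described above.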
It would be nice if a few respected professors would publicly point out that, whatever else one might think about The Bell Curve, the criticisms of the book’s small R²s are wrong. But this is unlikely to happen. Probably the allegation will quietly fade away, as the academics who know the true story discreetly impart the news to those who do not.
The unfounded criticisms of the statistics in The Bell Curve that I have discussed so far will merely cause embarrassment among a few who both understand the issues and have the decency to be embarrassed. The real potential for backfire in the statistical critique of The Bell Curve comes from the attack on our use of socioeconomic status (SES).
Measures of SES are a staple in the social sciences. Leaf through the dozens of technical articles in sociology and economics dealing with issues of success and failure in American life, and you will frequently find a measure of SES as part of the analysis. A major purpose of The Bell Curve was to add IQ to SES as an explanatory variable. To avoid controversy, we deliberately constructed an SES index that uses the same elements everybody else uses: income, occupation, and education. We did not have an a priori reason for weighting any of these more heavily than the others, so we converted them to what are called “standard scores” and added them up to get our index—all of which would ordinarily have caused no comment.
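The index construction is as simple as it sounds. A sketch with invented component variables, chosen only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# Illustrative raw components of an SES index; their units differ wildly,
# which is why standard scores are used.
income = rng.lognormal(mean=10.5, sigma=0.6, size=n)     # dollars
education = rng.integers(8, 21, size=n).astype(float)    # years
occupation = rng.integers(1, 100, size=n).astype(float)  # prestige scale

def standardize(v):
    """Convert to standard scores: mean 0, standard deviation 1."""
    return (v - v.mean()) / v.std()

# Equal-weight index: standardize each component, then sum.
ses_index = standardize(income) + standardize(education) + standardize(occupation)
print(round(ses_index.mean(), 6))  # centered at zero by construction
```

Standardizing puts each component on a common scale, so summing them weights each equally, which is the neutral choice described above.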
But when it comes to The Bell Curve, a standard SES index suddenly becomes problematic. James Heckman notes ominously that we do not have income data for a large part of the sample. Arthur Goldberger looks suspiciously on the idea of standardizing the variables. Leon Kamin hypothesizes that probably the self-reports of income, education, and occupation are exaggerated to a degree that falsely produces the relationships we report.
My response to such criticisms is, fine, let us test out these potential problems. Compare the results for the subsamples with and without income data. Do not standardize the variables; create some other scales and use some other method of combining them. Examine the validity of the self-report data. Examine what happens when the constituent variables are entered separately instead of as an index.
As scholars are supposed to do, Herrnstein and I checked out these and many other possibilities—the results reported in The Bell Curve were triangulated in numbing detail over the years we worked on the book—and we knew what the critics who bothered to retrace our steps would discover: that there is no way to construct a measure of socioeconomic background using the accepted constituent variables that makes much difference in the independent role of IQ. In the jargon, our measure of SES is robust, and as valid as everyone else’s has been.
But there’s the rub: how valid has everyone else’s been? Until The Bell Curve came along, measures of SES similar to ours were used without a second thought. Now, suddenly, they are to be questioned. I doubt whether the profession will be able to confine the questioning to just The Bell Curve. What Herrnstein and I have done, in effect, is to throw down a challenge: if you don’t like the way IQ dominates this thing we call “socioeconomic status” in producing important social outcomes, come up with another means of measuring the environment that produces results you like better.
Such measures can probably be developed—but they will not be ones that the critics of The Bell Curve will like. Suppose, for example, that one can create a good measure of “the degree of presence and competency of a father in the raising of a female child.” That might have a large independent effect on the girl’s chances of giving birth to a baby out of wedlock, whatever her IQ. Suppose that one can create a good measure of “the degree to which a young male is raised in an environment where high moral standards are enforced consistently and firmly.” Again, I can imagine this having a major effect on the likelihood of his becoming a criminal, independently of IQ.
But the same measures that compete with the importance of IQ are going to make starkly clear something that The Bell Curve has already suggested: the kinds of economic and social disadvantages that liberals have traditionally treated as decisive are comparatively unimportant. It may sound like an issue that concerns only the social scientists. Far from it. If I were to nominate the biggest sleeper effect to emerge from The Bell Curve debate, it would be the collapse of SES as a way of interpreting social problems. The rationale for liberal social policy cannot easily do without it.
Raising the question of policy brings us to the last of my four examples of the potential backfire effect of attacks on The Bell Curve—the malleability of IQ. These attacks focused on Chapter 17, “Raising Cognitive Ability,” which chronicles the record of attempts to raise IQ through better nutrition, prenatal care, infant intervention, and preschool and in-school programs. The cries of protest here have been almost as loud as those directed at our chapter on race, and for the reason that Michael Novak identified: by arguing that no easy methods for raising IQ exist, we “destroy hope,” or at least the kind of hope that drives many of the educational and preschool interventions for today’s disadvantaged youth.
We do express hope, actually. Because the environment plays a significant role (40 percent is our ball-park estimate) in determining intelligence—a point The Bell Curve states clearly and often—we say that sooner or later researchers ought to be able to figure out where the levers are. We urge that steps be taken to hasten the day when such knowledge becomes available.
But in examining the current state of knowledge, we also urge realism. Speaking of the most popular idea, intensive intervention for preschoolers, we conclude that “we and everyone else are far from knowing whether, let alone how, any of these projects have increased intelligence.” We also predict that “many ostensibly successful projects will be cited as plain and indisputable evidence that we are willfully refusing to see the light.”
This prediction has been borne out. Thus, the psychologist Richard Nisbett, writing in The Bell Curve Wars,6 a compendium of attacks on our book, accuses us of being “strangely selective” in our reports about the effects of intervention, and wonders if we were “unaware of the very large literature that exists on the topic of early intervention.”
The “very large literature” of which we were unaware? The only study Nisbett mentions that we do not cite is one published in Pediatrics in 1992, which he describes as showing a nine-point IQ advantage at age three for participants in the intervention. Nisbett neglects to acknowledge the unreliability of IQ measures at age three. More decisively, Nisbett is apparently unaware that a follow-up of the same project was published in 1994, when the children were, at age five, old enough for IQ scores to begin to become interpretable. The results? The experimental group had an advantage of just 2.5 points on one measure of IQ and two-tenths of a point on another—both differences being substantively trivial and statistically insignificant.7 In other words, the only study in “the very large literature” that we missed does not contradict our conclusion that such interventions have provided promising leads but no more.
I will make two broader statements. First, in the critiques to date, no one has pointed to a credible study containing evidence of significant, long-term effects on cognitive functioning that we do not consider in The Bell Curve. Second, our account of the record to date is, if anything, generous. The two major intensive interventions for raising the IQ of children at high risk of mental retardation—Project Milwaukee and the Abecedarian Project—have come under intense methodological criticism in the technical literature. We allude to the controversy in the book, but in neither case is the evidence so clear that we could come down hard on the “no-effect” conclusion, and so we do not. If we err, it is in the direction of giving more credit to the interventions than is warranted.
But just as we predicted, many others are nominating “programs that work” that we mysteriously failed to consider. And I am sure that some of them do work, for goals other than raising IQ. We would be the last to suggest that education cannot be made better, or that the socialization of children cannot be improved. But in The Bell Curve we talk about a particular goal: improving the cognitive functioning of human beings over the long term. On that score, the record remains as Herrnstein and I describe it: yes, it can be done, but at present only in modest amounts for most children, usually temporarily, and inconsistently.
In this instance, I have reason to hope that the unintended effect of the attacks on The Bell Curve will be to crystallize a debate that has long needed crystallizing. The cry that “Herrnstein and Murray are too pessimistic” is going to force a great many claims to be laid on the table for examination. Thus, Howard Gardner’s review takes us to task for not citing Lisbeth Schorr’s book, Within Our Reach. I would be delighted to join in a rigorous examination of the programs Schorr describes, and see whether we find among them hard evidence of long-term improvement in cognitive functioning. Let us bring up all the other nominees for inspection as well. In short, let us use the furor over The Bell Curve finally to come to grips with how difficult it is, given the current state of knowledge, for outside interventions to make much difference in the environmental factors that nurture cognitive development.
If outside interventions are not promising, what about the more general phenomenon we label the “Flynn effect” (after the political scientist James Flynn, who has done the most to bring it to public attention), whereby IQ scores have been rising secularly throughout the world since at least the 1930’s? As Thomas Sowell has argued in the American Spectator, the Flynn effect gives reason to conclude that intelligence is malleable after all. Herrnstein and I allude to that possibility without expressing much optimism about it. Moreover, even if the rise in IQ scores could be taken at face value, we would still not know how to intervene so as to manipulate it. In our view (as in Flynn’s), it seems likely that most of the increase in IQ scores over time represents something besides gains in cognitive functioning. But what that something is remains unclear, and this issue is still wide open.
A few weeks after The Bell Curve appeared, a reporter remarked to me that the real message of the book is “Get serious.” I resisted at first, but I now think he had a point.
We never quite say it in so many words, but the book’s subtext is that America’s discussion of social policy since the 1960’s has been carried on in a never-never land where human beings are easily changed and society can eventually become a Lake Wobegon where everyone is above average. The Bell Curve does indeed imply that it is time to get serious about how best to accommodate the huge and often intractable individual differences that shape human society.
This is a counsel not of despair but of realism—including realistic hope. An individual’s g may not be as elastic as one would prefer, but the inventiveness of the species seems to have few bounds. In The Bell Curve, we are matter-of-fact about the limits facing low-IQ individuals in a postindustrial economy, but we also celebrate the capacity of people everywhere in the normal range on the bell curve to live morally autonomous, satisfying lives, if only the system will let them. Accepting the message of The Bell Curve does not mean giving up on improving social policy; it means thinking anew about how progress is to be achieved—and even more fundamentally, thinking anew about how “progress” is to be defined.
The verdict on the influence of The Bell Curve on policy is many years away. For now, the book may have another useful role to play that we did not anticipate. The attacks on it have often read like an unintentional confirmation of our view of the “cognitive elite” as a new caste, complete with high priests, dogmas, heresies, and apostates. They have revealed the extent to which the social science that deals in public policy has in the latter part of the 20th century become self-censored and riddled with taboos—in a word, corrupt. Only the most profound, anguished, and divisive reexamination can change that situation, and it has to be done within the profession. If The Bell Curve achieves nothing else, I will be satisfied if it helps get such a reexamination going.
1 For a survey of the contrasting receptions of The Mismeasure of Man accorded by the press and by the scientific community, see Bernard Davis's “Neo-Lysenkoism, IQ, and the Press” (Public Interest, Fall 1983).
2 For those who want to pursue the technical issues, I recommend John B. Carroll's recent book, Human Cognitive Abilities: A Survey of Factor-Analytic Studies (Cambridge University Press, 1993). Carroll, former director of the L.L. Thurstone Psychometric Laboratory, points out that Thurstone himself came to accept the notion of a general factor in his later years.
3 For examples, see A.R. Jensen, “The g Beyond Factor Analysis,” in R.R. Ronning, J.A. Glover, J.C. Conoley, and J.C. Witt (eds.), The Influence of Cognitive Psychology on Testing, or B. Bower, “Images of Intellect: Brain Scans May Colorize Intelligence,” Science News (October 8, 1994).
4 Intelligence is known to be substantially heritable in human beings as a species, but this does not mean that group differences are also heritable. Despite our explicit treatment of the issue, it is perhaps the single most widespread source of misstatement about The Bell Curve.
5 For Rushton's argument and evidence, see J. Philippe Rushton, Race, Evolution, and Behavior: A Life History Perspective (Transaction Books, 398 pp., $34.95).
6 Edited by Steven Fraser, Basic Books, 216 pp., $10.00 (paperback).
7 Jeanne Brooks-Gunn et al., “Early Intervention in Low-Birth-Weight Premature Infants,” JAMA, vol. 272 (1994).
“The Bell Curve” and Its Critics
Must-Reads from Magazine
Terror is a choice.
Ari Fuld described himself on Twitter as a marketer and social media consultant “when not defending Israel by exposing the lies and strengthening the truth.” On Sunday, a Palestinian terrorist stabbed Fuld at a shopping mall in Gush Etzion, a settlement south of Jerusalem. The Queens-born father of four died from his wounds, but not before he chased down his assailant and neutralized the threat to other civilians. Fuld thus gave the full measure of devotion to the Jewish people he loved. He was 45.
The episode is a grim reminder of the wisdom and essential justice of the Trump administration’s tough stance on the Palestinians.
Start with the Taylor Force Act. The act, named for another U.S. citizen felled by Palestinian terror, stanched the flow of American taxpayer fund to the Palestinian Authority’s civilian programs. Though it is small consolation to Fuld’s family, Americans can breathe a sigh of relief that they are no longer underwriting the PA slush fund used to pay stipends to the family members of dead, imprisoned, or injured terrorists, like the one who murdered Ari Fuld.
No principle of justice or sound statesmanship requires Washington to spend $200 million—the amount of PA aid funding slashed by the Trump administration last month—on an agency that financially induces the Palestinian people to commit acts of terror. The PA’s terrorism-incentive budget—“pay-to-slay,” as Douglas Feith called it—ranges from $50 million to $350 million annually. Footing even a fraction of that bill is tantamount to the American government subsidizing terrorism against its citizens.
If we don’t pay the Palestinians, the main line of reasoning runs, frustration will lead them to commit still more and bloodier acts of terror. But U.S. assistance to the PA dates to the PA’s founding in the Oslo Accords, and Palestinian terrorists have shed American and Israeli blood through all the years since then. What does it say about Palestinian leaders that they would unleash more terror unless we cross their palms with silver?
President Trump likewise deserves praise for booting Palestinian diplomats from U.S. soil. This past weekend, the State Department revoked a visa for Husam Zomlot, the highest-ranking Palestinian official in Washington. The State Department cited the Palestinians’ years-long refusal to sit down for peace talks with Israel. The better reason for expelling them is that the label “envoy” sits uneasily next to the names of Palestinian officials, given the links between the Palestine Liberation Organization, President Mahmoud Abbas’s Fatah faction, and various armed terrorist groups.
Fatah, for example, praised the Fuld murder. As the Jerusalem Post reported, the “al-Aqsa Martyrs Brigades, the military wing of Fatah . . . welcomed the attack, stressing the necessity of resistance ‘against settlements, Judaization of the land, and occupation crimes.’” It is up to Palestinian leaders to decide whether they want to be terrorists or statesmen. Pretending that they can be both at once was the height of Western folly, as Ari Fuld no doubt recognized.
May his memory be a blessing.
Choose your plan and pay nothing for six Weeks!
For a very limited time, we are extending a six-week free trial on both our subscription plans. Put your intellectual life in order while you can. This offer is also valid for existing subscribers wishing to purchase a gift subscription. Click here for more details.
The end of the water's edge.
It was the blatant subversion of the president’s sole authority to conduct American foreign policy, and the political class received it with fury. It was called “mutinous,” and the conspirators were deemed “traitors” to the Republic. Those who thought “sedition” went too far were still incensed over the breach of protocol and the reckless way in which the president’s mandate was undermined. Yes, times have certainly changed since 2015, when a series of Republican senators signed a letter warning Iran’s theocratic government that the Joint Comprehensive Plan of Action (aka, the Iran nuclear deal) was built on a foundation of sand.
The outrage that was heaped upon Senate Republicans for freelancing on foreign policy in the final years of Barack Obama’s administration has not been visited upon former Secretary of State John Kerry, though he arguably deserves it. In the publicity tour for his recently published memoir, Kerry confessed to conducting meetings with Iranian Foreign Minister Javad Zarif “three or four times” as a private citizen. When asked by Fox News Channel’s Dana Perino if Kerry had advised his Iranian interlocutor to “wait out” the Trump administration to get a better set of terms from the president’s successor, Kerry did not deny the charge. “I think everybody in the world is sitting around talking about waiting out President Trump,” he said.
Think about that. This is a former secretary of state who all but confirmed that he is actively conducting what the Boston Globe described in May as “shadow diplomacy” designed to preserve not just the Iran deal but all the associated economic relief and security guarantees it provided Tehran. The abrogation of that deal has put new pressure on the Iranians to liberalize domestically, withdraw their support for terrorism, and abandon their provocative weapons development programs—pressures that the deal’s proponents once supported.
“We’ve got Iran on the ropes now,” said former Democratic Sen. Joe Lieberman, “and a meeting between John Kerry and the Iranian foreign minister really sends a message to them that somebody in America who’s important may be trying to revive them and let them wait and be stronger against what the administration is trying to do.” This is absolutely correct because the threat Iran poses to American national security and geopolitical stability is not limited to its nuclear program. The Iranian threat will not be neutralized until it abandons its support for terror and the repression of its people, and that will not end until the Iranian regime is no more.
While Kerry’s decision to hold a variety of meetings with a representative of a nation hostile to U.S. interests is surely careless and unhelpful, it is not uncommon. During his 1984 campaign for the presidency, Jesse Jackson visited the Soviet Union and Cuba to raise his own public profile and lend credence to Democratic claims that Ronald Reagan’s confrontational foreign policy was unproductive. House Speaker Jim Wright’s trip to Nicaragua to meet with the Sandinista government was a direct repudiation of the Reagan administration’s support for the country’s anti-Communist rebels. In 2007, as Bashar al-Assad’s government was providing material support for the insurgency in Iraq, House Speaker Nancy Pelosi sojourned to Damascus to shower the genocidal dictator in good publicity. “The road to Damascus is a road to peace,” Pelosi insisted. “Unfortunately,” replied George W. Bush’s national security council spokesman, “that road is lined with the victims of Hamas and Hezbollah, the victims of terrorists who cross from Syria into Iraq.”
Honest observers must reluctantly conclude that the adage is wrong. American politics does not, in fact, stop at the water’s edge. It never has, and maybe it shouldn’t. Though it may be commonplace, American political actors who contradict the president in the conduct of their own foreign policy should be judged on the policies they are advocating. In the case of Iran, those who seek to convince the mullahs and their representatives that repressive theocracy and a terroristic foreign policy are dead-ends are advancing the interests not just of the United States but all mankind. Those who provide this hopelessly backward autocracy with the hope that America’s resolve is fleeting are, as John Kerry might say, on “the wrong side of history.”
Choose your plan and pay nothing for six Weeks!
For a very limited time, we are extending a six-week free trial on both our subscription plans. Put your intellectual life in order while you can. This offer is also valid for existing subscribers wishing to purchase a gift subscription. Click here for more details.
Michael Wolff is its Marquis de Sade. Released on January 5, 2018, Wolff’s Fire and Fury became a template for authors eager to satiate the growing demand for unverified stories of Trump at his worst. Wolff filled his pages with tales of the president’s ignorant rants, his raging emotions, his television addiction, his fast-food diet, his unfamiliarity with and contempt for Beltway conventions and manners. Wolff made shocking insinuations about Trump’s mental state, not to mention his relationship with UN ambassador Nikki Haley. Wolff’s Trump is nothing more than a knave, dunce, and commedia dell’arte villain. The hero of his saga is, bizarrely, Steve Bannon, who in Wolff’s telling recognized Trump’s inadequacies, manipulated him to advance a nationalist-populist agenda, and tried to block his worst impulses.
Wolff’s sources are anonymous. That did not slow down the press from calling his accusations “mind-blowing” (Mashable.com), “wild” (Variety), and “bizarre” (Entertainment Weekly). Unlike most pornographers, he had a lesson in mind. He wanted to demonstrate Trump’s unfitness for office. “The story that I’ve told seems to present this presidency in such a way that it says that he can’t do this job, the emperor has no clothes,” Wolff told the BBC. “And suddenly everywhere people are going, ‘Oh, my God, it’s true—he has no clothes.’ That’s the background to the perception and the understanding that will finally end this, that will end this presidency.”
Nothing excites the Resistance more than the prospect of Trump leaving office before the end of his term. Hence the most stirring examples of Resistance Porn take the president’s all-too-real weaknesses and eccentricities and imbue them with apocalyptic significance. In what would become the standard response to accusations of Trumpian perfidy, reviewers of Fire and Fury were less interested in the truth of Wolff’s assertions than in the fact that his argument confirmed their preexisting biases.
Saying he agreed with President Trump that the book is “fiction,” the Guardian’s critic didn’t “doubt its overall veracity.” It was, he said, “what Mailer and Capote once called a nonfiction novel.” Writing in the Atlantic, Adam Kirsch asked: “No wonder, then, Wolff has written a self-conscious, untrustworthy, postmodern White House book. How else, he might argue, can you write about a group as self-conscious, untrustworthy, and postmodern as this crew?” Complaining in the New Yorker, Masha Gessen said Wolff broke no new ground: “Everybody” knew that the “president of the United States is a deranged liar who surrounded himself with sycophants. He is also functionally illiterate and intellectually unsound.” Remind me never to get on Gessen’s bad side.
What Fire and Fury lacked in journalistic ethics, it made up in receipts. By the third week of its release, Wolff’s book had sold more than 1.7 million copies. His talent for spinning second- and third-hand accounts of the president’s oddity and depravity into bestselling prose was unmistakable. Imitators were sure to follow, especially after Wolff alienated himself from the mainstream media by defending his innuendos about Haley.
It was during the first week of September that Resistance Porn became a competitive industry. On the afternoon of September 4, the first tidbits from Bob Woodward’s Fear appeared in the Washington Post, along with a recording of an 11-minute phone call between Trump and the white knight of Watergate. The opposition began panting soon after. Woodward, who like Wolff relies on anonymous sources, “paints a harrowing portrait” of the Trump White House, reported the Post.
No one looks good in Woodward’s telling other than former economics adviser Gary Cohn and—again bizarrely—the former White House staff secretary who was forced to resign after his two ex-wives accused him of domestic violence. The depiction of chaos, backstabbing, and mutual contempt between the president and high-level advisers who don’t much care for either his agenda or his personality was not so different from Wolff’s. What gave it added heft was Woodward’s status, his inviolable reputation.
“Nothing in Bob Woodward’s sober and grainy new book…is especially surprising,” wrote Dwight Garner at the New York Times. That was the point. The audience for Wolff and Woodward does not want to be surprised. Fear is not a book that will change minds. Nor is it intended to be. “Bob Woodward’s peek behind the Trump curtain is 100 percent as terrifying as we feared,” read a CNN headline. “President Trump is unfit for office. Bob Woodward’s ‘Fear’ confirms it,” read an op-ed headline in the Post. “There’s Always a New Low for the Trump White House,” said the Atlantic. “Amazingly,” wrote Susan Glasser in the New Yorker, “it is no longer big news when the occupant of the Oval Office is shown to be callous, ignorant, nasty, and untruthful.” How could it be, when the press has emphasized nothing but these aspects of Trump for the last three years?
The popular fixation with Trump the man, and with the turbulence, mania, frenzy, confusion, silliness, and unpredictability that have surrounded him for decades, serves two functions. It inoculates the press from having to engage in serious research into the causes of Trump’s success in business, entertainment, and politics, and into the crises of borders, opioids, stagnation, and conformity of opinion that occasioned his rise. Resistance Porn also endows Trump’s critics, both external and internal, with world-historical importance. No longer are they merely journalists, wonks, pundits, and activists sniping at a most unlikely president. They are politically correct versions of Charles Martel, the last line of defense preventing Trump the barbarian from enacting the policies on which he campaigned and was elected.
How closely their sensational claims and inflated self-conceptions track with reality is largely beside the point. When the New York Times published the op-ed “I am Part of the Resistance Inside the Trump Administration,” by an anonymous “senior official” on September 5, few readers bothered to care that the piece contained no original material. The author turned policy disagreements over trade and national security into a psychiatric diagnosis. In what can only be described as a journalistic innovation, the author dispensed with middlemen such as Wolff and Woodward, providing the Times the longest background quote in American history. That the author’s identity remains a secret only adds to its prurient appeal.
“The bigger concern,” the author wrote, “is not what Mr. Trump has done to the presidency but what we as a nation have allowed him to do to us.” Speak for yourself, bud. What President Trump has done to the Resistance is driven it batty. He’s made an untold number of people willing to entertain conspiracy theories, and to believe rumor is fact, hyperbole is truth, self-interested portrayals are incontrovertible evidence, credulity is virtue, and betrayal is fidelity—so long as all of this is done to stop that man in the White House.
Choose your plan and pay nothing for six Weeks!
Review of 'Stanley Kubrick' By Nathan Abrams
Except for Stanley Donen, every director I have worked with has been prone to the idea, first propounded in the 1950s by François Truffaut and his tendentious chums in Cahiers du Cinéma, that directors alone are authors, screenwriters merely contingent. In singular cases—Orson Welles, Michelangelo Antonioni, Woody Allen, Kubrick himself—the claim can be valid, though all of them had recourse, regular or occasional, to helping hands to spice their confections.
Kubrick’s variety of topics, themes, and periods testifies both to his curiosity and to his determination to “make it new.” Because his grades were not high enough (except in physics), this son of a Bronx doctor could not get into colleges crammed with returning GIs. The nearest he came to higher education was when he slipped into accessible lectures at Columbia. He told me, when discussing the possibility of a movie about Julius Caesar, that the great classicist Moses Hadas made a particularly strong impression.
While others were studying for degrees, solitary Stanley was out shooting photographs (sometimes with a hidden camera) for Look magazine. As a movie director, he often insisted on take after take. This gave him choices of the kind available on the still photographer’s contact sheets. Only Peter Sellers and Jack Nicholson had the nerve, and irreplaceable talent, to tell him, ahead of shooting, that they could not do a particular scene more than two or three times. The energy to electrify “Mein Führer, I can walk” and “Here’s Johnny!” could not recur indefinitely. For everyone else, “Can you do it again?” was the exhausting demand, and it could come close to being sadistic.
The same method could be applied to writers. Kubrick might recognize what he wanted when it was served up to him, but he could never articulate, ahead of time, even roughly what it was. Picking and choosing was very much his style. Cogitation and opportunism went together: The story goes that he attached Strauss’s Blue Danube to the opening sequence of 2001 because it happened to be playing in the sound studio when he came to dub the music. Genius puts chance to work.
Until academics intruded lofty criteria into cinema/film, the better to dignify their speciality, Alfred Hitchcock’s attitude covered most cases: When Ingrid Bergman asked for her motivation in walking to the window, Hitch replied, fatly, “Your salary.” On another occasion, told that some scene was not plausible, Hitch said, “It’s only a movie.” He did not take himself seriously until the Cahiers du Cinéma crowd elected to make him iconic. At dinner, I once asked Marcello Mastroianni why he was so willing to play losers or clowns. Marcello said, “Beh, cinema non e gran’ cosa” (cinema is no big deal). Orson Welles called movie-making the ultimate model-train set.
That was then; now we have “film studies.” After they moved in, academics were determined that their subject be a very big deal indeed. Comedy became no laughing matter. In his monotonous new book, the film scholar Nathan Abrams would have it that Stanley Kubrick was, in essence, a “New York Jewish intellectual.” Abrams affects to unlock what Stanley was “really” dealing with, in all his movies, never mind their apparent diversity. It is declared to be, yes, Yiddishkeit, and in particular, the Holocaust. This ground has been tilled before by Geoffrey Cocks, when he argued that the room numbers in the empty Overlook Hotel in The Shining encrypted references to the Final Solution. Abrams would have it that even Barry Lyndon is really all about the outsider seeking, and failing, to make his awkward way in (Gentile) Society. On this reading, Ryan O’Neal is seen as Hannah Arendt’s pariah in 18th-century drag. The movie’s other characters are all engaged in the enjoyment of “goyim-naches,” an expression—like menschlichkayit—he repeats ad nauseam, lest we fail to get the stretched point.
Theory is all when it comes to the apotheosis of our Jew-ridden Übermensch. So what if, in order to make a topic his own, Kubrick found it useful to translate its logic into terms familiar to him from his New York youth? In Abrams’s scheme, other mundane biographical facts count for little. No mention is made of Stanley’s displeasure when his 14-year-old daughter took a fancy to O’Neal. The latter was punished, some sources say, by having Barry’s voiceover converted from first person so that Michael Hordern would displace the star as narrator. By lending dispassionate irony to the narrative, it proved a pettish fluke of genius.
While conning Abrams’s volume, I discovered, not greatly to my chagrin, that I am the sole villain of the piece. Abrams calls me “self-serving” and “unreliable” in my accounts of my working and personal relationship with Stanley. He insinuates that I had less to do with Eyes Wide Shut than I pretend and that Stanley regretted my involvement. It is hard for him to deny (but convenient to omit) that, after trying for some 30 years to get a succession of writers to “crack” how to do Schnitzler’s Traumnovelle, Kubrick greeted my first draft with “I’m absolutely thrilled.” A source whose anonymity I respect told me that he had never seen Stanley so happy since the day he received his first royalty check (for $5 million) for 2001. No matter.
Were Abrams (the author also of a book as hostile to Commentary as this one is to me) able to put aside his waxed wrath, he might have quoted what I reported in my memoir Eyes Wide Open to support his Jewish-intellectual thesis. One day, Stanley asked me what a couple of hospital doctors, walking away with their backs to the camera, would be talking about. We were never going to hear or care what it was, but Stanley—at that early stage of development—said he wanted to know everything. I said, “Women, golf, the stock market, you know…”
“Couple of Gentiles, right?”
“That’s what you said you wanted them to be.”
“Those people, how do we ever know what they’re talking about when they’re alone together?”
“Come on, Stanley, haven’t you overheard them in trains and planes and places?”
Kubrick said, “Sure, but…they always know you’re there.”
If Stanley was even halfway serious, Abrams’s banal thesis (that, despite decades of living in England, he never escaped the Old Country) might have been given some ballast.
Now, as for Stanley Kubrick’s being an “intellectual.” If this implies membership in some literary or quasi-philosophical elite, there’s a Jewish joke to dispense with it. It’s the one about the man who makes a fortune, buys himself a fancy yacht, and invites his mother to come and see it. He greets her on the gangway in full nautical rig. She says, “What’s with the gold braid already?”
“Mama, you have to realize, I’m a captain now.”
She says, “By you, you’re a captain, by me, you’re a captain, but by a captain, are you a captain?”
As New York intellectuals all used to know, Karl Popper’s definition of bad science, and bad faith, involves positing a theory and then selecting only whatever data help to furnish its validity. The honest scholar makes it a matter of principle to seek out elements that might render his thesis questionable.
Abrams seeks to enroll Lolita in his obsessive Jewish-intellectual scheme by referring to Peter Arno, a New Yorker cartoonist whom Kubrick photographed in 1949. The caption attached to Kubrick’s photograph in Look asserted that Arno liked to date “fresh, unspoiled girls,” and Abrams says this “hint[s] at Humbert Humbert in Lolita.” Ah, but Lolita was published, in Paris, in 1955, six years later. And how likely is it, in any case, that Kubrick wrote the caption?
The film of Lolita is unusual for its garrulity. Abrams insists that the sinister Semitic aspect of both Clare Quilty and Humbert Humbert drew Kubrick like a moth to a flame; this is ridiculous camouflage of the commercial opportunism that led Stanley to seek to film the most notorious novel of the day, while fudging its scandalous eroticism.
That said, in my view, The Killing, Paths of Glory, Barry Lyndon, and A Clockwork Orange were and are sans pareil. The great French poet Paul Valéry wrote of “the profundity of the surface” of a work of art. Add D.H. Lawrence’s “never trust the teller, trust the tale,” and you have two authoritative reasons for looking at or reading original works of art yourself and not relying on academic exegetes—especially when they write in the solemn, sometimes ungrammatical style of Professor Abrams, who takes time out to tell those of us at the back of his class that padre “is derived from the Latin pater.”
Abrams writes that I “claim” that I was told to exclude all overt reference to Jews in my Eyes Wide Shut screenplay, with the fatuous implication that I am lying. I am again accused of “claiming” to have given the name Ziegler to the character played by Sydney Pollack, because I once had a (quite famous) Hollywood agent called Evarts Ziegler. So I did. The principal reason for Abrams to doubt my veracity is that my having chosen the name renders irrelevant his subsequent fanciful digression on the deep, deep meanings of the name Ziegler in Jewish lore; hence he wishes to assign the naming to Kubrick. Pop goes another wished-for proof of Stanley’s deep and scholarly obsession with Yiddishkeit.
Abrams would be a more formidable enemy if he could turn a single witty phrase or even abstain from what Karl Kraus called mauscheln, the giveaway jargon of Jewish journalists straining to pass for sophisticates at home in Gentile circles. If you choose, you can apply, online, for screenwriting lessons from Nathan Abrams, who does not have a single cinematic credit to his name. It would be cheaper, and wiser, to look again, and then again, at Kubrick’s masterpieces.