My favorite moment on Election Night—and believe me, I did not have many of them, this I can tell you—came when John King, a reporter for CNN, stood before a giant electoral map. It was hemorrhaging red, as one battleground state after another fell to Donald Trump. It was supposed to be hemorrhaging blue. For months, everybody in the world of Washington politics had been telling one another that most of those states were safely Hillary Clinton’s. Everybody.
“Clearly,” King said, “for the last several weeks we have not been having a reality-based conversation.”
He was right. What we’d been having was a data-based conversation. The difference between the two, it turns out, is very large. Huuuuuge, you could say.
And then, in the first few days in Washington after the Trumpian triumph, I heard a recurring phrase: “Nobody knows anything.” It is a favorite saying of know-it-alls when they hope to strike a humble pose. I say it all the time, for example. In case you don’t know (ahem), the phrase comes from a Hollywood memoir by the screenwriter William Goldman.
When it comes to making a hit movie, Goldman wrote, “not one person in the entire motion picture field knows for a certainty what’s going to work.” This holds true even after millions of dollars are spent trying to anticipate a movie’s box-office appeal through focus groups, quantitative historical analysis, statistical modeling and the other tools of modern social science. Goldman summarized the futility with his famous phrase: “Nobody knows anything.”
It’s not a saying to be taken literally, of course, even in Washington. For all our faults, the chattering class in Washington knows lots of things, even beyond the price of an amusing Pinot at Arrowine or the best way to flatter the maître d’ at Pineapple and Pearls. What Washington does not know is how much it does not know. Into this unacknowledged maw of ignorance, policymakers and political consultants and journalists pour the findings of social science—which is to say, data. The word itself has shrunk from the plural to the singular, to make it easier to toss around.
In turning to social science, in making it the source of knowledge about the behavior of human beings, the political world is not alone. Indeed, it came rather late to the game. For a generation now, every businessman in the developed world has been bragging about how big his data is. Where he once might have trusted his intuition and experience and acquired wisdom to sense the movement of a marketplace, he now hires bespectacled, poorly groomed youngsters with sloping shoulders to produce algorithms that can be pasted into PowerPoint slides, which will in turn wow the shareholders and board of directors. Every field of American life—preschool education, baseball, book publishing—has succumbed to the conceits of social science.
The arrival of Barack Obama brought the reliance on data in politics and government policy to its present level of intensity. His election in 2008 became Exhibit A. His campaign staff’s obsession with “quantitative analysis” was plausibly credited with finding pockets of previously ignored Democrats and getting them to the polls. The magic failed four years later when Obama’s turnout actually fell, but no matter. The data delusion is hard to shake. “Computer algorithms,” reported Politico this fall, “underlie nearly all of the Clinton campaign’s decisions.” A team of 60 mathematicians dictated the precise wording of email solicitations, the size of personalized ads on Facebook, and, needless to say, the candidate’s travel schedule. Of course, the candidate lost. With another 60 nerds, she might have won in a landslide.
The data delusion grips us even as the failure of data lies all around us. A politician’s reliance on polls is matched only by the well-documented capriciousness of polling. Every quarter, government economists make forecasts that are duly reported by journalists and never come true—a fact that journalists report much less often. No policymaker, no matter how consumed by data, foresaw the greatest social calamity of our young century, the financial collapse of 2007 and 2008. And the collapse itself was at least in part a failure of “quants” who had come to believe their statistical models allowed them to manipulate markets to their advantage. Meanwhile, for nearly a decade, the wizards at the Federal Reserve have used the most impeccable data to orchestrate a robust economic recovery that always seems a few quarters away.
Now, though, even the most data-driven geek should find the evidence impossible to ignore—because the failure is being quantified. Last year, in an epochal development science journalists covered for a day or two and then promptly forgot, a large team of social psychologists conducted experiments to replicate the most commonly held findings in their field, precisely the kind that guide behavioral economics and other brainy fads. Fewer than 40 of the 100 attempts confirmed the findings. Other spectacular failures in replication have since followed. Social science is riddled with methodological weaknesses, but most of them reduce to this: The unpredictable flow of human life can be numerically captured in only the crudest terms, if at all. The resemblance between any finding of social psychology and the behavior of real human beings in the real world is likely to be mere happenstance.
“Data died tonight,” tweeted the Republican consultant (and jolly Trump opponent) Mike Murphy after Trump’s win. We should be so lucky. On TV and in the political twitterverse, the complaints and chagrin came in waves: first about how mistaken the polls had been in the weeks before Election Day, then about how misleading the exit polls had been on Election Day itself. Within twelve hours, the same pundits and analysts were offering the same exit polls to prove their authoritative conclusions about the electorate’s demographics and voting patterns. The thirst cannot be slaked, the delusion cannot be overcome.
If we want to engage in “reality-based” rather than “data-based” conversations—a big if—we should keep in mind a tale told by the liberal economist Kenneth Arrow. He served in the Army during World War II, working as a statistician and weather specialist in Europe.
“Some of my colleagues had the responsibility of preparing long-range weather forecasts, i.e., for the following month,” Arrow wrote. “The statisticians among us subjected these forecasts to verification and found they differed in no way from chance.”
Arrow and his colleagues, surprised and alarmed, sent word up through the ranks to the commanding officer, alerting him to their important discovery. After days of waiting, the word at last came down from a high-ranking aide.
“The Commanding General is well aware that the forecasts are no good,” the aide told Arrow. “However, he needs them for planning purposes.”