Das Kapital Lite for the 21st century.
One: Piketty’s Charge
If politics is war by other means, then books often serve as the artillery. Take Herbert Croly’s The Promise of American Life, the foundational text of modern American liberalism. Its publication in 1909 was heralded as a revelation, when in fact it was more a synthesis of notions that had been floating around in the political ether for a decade. Theodore Roosevelt loved it. “I do not know when I have read a book which I felt profited me as much,” T.R. wrote to Croly. “I shall use your ideas freely in speeches I intend to make. I know you won’t object to my doing so, because, my dear sir, I can see that your purpose is to do your share in any way for the betterment of our national life.” But what Roosevelt loved most was that Croly was saying exactly what T.R. wanted to hear.
Christopher Lasch noted that “Croly did not so much influence Roosevelt as read into his career an intellectual coherence which Roosevelt then adopted as his own view of things.” Others found Croly similarly useful. Walter Lippmann dubbed him the “first important political philosopher” of the 20th century. Felix Frankfurter hailed The Promise of American Life as the “most powerful single contribution to progressive thinking.” Like Ferris Bueller, leaping in front of the parade and pretending he was leading it, Croly got out in front of an idea whose time had come.
This happens every decade or so. Writers as varied as the economic historians Charles and Mary Beard, the public intellectual Walter Lippmann, the economists Friedrich Hayek and John Kenneth Galbraith, the sociologist David Riesman, the legal scholar Charles A. Reich, the biologist Paul Ehrlich, the philosopher Allan Bloom, the historian of international relations Paul Kennedy, and political scientist Francis Fukuyama all captured sociological lightning in a bottle by publishing bestselling books that imposed some coherence on the national anxieties and ambitions of their moment in time.
Revisiting many of these works can lead to bewilderment. What on earth could all the fuss have been about? Reich’s The Greening of America is a miserable mess, replete with hippy-dippy nonsense and messianic gobbledygook that might have made some sense in 1970 but seems hilariously dated now. Riesman’s The Lonely Crowd divided people into psycho-sociological categories—inner-directed, tradition-directed, other-directed—that almost seem parodic today. Hayek’s The Road to Serfdom (1944) holds up much better, but unlike his other works, its significance is more sociological than analytical. Similarly, the analysis in Galbraith’s The Affluent Society (1958) is more valuable as an insight into the hubris of what would become Great Society liberalism than as a serious empirical guide to political economy. As for The Promise of American Life itself, it is a strange and tedious read, though fortunately for Croly’s reputation, few people actually read it anymore.
It remains to be seen what history will make of Thomas Piketty’s Capital in the Twenty-First Century, which was released in America in April. But it was so perfectly timed that it joined the ranks of those lightning-in-a-bottle books even before its publication. Piketty purports to offer a “general theory of capitalism,” in the words of the economist Tyler Cowen. His theory is that capitalism inherently leads to ever-widening income inequality that can be addressed only through heavy taxes on accumulated wealth. In December 2013, President Obama prepared the intellectual battlefield for Piketty by declaring that income inequality was now “the defining challenge of our time.” As the enormous and dense tome finally settled in at the top of the charts, Hillary Clinton previewed a presidential campaign stump speech of sorts, which largely focused on Piketty’s core theme: inequality. Even the pope got in on the act. Adding a religious dimension to Piketty’s theories on Twitter, he declared in late April that “inequality is the root of social evil” and called for “the legitimate redistribution of economic benefits by the State.”
In short, Capital in the Twenty-First Century provides some coherence to an idea whose time has come. And for those who already agreed with its thesis from reading the introduction, it has become not so much The Promise of American Life but the Democracy in America of our time. Indeed, no doubt because Piketty is French, the comparisons to Alexis de Tocqueville have been ubiquitous. In the words of Yale’s Jacob Hacker and Berkeley’s Paul Pierson: “Like Tocqueville, Piketty has given us a new image of ourselves.” They add: “This time, it’s one we should resist, not welcome.”
According to Boris Kachka of New York magazine, “One hundred and eighty years after Alexis de Tocqueville came back to France with the news that he’d found true égalité in America, his countryman has arrived on our shores to deliver the opposite news.”
Taken literally, the comparison between the two writers is ridiculous. Alexis de Tocqueville spent nine months (May 1831 to February 1832) traveling throughout America, talking to politicians, laborers, clergy, businessmen, jailers, and convicts. His journeys took him through most of the country, from the great population centers of the East to Georgia, Alabama, Tennessee, Louisiana, and the wilderness of Michigan. He sailed down the Mississippi River. Piketty, on the other hand, has seen very little of America. The 43-year-old French economist studied and taught briefly at MIT 20 years ago. (He describes it as a “university near Boston.”) By his own admission, until his book tour, he had barely left Paris since he was 25.1
There is one way in which the comparison to Tocqueville is revealing. Democracy in America’s lasting popularity derives from the fact that it describes a scrappy, idealistic, energetic America that many wanted and still want to believe in. Similarly, Piketty’s vision of America is one that his admirers very much want to believe is true—even though they say they do not want America to resemble Piketty’s description of it. This is why virtually none of his favorable reviewers has responded to his arguments with depressed and rueful shock at his deeply dystopic conclusions. Instead, they have greeted the publication with a triumphant “I told you so.”2 Capital in the Twenty-First Century is the artillery shell his supporters have long been waiting for to begin the war against “economic inequality.”
Two: Piketty’s Claim
Piketty’s overarching argument is that Karl Marx was essentially correct when he identified what might be called the original sin of capitalism: the problem of “infinite accumulation.” This is the idea that the rich get richer and the poor get poorer. According to Piketty, it’s what happened when capitalism was left to its own devices at the end of the 19th century, and it’s what is about to happen in the United States and Europe in the 21st. There was, he says, a brief flattening-out of inequality in the middle of the 20th century, thanks to the devastation of two world wars, which destroyed enormous amounts of wealth and fueled huge spikes in taxation. But otherwise the story has remained the same.
Do the dynamics of private capital accumulation inevitably lead to the concentration of wealth in ever fewer hands, as Karl Marx believed in the nineteenth century? Or do the balancing forces of growth, competition, and technological progress lead in later stages of development to reduced inequality and greater harmony among the classes…?
Given this either/or, Piketty essentially sides with Marx. I say “essentially” because there is much bickering about whether it is fair or right to call Piketty a Marxist. Paul Krugman, for instance, finds the idea ridiculous, despite the fact that the very title of the book is an homage to Marx’s Das Kapital and that Piketty says Marx asked the right questions even if some of his answers had “limitations.” Piketty himself rejects the Marxist label, presents his arguments in neoclassical terms, and describes himself as a social democrat.
Others have called Piketty’s approach “soft Marxism.” But with apologies to Stephen Colbert, I’d call it “Marxiness.” Piketty attempts to avoid Marx’s scientistic messianism by proffering caveats like “one should be wary of economic determinism.” Yes, one should. But Piketty has a grating habit of offering seemingly deflating qualifiers and “to be sures” only to proceed, in the manner of an unreconstructed Marxist, to argue as if science and objective truth are unquestionably on his side.
He concludes that the problem with capitalism is that “there is no natural, spontaneous process to prevent destabilizing, inegalitarian forces from prevailing permanently.” Rather, capitalism is structurally (or objectively, as the old Marxists might say) inegalitarian. It is a rigged casino where the winners not only keep winning but don’t deserve their chips in the first place.
His proof comes in the form of r > g, already the most famous mathematical formula since E=mc². Here r is the rate of return on capital (investments, interest on savings, rent from land), and g is the growth rate of the broader economy. The problem, according to Piketty, is that the return on capital persistently outruns the growth of the broader economy. He postulates that if capital grows faster than national income, specifically income earned through wages, over time the capitalists will come to own everything unless something stops that from happening.
Piketty dismisses the claim that the free market self-corrects. He essentially rejects the belief that the law of diminishing returns applies to capital. Most economists hold that if there’s too much capital chasing too few opportunities for investment, the return on capital will inevitably drop. Such corrections, in his view, are fleeting shifts in the current of an ever-rising tide of inequality. And even when they occur, they don’t amount to much:
Never mind that such adjustments might be unpleasant or complicated; they might also take decades, during which landlords and oil well owners might accumulate claims on the rest of the population so extensive that they could easily own everything that can be owned, including rural real estate and bicycles, once and for all. As always, the worst is never certain to arrive. It is much too soon to warn readers that by 2050 they may be paying rent to the emir of Qatar.
Piketty asserts that the return on capital (the r in r > g) holds steady at about 5 percent over time. This means that once you’re rich, you keep getting richer thanks to the miracle of compound interest. Inherited wealth, or old money, expands forever—or, as Piketty puts it in a memorable line, “the past devours the future.”
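The mechanics of that claim are simple enough to sketch in a few lines. The following is an illustrative toy calculation, not Piketty’s model; the starting ratio of three-to-one and the 5 and 1.5 percent rates are assumptions chosen only to show the direction of the trend when every franc of return is reinvested.

```python
# Toy sketch of the r > g dynamic (illustrative numbers, not Piketty's model):
# compound a capital stock at r = 5% while national income grows at g = 1.5%,
# and watch the capital-to-income ratio climb without limit.

def capital_to_income_ratio(years, r=0.05, g=0.015, k0=3.0):
    """Start with capital worth 3x national income; return the ratio after
    `years` if all capital returns are reinvested and income grows at g."""
    capital, income = k0, 1.0
    for _ in range(years):
        capital *= 1 + r   # returns fully reinvested, never consumed
        income *= 1 + g    # wages and output grow more slowly
    return capital / income

print(round(capital_to_income_ratio(0), 1))    # 3.0
print(round(capital_to_income_ratio(50), 1))   # roughly quintuples
print(round(capital_to_income_ratio(100), 1))
```

The point of the sketch is the hidden assumption it makes explicit: the divergence only runs away if the returns are never spent, a premise the critics discussed below attack directly.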
Piketty’s occasional concessions to uncertainty about his most dire predictions illustrate one reason he shouldn’t be considered an orthodox Marxist. He has no grand Hegelian theory of the ineluctable progression of History with a capital H. But who needs dialectical materialism when you have algebra?
Indeed, his primary claim to originality comes from a statistical tendency he discerns through masses of data, according to which the free market yields a society in which the rich not only get richer but get richer faster than everyone else and ultimately leave the poor behind. This is, he says, the “central contradiction of capitalism.” He goes on:
Once constituted, capital reproduces itself faster than output increases. The past devours the future. The consequences for the long-term dynamics of the wealth distribution are potentially terrifying, especially when one adds that the divergence in wealth distribution is occurring on a global scale.
According to Piketty, we are not only returning to levels of income inequality not seen since the 19th century. We are also looking at a potentially eternal future where the overclass rules at the expense of the ever-growing underclasses. It’s economic Morlocks versus Eloi all the way down.
Matters would appear to be hopeless. But not to worry. Piketty has hope. What gives him hope, and what excites so many of his fans, is that this central contradiction of capitalism can be overpowered by the state.
His key proposal is what he calls a “global wealth tax” of 5 to 10 percent off the top for billionaires, 2 percent for people worth 5 million euros or more, and 1 percent for millionaires below that. He also advocates a top marginal tax rate of 80 percent. And that ain’t the half of it—literally. It’s more like less than a quarter of it. “If one follows Piketty in assuming a normal return on capital of 4 percent for the 21st century,” Stefan Homburg of Leibniz University Hannover has written, “a 10 percent tax on wealth is equivalent to a 250 percent tax on the resulting capital income. Combined with the 80 percent income tax, taxpayers would face effective tax rates of up to 330 percent.”
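Homburg’s arithmetic is easy to verify. A quick sketch, using the 4 percent return and the tax rates quoted above; the $1 million wealth figure is an invented illustration:

```python
# Back-of-the-envelope check of Homburg's point (the wealth figure is
# illustrative): with a 4% return, a 10% annual wealth tax plus an 80%
# income tax consumes over three times the income the capital throws off.

def effective_rate_on_capital_income(wealth=1_000_000, r=0.04,
                                     wealth_tax=0.10, income_tax=0.80):
    income = r * wealth                               # 40,000 on 1M
    tax = wealth_tax * wealth + income_tax * income   # 100,000 + 32,000
    return tax / income                               # tax / capital income

print(f"{effective_rate_on_capital_income():.0%}")    # 330%
```

Note that the 250 percent figure is just the ratio of the wealth-tax rate to the return (0.10 / 0.04); the income tax adds the remaining 80 points.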
How and by whom this money would be collected is kept rather vague, in part because even Piketty concedes that this proposal is “utopian.” More interesting, he is not especially concerned about what to do with these revenues. Leveling the gap between the rich and the rest of us is a much larger priority for him than lifting up the poor. “Confiscatory tax rates on incomes deemed to be indecent” are worthwhile in their own right, he says. Such rates, which reached 90 percent in the United States at one point, were an “impressive U.S. innovation of the interwar years.” He says this even though he concedes that a high marginal tax rate on extremely high incomes actually “brings in almost nothing” (because the rich would simply stop taking proceeds in taxable form). He does concede in a wonderful understatement at the end of the book that “before we can learn to efficiently organize public financing equivalent to two-thirds to three-quarters of national income”—what his desired tax rates would amount to—“it would be good to improve the organization and operation of the existing public sector.” There’s a useful insight.
His comfort with punitive taxation is reminiscent of Barack Obama’s response in 2008 when asked if he would support a higher tax on capital gains even if he knew it would bring in less revenue. Obama answered that he would still favor raising such taxes for “purposes of fairness.” In short, some people don’t deserve the money they have, and the government should take it from them.
Is Piketty right about the fundamental contradiction of capitalism? And, if he is, how much should we care?
The first question has long been debated by nearly every major economist in the Western world. The second has received much, much less attention. Let’s focus on the first for now.
Three: Piketty’s Data
The general consensus even from very critical economists—and there are many—is that Piketty and his colleagues (chiefly his frequent writing partner, Berkeley economist Emmanuel Saez) have masterfully collected an amazing amount of data that describe some very interesting trends over the past 300 years. They have made massive databases with information culled from tax returns, estate records, and virtually every other source they could find. They plausibly argue that such records are more valuable and accurate than conventional surveys because the sample of responses from the wealthiest individuals is simply too small to give a clear picture of inequality. Capital in the Twenty-First Century is largely a repackaging of that work. But for Piketty and his fans, it amounts to nothing less than the spread of the Big Data revolution to economic history. Maybe so. But his analysis of those data is far more controversial.
One reason for the controversy is that Piketty oversimplifies the concept of capital. He depicts it “as a growing, homogeneous blob which, at least under peaceful conditions, ends up overshadowing other economic variables,” in the words of economist Tyler Cowen. But different kinds of capital have different rates of return. Right now Treasury bills yield barely better than a 1 percent return, while equities historically have a return of about 7 percent. As Cowen notes in an essay for Foreign Affairs, this alone reveals a certain blind spot in Piketty’s analysis: the hugely significant role of risk-taking in a free-market economy.
The most common and strongest complaint is that Piketty’s arrangement of the data paints a false picture of rising inequality in the United States. Harvard’s Martin Feldstein noted in the Wall Street Journal that Piketty fails to take into account important—albeit arcane—changes in the tax code that have caused business income to be counted on personal tax returns. “This transformation occurred gradually over many years as taxpayers changed their behavior and their accounting practices to reflect the new rules,” Feldstein writes. As an example, “the business income of Subchapter S corporations alone rose from $500 billion in 1986 to $1.8 trillion by 1992.” This leads Feldstein to conclude that Piketty “creates the false impression of a sharp rise in the incomes of high-income taxpayers even though there was only a change in the legal form of that income.”
Feldstein and the Manhattan Institute’s Scott Winship identify another methodological problem. By focusing on tax returns (instead of household surveys and the like), Piketty fails to take into account the already sizable redistributive elements of our tax code. One in three Americans receives some means-tested government aid today. And that percentage will only grow as people live longer in retirement than ever before. In other words, social security, housing assistance, food aid, etc. don’t show up in Piketty’s portrait of inequality. Winship also notes that his method lumps together many young workers who might live at home and spouses who work only part time. Perhaps more significant, in Piketty’s data, capital gains are registered as a one-time windfall. In other words, if you buy shares in a mutual fund and you hold onto that asset for 25 years, the gains you realize when you sell are counted as income in a single year. But in fact, they’ve been earned over a quarter century. And by “excluding non-taxable capital gains,” Winship wrote in National Review, “most wealth accruing to the middle and working class, which comes in the form of home sales or 401(k) and IRA investments, is invisible in Piketty’s data.”
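Winship’s point about lumpy capital gains is simple arithmetic. A toy illustration with invented numbers: a mutual-fund gain realized after 25 years shows up in tax-return data as one enormous year of income, even though it accrued slowly.

```python
# Invented example of Winship's objection: tax-return data records a
# capital gain entirely in the year of sale, not across the years it accrued.

def recorded_vs_accrued(gain=150_000, holding_years=25):
    """Return (income the tax data shows in the sale year,
    average amount the asset actually gained per year held)."""
    return gain, gain / holding_years

recorded, per_year = recorded_vs_accrued()
print(recorded, per_year)   # 150000 6000.0
```

A modest saver who cashes out a retirement fund thus looks, in that one year, like a high earner, which inflates measured top incomes.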
Then there is Piketty’s use, or abuse, of r > g. “Pretty much every economics textbook will tell you that r > g,” writes American Enterprise Institute economist Andrew Biggs. “But none of the textbook models take from this that the capital stock will rise endlessly relative to the economy. Most of them hold that it stays pretty constant, and the historical evidence supports that view.”
Indeed, as Homburg notes, historical evidence shows that the divide between wealth and income doesn’t eternally widen simply because r is greater than g. The evidence for this can be found in Piketty’s own book, which shows that for the last two centuries, the wealth-to-income ratio in the United States and Canada has remained fairly stable. This North American exception is important because, unlike Europe and Japan, we were not subjected to the physical devastation of the world wars (a topic I will return to later).
Homburg, the American Enterprise Institute’s Kevin Hassett, and a team at Sciences Po in Paris, moreover, argue that the recent widening of the wealth-to-income gap in the United States that Piketty reports is largely a function of a housing boom in the past 30 years. This fact complicates the story. The housing boom has benefited rich people, to be sure, but it has also been fueled by a massive expansion of home ownership among not only the wealthy but also the middle and lower classes (though not in proportion to gains by the wealthy). “The largest single component of capital in the United States is owner-occupied housing,” notes the liberal economist Lawrence Summers in his review of the book for Democracy. “Its return comes in the form of the services enjoyed by the owners—what economists call ‘imputed rent’—which are all consumed rather than reinvested since they do not take a financial form.”
Also, housing booms cannot go on forever. If you exclude housing from other forms of wealth or capital (Piketty explicitly uses the terms interchangeably), these economists argue, the return on capital is less robust. “In the U.S.,” the Sciences Po economists write, “the net capital income ratio of housing capital was the same in 1770 as it was in 2010 and there is neither a long run trend nor a recent increase of this ratio.” They add: “This type of situation, where a small share of the population owns most of the housing capital, appears to be far from the current situation of developed countries, where the homeownership rate varies between 40 percent and 70 percent. The diffusion of homeownership is likely to slow or even reverse the rise of inequality regardless of trends in housing prices.” Ultimately, the Sciences Po economists found that their conclusions about inequality in recent years “are exactly opposite to those found by Thomas Piketty.”
Other critics raise a different objection. According to Saez, the largest portion of rising wealth has been in the growth of pension savings, which is a very good thing by most accounts. This is important for two reasons. First, pensions, while disproportionately held by the wealthy, are nonetheless very widely held (by teachers, policemen, autoworkers, et al.). Second, as Forbes’s Tim Worstall notes, pension wealth is generally not inheritable. Indeed, by design, it is intended to be spent.
But in order for Piketty’s invincible confidence that “the past will devour the future” to hold, wealthy people can’t spend down their money, because then it would circulate through the broader economy, raise the fortunes of others, and reduce their own net wealth. But one needs only to look out the window to see that they do. The wealthy spend their money on cars, houses, boats, and, of course, their own children. Doing so depletes their own wealth holdings and increases the incomes of the less wealthy who provide these goods. They also spend it on museum wings, hospitals, charities of all kinds (even this magazine, a 501(c)(3) to which you should be donating if you’re not already), and even progressive reform efforts of the kind Piketty surely endorses. Whatever the motive, they spend down their capital stock relentlessly—a major reason, in the United States and Canada especially, the wealth-to-income ratio has stayed relatively constant. As Feldstein notes, Piketty’s assumption about the rich might be true if every individual rich person lived forever.
Another controversial critique emerged on May 22, when the Financial Times’s Chris Giles reported that his exhaustive examination of Piketty’s data revealed a host of errors or misjudgments—some minor, some potentially damning. According to Giles, Piketty’s data do not support his conclusions, and Piketty may have tweaked the numbers to make his trend lines go the way he wanted them to. The “combined result of all the problems,” Giles writes, “is to make wealth concentration among the richest in the past 50 years rise artificially.”
As of this writing, Piketty and Giles, as well as their various champions, were trying to adjudicate all the charges and defenses. The debates are extremely difficult to follow, to say the least. But it does seem that Giles overstated the lethality of his critique and that, some sloppiness or misjudgments notwithstanding, there’s little evidence that Piketty operated in anything like bad faith. Piketty has recanted nothing.
Still, if one takes all these critiques into account, one must conclude that what its supporters have hailed as an irrefutable mathematical prophecy might have to be downgraded by everyone else into a well-informed hunch from a left-leaning French economist—a significant drop in confidence level, as the statisticians might say.
And this is hugely inconvenient for those holding aloft Capital in the Twenty-First Century as though it were the Statistical Abstract of the United States—because that would mean all of Piketty’s policy proposals and dire predictions for the future are based on a guess about the future, a guess he has falsely portrayed as an immutable law.
Four: Piketty’s Faith
Appeals to scientific fact are powerful only if the science holds up. The problem is that Piketty’s whole case sits on what could be called a one-legged stool: Remove that leg and there’s nothing left to hold it up but faith. Marxism suffered from a similar weakness. So long as its “scientific” claims remained uncontested and unexamined, Marxism had a huge advantage. Once it became clear that the science in “scientific socialism” was nothing more than clever branding, all that was left was faith.
The radical philosopher Georges Sorel (1847–1922) recognized that Marx’s Das Kapital was next to useless as a work of scientific analysis. That’s why he preferred to look at it as an “apocalyptic text… as a product of the spirit, as an image created for the purpose of molding consciousness.” And for generations of revolutionaries, intellectuals, artists, and activists, it served that purpose well. Marxism lent to its acolytes a certainty they could call “scientific”—an indispensable label amidst a scientific revolution—but, as Sorel understood, that was a kind of psychological marketing, a Platonic “vital lie” or what Sorel called a useful “myth.” Indeed, Lenin’s most significant contribution to Marxism lay in using Sorel’s concept of the myth to galvanize a successful revolutionary political movement.
Marx tapped into the language and concepts of Darwinian evolution and the Industrial Revolution to give his idea of dialectical materialism a plausibility it didn’t deserve. Similarly, Croly drew from the turn-of-the-century vogue for (heavily German-influenced) social science and the cult of the expert (in Croly’s day “social engineer” wasn’t a pejorative term, but an exciting career). In much the same way, Piketty’s argument taps into the current cultural and intellectual fad for “big data.” The idea that all the answers to all our problems can be solved with enough data is deeply seductive and wildly popular among journalists and intellectuals. (Just consider the popularity of the Freakonomics franchise or the cult-like popularity of the self-taught statistician Nate Silver.) Indeed, Piketty himself insists that what sets his work apart from that of Marx, Ricardo, Keynes, and others is that he has the data to settle questions previous generations of economists could only guess at. Data is the Way and the Light to the eternal verities long entombed in cant, ideology, and darkness. (This reminds me of the philosopher Eric Voegelin’s quip that, under Marxism, “Christ the Redeemer is replaced by the steam engine as the promise of the realm to come.”)
For the lay reader of Capital, this might seem ironic, given Piketty’s own criticisms of the economics profession. He mocks his colleagues’ “childish passion for mathematics and for purely theoretical and often highly ideological speculation” and “their absurd claim to greater scientific legitimacy, despite the fact that they know almost nothing about anything.” He decries the “scientistic illusion” that emerges from statistical lightshows. “The new methods often lead to a neglect of history and of the fact that historical experience remains our principal source of knowledge,” he writes. It is true that the economists he’s talking about don’t deal with real-world data but with abstract mathematical models masquerading as economic theory. Nonetheless, he would be well advised to consider that towering trees of data can blind you to the more complex nature of the forest.
With almost the sole exception of left-wing Salon columnist Thomas Frank, virtually none of his reviewers—positive and critical alike—have commented on the fact that Piketty has a remarkably thumbless grasp of historical context. “Piketty’s command of American political history is, quite simply, abysmal,” Frank correctly declares. Many seem to have missed this because they are suckers for Piketty’s habit of using literary references to lend credence to his mathematical conclusions. In a section titled “Rastignac’s Dilemma,” Piketty highlights the plight of the penniless young noble Eugène de Rastignac in Balzac’s Le Père Goriot, who must choose between marrying a rich heiress or pursuing a mediocre and underpaid legal career. He then subjects us to long data-dissections showing that the return on inherited wealth outstrips the return on labor income and that we are destined to return to the “patrimonial capitalism” of 19th-century France. (Alain Bertaud of NYU makes a persuasive case that Piketty unfairly distorts the richness of Balzac’s character. His interpretation of Rastignac, Bertaud writes, “is so skewed that it seems that Piketty has been reading Balzac through inequality glasses.”)
Clearly some people go in for this sort of thing, but the weight of endless discursions into mathematical modeling and data collection can be lightened only so much by French Lit CliffsNotes. A spoonful of such sugar helps the medicine go down, but it doesn’t make Balzac’s famous dictum that “behind every great fortune is a great crime” any more valid.
Such techniques can also get an author into trouble. At times, it seems Piketty takes much of his early-20th-century history from the movie director James Cameron. He puts a good deal of stock in the historical value of Cameron’s 1997 blockbuster Titanic. At one point he says one need only note “that the dreadful [Cal] Hockney who sailed in luxury on the Titanic in 1912 existed in real life and not just in the imagination of James Cameron to convince oneself that a society of rentiers existed not only in Paris and London but also in turn-of-the-century Boston, New York, and Philadelphia.”
Well, no. In fact, the Billy Zane character was an entirely fictional creation of James Cameron’s imagination (and the proper spelling of his name is Hockley; Cameron invented Caledon Hockley’s name by joining the names of two towns in Ontario, where he spent some time in his youth). Still, let us concede that there were some rich jerks on the actual Titanic. So what? Many of the richest people on earth were passengers on the Titanic, including Isidor and Ida Straus (owners of Macy’s), mining heir Benjamin Guggenheim, and John Jacob Astor IV (the wealthiest man on the ship). They, and numerous others, refused to get in lifeboats until all the women and children, including the poor women and children, got on first (Ida Straus refused to leave her husband, preferring to die in his arms). After helping other passengers escape, Guggenheim and his secretary changed into their evening wear, saying they were “prepared to go down like gentlemen.” Meanwhile the most famous real-life cad on the ship was George Symons, a crewman who refused to let anyone else on his lifeboat even though there were 28 empty seats. Money, it seems, doesn’t tell you everything about a man.
This Titanic business on its own is trivial, but it demonstrates how Piketty sees the super rich as an undifferentiated agglomeration—a single static class bent on protecting its own collective self-interests. But the rich are not a static class, any more than capital can be reduced to a homogeneous blob. Fewer than 1 in 10 of the 400 wealthiest Americans on the Forbes list in 1982 were still there in 2012. (Lawrence Summers notes that if Piketty were right about the stable return on capital, they should have all stayed on the list.) Of the 20 biggest fortunes on the Forbes list in 2013, 17 (85 percent) were self-made. Of the three remaining entries, only one—the Mars candy family—goes back three generations. The Koch brothers inherited the business their father created, but they also greatly expanded it through their own entrepreneurial zeal. The Waltons of Walmart fame inherited the family business from Sam Walton, a self-made billionaire from quite humble origins.
Nor are the poor and the middle class static. As a statistical artifact, there will always be a bottom 1 percent, just as there will always be a top 1 percent. But that doesn’t mean that if you are born in the bottom 1 percent, you will stay there. Some of Piketty’s fans seem confused about this, appearing to believe that economic inequality is synonymous with low economic mobility. There may indeed be a link between inequality and low economic mobility. After all, rich people by definition have advantages poor people do not. But there is no iron law that says any individual person must stay in his narrow economic bracket for life; the Morlocks can become Eloi. Indeed, there remains an enormous amount of churn in our economy; 61 percent of households will find themselves in the top quintile of income for at least two years, according to data compiled by economists Mark Rank and Thomas Hirschl. Just under 40 percent will reach the top 10 percent, and 5 percent will be one-percenters, at least for a while.
Piketty himself offers an extensive analysis of the Forbes list of the wealthiest people in the world in an attempt to prove that today’s richest people are much richer than they were in 1987 and that the “largest fortunes grew much more rapidly than average wealth.” He says the data show that wealth grew by an inflation-adjusted 7 percent, even higher than the normal 4-to-5 percent return implicit in r > g. In what seems a generous nod, Piketty even concedes that if you jigger the timespan—starting from, say, 1990 instead of 1987—the rate of return might drop a bit. But one problem remains: Piketty leaves out that the people on the list are almost all different people.3 The economist Stan Veuger, writing for U.S. News & World Report, looked at the same list and found that the top 10 individuals collectively earned about 0.5 percent on their capital during the period Piketty says “the rich” got richer. And, Veuger notes: “If it weren’t for Walmart, the wealthiest people in the world would actually have lost about half of their wealth in the last 25 years.”
Five: Piketty’s Warning
Piketty’s insistence that “historical experience remains our principal source of knowledge” and that economists need to get out of their abstract cocoons becomes all the more tone-deaf when we get to the question he barely addresses at all: Why should we care? So there’s income inequality. So what? For his part, Martin Wolf of the Financial Times raved about Capital, but conceded that the work has “clear weaknesses. The most important is that it does not deal with why soaring inequality…matters. Essentially, Piketty simply assumes that it does.”
The Economist’s Ryan Avent objected to Wolf’s criticism, noting that Piketty finds income inequality “unsustainable” because it will either lead to a few (or even a single person) owning everything or to bloody revolution. Piketty does suggest as much—but he makes nothing resembling a sustained philosophical, historical, or ethical case to support his views. Rather, he breezily and unpersuasively assumes and asserts such conclusions as if they are the sorts of things everybody knows. Avent’s ultimate defense of Piketty is revealing in this regard: “Inequality matters because, like it or not, inequality matters.” The examples Piketty gives to explain why he thinks inequality—not poverty, not hunger, not disease, not human rights, not expanding liberty—should be the “defining challenge of our time” don’t make the argument any more convincingly than Avent does.
Piketty makes a great deal out of a platinum-mine strike in South Africa in 2012. That violent labor action was, in a sense, about inequality, since it featured poor workers attacking rich mining interests. The workers, it is true, did want better wages, and justifiably so. But they wanted better wages not because they were indignant about their salaries being too small a fraction of the profits; they felt they deserved better compensation for putting up with undeniably horrible working conditions. Piketty’s glib summation leaves out other important variables outside mere economics. For starters, most of the violence was attributable to a vicious and bloody rivalry between competing labor unions. There’s also what economists might call the legacy costs of this exogenous event called “apartheid.”
This distinction between objective, or absolute, poverty and subjective, or relative, poverty doesn’t matter very much to Piketty. In real life, however, it matters a great deal. There’s a significant difference between not being able to feed your family and not being able to feed your family as well as a wealthier man might. A millionaire might be poor in a world of billionaires, but he would not be a pauper.
Of course, America has poor people, though it has relatively few who go hungry because capitalism has failed them. The average poor person in America, in material terms, lives quite well in comparison with a poor person elsewhere in the world or the average American else-when in time. The “actual living conditions of people counted as living ‘in poverty’ in America today,” Nicholas Eberstadt recently explained in the Weekly Standard, “bear very little resemblance to those of Americans enumerated as poor in the first official government count attempted in 1965.” He continued:
By 2011, for example, average per capita housing space for people in poverty was higher than the U.S. average for 1980, and crowding (more than one person per room) was less common for the 2011 poor than for the nonpoor in 1970. More than three-quarters of the 2011 poor had access to one or more motor vehicles, whereas nearly three-fifths were without an auto in 1972–73. Refrigerators, dishwashers, washers and dryers, and many other appliances were more common in officially impoverished homes in 2011 than in the typical American home of 1980 or earlier. Microwaves were virtually universal in poor homes in 2011, and DVD players, personal computers, and home Internet access are now typical in them—amenities not even the richest U.S. households could avail themselves of at the start of the War on Poverty. Further, Americans counted as poor today are manifestly healthier, better nourished (or overnourished), and more schooled than their predecessors half a century ago.
That is the sort of historical context one would expect from an economist who claims to be interested in historical context. There’s nothing like that here. Instead, Piketty glibly segues from the South African strikers to the 1886 Haymarket Square riots in Chicago, asking: “Does this kind of violent clash between capital and labor belong to the past, or will it be an integral part of twenty-first century history?”
There’s not sufficient space here to get into the remarkably complex issues that were involved in the Haymarket Square affair. Suffice it to say that neither the initial protest (for an eight-hour work day) nor the bombing (by anarchists) that led to the deaths of seven policemen and four civilians was motivated by dissatisfaction over income inequality. What mattered were the objective conditions under which the protesting workers toiled and lived. They were earning too little to make a proper life for themselves and were forced to labor under intolerable conditions at the same time. As Thomas Frank notes, such conditions were ameliorated largely through the struggles of organized labor—a rich subject that holds almost no interest for Piketty.
Similarly, coal miners in early-20th-century America didn’t march to flatten out the curve of the distribution of income in a textbook’s appendix; they marched to work in safety and security, and for a decent wage. To the extent that coal miners are politically organized today, it is not to fight the alleged evils of income inequality but to keep Piketty’s fans on the left from erasing their jobs by waging the “war on coal.” (By the way, the average wage for coal miners in the United States is just over $81,000 per year.)
Six: Piketty’s Threat
Piketty is convinced that income inequality “inevitably instigates…violent political conflict.” Is that actually true? And if it is, is such violence justified? Skepticism is warranted on both counts, as history suggests.
For example, the French Revolution was about inequality, but not first and foremost economic inequality. Inherited titles, the power of the Church, the unjust rule of what Edmund Burke called “arbitrary power,” and other tangible examples of legal or formal inequality played enormous and mutually reinforcing roles. The American Revolution, likewise, was about political inequality, as were later fights in this country over abolition and civil rights. Economic inequality was a symptom, not the disease—at least according to countless revolutionaries, abolitionists, and civil-rights leaders.
The postwar history of the West actually makes a hash of Piketty’s sweeping presumption. He argues that the years 1950 to 1970 were a “golden age” of economic equality. If so, why did the greatest period of social unrest in Europe and the United States in the 20th century come at the height of this golden age in the 1960s? That unrest spilled over into the 1970s, but the domestic terrorists who roiled Germany and Italy and the crime wave that devastated the United States had an extremely tangential relationship to income inequality at best. Then, pollsters tell us, in the 1980s—when the West took a wrong turn, according to Piketty, thanks to the policies of Margaret Thatcher and Ronald Reagan—social contentment started to rise and continued to rise, with the usual dips, all the way into the 1990s. One small example: In 1979, 84 percent of Americans told Gallup they were dissatisfied with the direction of the country. In 1986, 69 percent were satisfied.
So, just looking at the historical record, the notion that greater income equality by itself yields social peace seems insane.
Seven: Piketty’s Capitalism
“The consequences for the long-term dynamics of the wealth distribution are potentially terrifying,” Piketty writes. For instance, Piketty fears that whenever the return on capital really starts to outstrip national growth, “capitalism automatically generates arbitrary and unsustainable inequalities that radically undermine the meritocratic values on which democratic societies are based.” That is open to debate, to put it mildly. Bill Gates, Sam Walton, Larry Ellison, Mark Zuckerberg, Sergey Brin, Fred Smith, and others became billionaires because they created goods and services of real value to consumers; there was nothing “arbitrary” about it. In fact, most of them didn’t achieve their wealth, strictly speaking, from “capital” in the Pikettyesque sense at all. They mostly earned it from technological innovation. Piketty seems to believe, without marshaling much if any evidence, that such accretions of wealth undermine meritocratic values—when in fact, in a very real sense, the wealth creation over the past 30 years collectively constitutes the most extreme example of meritocratic advancement the world has ever seen.
Do the masses resent their wealth? It doesn’t appear so, or if they do, it is not a major concern. As inequality has risen over the last 30 years, the share of the public who think that the “rich are getting richer and the poor are getting poorer” has stayed fairly constant (80 percent told Harris pollsters they agree with that statement in 2013 compared with 82 percent in 1990). The number dipped a bit in the 1990s when inequality was increasing but wages were rising. But, in May, when Gallup asked voters what they saw as “the most important problem facing this country today,” a mere 3 percent volunteered the gap between rich and poor (which gives you a sense of how out of touch with the concerns of Americans some of Piketty’s biggest fans are and why, for instance, they wildly overestimated the significance of Occupy Wall Street at the time, and even in retrospect). Polls consistently find that Americans are much more concerned about creating jobs and making the economy grow than fighting income inequality or redistributing wealth. A poll in January conducted by McLaughlin & Associates (for the YG Network) found that Americans by a margin of 2:1 (64 percent to 33 percent) prefer expanding economic growth to narrowing the gap between rich and poor. In 1990, Gallup asked Americans whether the country benefits from having a class of rich people. Sixty-two percent said yes. In 2012, 63 percent said yes.
It seems that most Americans simply want a fair shake. They don’t really begrudge the success of others, and to the extent they do, they don’t want to do much about it. It’s hard to see how any of this amounts to an inequality-driven powder keg of social unrest waiting to explode.
A third claim—one can’t call them arguments because they don’t rise to that level—is that the super rich will rig democracy to their advantage. This, too, has a faint Marxist echo, featuring as it does the assumption that capitalist overlords form a homogeneous political class bent on exploitation. One need only read the newspaper to know that this is nonsense on stilts. At this very moment, George Soros, Tom Steyer, and other liberal billionaires are in a hammer-and-tongs political battle with Sheldon Adelson, Charles and David Koch, and other conservative or libertarian billionaires. And the evidence that either side has the power to buy elections is discredited almost every November. This is not to say that our democracy couldn’t be healthier or that wealthy special interests do not cause real problems, but America is hardly being run today by characters out of a Thomas Nast cartoon. It’s being run, instead, by the son of a teenage single mother from Hawaii, the son of a barkeep from Ohio who became speaker of the House, and a miner’s son from Nevada who grew up in a shack with no running water before becoming majority leader of the Senate—none of them born into wealth, to put it mildly.
Eight: Piketty’s Choice
Piketty is shockingly unconcerned with the fact (which he acknowledges) that one of the driving forces of U.S. income inequality is rising global equality. The world’s poor are getting much richer, in large part because they are doing a lot of the sometimes backbreaking manual labor that poor and middle-class people in rich countries once did. This clearly creates significant political and economic challenges for wealthy countries eager to maintain high domestic living standards, but from the vantage point of someone who believes in universal economic rights, that is a small price to pay, no?
Thanks to capitalism, we have seen the single largest alleviation of poverty in human history. In 1981, 52 percent of humanity lived in “extreme poverty.” They could not provide for themselves and for their families such basic needs as housing and food. According to a recent study by Yale and the Brookings Institution, by the end of 2011, that number had fallen to 15 percent. They credit globalization, capitalism, and better economic governance (i.e., the abandonment of Marxism and similar ideologies). Even for economic nationalists, how is that not a staggering triumph for the ethical superiority of capitalism?
That is also the story of the West in the 19th and 20th centuries. Piketty might be right that whenever capitalism runs amok, the rich get richer faster than the poor get richer. Even so, the poor still get richer. The economic historian Deirdre McCloskey beautifully chronicles how for nearly all of history (and prehistory), the average human lived on the equivalent of $3 per day. What she calls the “great fact” of human advancement is that, thanks to the rise of democratic capitalism, that small figure no longer holds wherever democratic capitalism has been permitted to work its magic.
Even more troubling, Piketty places enormous emphasis on the role of the world wars as a great leveler of inequality and the primary driver of the postwar “golden age.” But ask yourself a question: If you were a remotely sane human in 1900 and you were given the choice of
(a) getting richer, though at a slower rate than the very wealthiest, so that in 1950 there was a lot of economic inequality but you and your kids were still much better off; or
(b) facing two horrendous and cataclysmic global wars in which whole societies were razed and a hundred million people died violently and you (along with the rich) were made poorer for it, and would die at a younger age,
what would you have chosen? It appears Piketty finds Option B awfully tempting. And that is madness.
Nine: Piketty’s Justice
In little more than a few throwaway sentences, Piketty asserts that confiscatory taxes on wealth are morally required as a matter of social justice. That an economist who has ensconced himself in the Parisian velvet of the social-democratic left for nearly all of his adult life believes such things is hardly surprising, particularly given his confidence that extreme wealth is essentially the arbitrary product of an “ideological construct.”
But this does not absolve him of the responsibility of making a case.
Piketty begins Capital in the Twenty-First Century with a quotation from the Declaration of the Rights of Man, the operating document of the French Revolution: “Social distinctions can be based only on common utility.” He concedes elsewhere in the book that the “social distinctions” to which it refers had to do with the hereditary “orders of privileges of the Ancien Regime” and not with economic inequality. Even so, he insists, we must breathe new life into the concept of “common utility”:
One can interpret the phrase more broadly, however. One reasonable interpretation is that social inequalities are acceptable only if they are in the interest of all and in particular of the most disadvantaged social groups. Hence basic rights and material advantages must be extended insofar as possible to everyone, as long as it is in the interest of those who have the fewest rights and opportunities to do so.
The notion that wealth—or, to put it another way, private property—is an arbitrary social distinction that can be erased for the betterment of the have-nots is incredibly radical. One might even call it Marxist (or at least “Marxy”). Given that, an argument on its behalf should be extended and defended. But aside from a perfunctory reference to the philosopher John Rawls’s “difference principle,” which says that justice should be weighted toward the least advantaged people in society, he does not do so. He is more than comfortable letting it sit as largely self-evident.
Where he breaks with Marxism is the means by which he would reward the have-nots: not the seizure of all property but the mere soaking of the rich in order to seize the returns on the means of production. Piketty’s obsession with tax hikes as a cure-all is almost a perfect mirror of how liberals see the supply-siders’ obsession with tax cuts. It is this idée fixe that allows him to summarily dismiss other proposals that might get us to his preferred destination without confiscating the ill-gotten gains of the well-to-do. For instance, Tyler Cowen and National Review’s Kevin D. Williamson point out that if Piketty’s assumptions about the long-term returns on capital are correct, then we would be crazy not to transform Social Security into a system of privately held investment accounts. Boldly expanding the Earned Income Tax Credit—which would necessarily increase the tax burden of the wealthy—might also do more to solve the problem, assuming it is a problem. An aggressive tax on consumption instead of income would, according to many economists, boost growth and have the added benefit of taxing the Gilded Age lifestyles of billionaires instead of merely taxing billionaires for the alleged crime of existing. But none of these has the satisfying bang of that 80 percent marginal tax rate—or, even more thrilling, the 10 percent “global tax” on billionaires’ filthy lucre.
And then, of course, there are the countless reforms that lie outside the realm of tax tables. The data are clear that marriage delivers roughly as much bang for the buck as going to college. Raising children in a stable two-parent home is a better guarantor of lifetime economic success than crude interventions by the state. But while Piketty is happy to opine at great length about the Gilded Age matrimonial lifestyles of the rich and famous, drawing deeply on Jane Austen and other sources to paint a vivid picture, he is uninterested in the same issues down the socioeconomic ladder.
Ten: Piketty’s Class
Why does Piketty reject the more romantic path of the classic Marxist? You know—“Let the ruling classes tremble at a Communistic revolution. The proletarians have nothing to lose but their chains. They have a world to win”—that kind of thing?
One answer to this question explains not only Piketty’s thinking but the response to his work as well: Piketty is a member of the ruling class. Piketty’s way puts Piketty and his friends in charge of everything. A one-time adviser to the Socialist politician Ségolène Royal, a star academic and a columnist for Libération, Piketty is a quintessential member of what the economist Joseph Schumpeter identified as the “new class.” Schumpeter’s prediction of capitalism’s demise hinged on his brilliant insight that capitalism breeds anti-capitalist intellectuals. Educators, bureaucrats, lawyers, technocrats, journalists, and artists, often the children of successful capitalists, always raised in the material affluence of capitalism, would organize to form a class whose collective interest lay in seizing economic decisions from the free market. As Deirdre McCloskey writes: “Schumpeter believed that capitalism was raising up its own grave diggers—not in the proletariat, as Marx had expected, but in the sons and daughters of the bourgeoisie itself. Lenin’s father, after all, was a high-ranking educational official, and Lenin himself a lawyer. It wasn’t the children of auto workers who pulled up the paving stones on the Left Bank in 1968.” No, it was actually people like Piketty’s own parents.
There is a reason the most passionate foes of income inequality tend to be very affluent but not super rich, intellectuals like Paul Krugman and other journalists eager to set the threshold for confiscatory tax rates just beyond their own income levels. But this sort of class war—the chattering classes versus the upper classes—is only part of the equation. Power plays a huge part as well. A full-throated endorsement of classic leftist radicalism would set a torch to Piketty’s own tower of privilege. The State, guided by experts, informed by data, must be empowered to decide how the Rawlsian difference principle is applied to society. Piketty’s assurance that inequality “inevitably” leads to violence amounts to an implied threat: “Let us distribute resources as we think best, or the masses will bring the fire next time.” Once again the vanguard of the proletariat takes the most surprising form: bureaucrats (the true “rentiers” of the 21st century!). A revealing sub-argument running throughout Capital is that we need to tax rich people in ever more, new, and creative ways just so we can get better data about rich people! To borrow a phrase from James Scott, author of Seeing Like a State, Piketty is obsessed with making society more “legible.” The first step in empowering technocrats is giving them the information they need to do their job.
This is what places the Piketty phenomenon squarely in the tradition of Croly and, yes, Marx himself. Piketty’s argument, with its scientific veneer and authoritative streams of numbers, is a warrant to empower those who think they are smarter than the market—and who feel superior to those most richly rewarded by it.
1 As fate would have it, Piketty’s translator, Arthur Goldhammer, is also Tocqueville’s, and he assures us that the comparison is apt. “Because Tocqueville was such an assiduous researcher, who returned from his travels in the U.S. with trunkloads of documents filled with statistical data of all kinds, I have no doubt he would have found the data compiled by Thomas Piketty fascinating.” That may well be true, but it’s also irrelevant. Tocqueville probably found Plato fascinating, but that doesn’t mean he was a “19th-century Plato.”
2 Chief among them is New York Times columnist Paul Krugman, who has been leading an intellectual jihad on inequality for years and has written dozens of columns and articles explaining why we live in a new “Gilded Age.” Hence the unsurprising title of his lavish review in the New York Review of Books: “Why We Live in a New Gilded Age.”
Around the time he wrote the review, Krugman wrote on his New York Times blog that research showing that both liberals and conservatives suffer from confirmation bias—according to which one tends to believe facts supportive of one’s own worldview—didn’t ring true to him. After all, he mused, he couldn’t think of a major policy issue where he and his liberal friends hadn’t been right about everything.
3 By way of illustration, the super rich of 1987—Yoshiaki Tsutsumi, Taikichiro Mori, Sam Walton, Shigeru Kobayashi, Haruhiko Yoshimoto, Salim Ahmed Bin Mahfouz, Hans and Gad Rausing, Yohachiro Iwasaki, Kenneth Roy Thomson, and Paul, Albert, and Ralph Reichmann—are hardly household names today.
Mr. Piketty’s Big Book of Marxiness
Must-Reads from Magazine
f all the surprises of the Trump era, none is more notable than the pronounced shift toward Israel. Such a shift was not predictable from Donald Trump’s conduct on the campaign trail; as he sought the Republican nomination, Trump distinguished himself by his refusal to express unqualified support for Israel and his airy conviction that his business experience gave him unique insight into how to strike “a real-estate deal” to resolve the Israeli–Palestinian conflict. In addition, his isolationist talk alarmed Israel’s friends in the United States and elsewhere if for no other reason than that isolationism, anti-Zionism, and anti-Semitism often go hand in hand in hand.
But shift he did. In the 14 months since his inauguration, the new president has announced that the United States accepts Jerusalem as Israel’s capital and has declared his intention to build a new U.S. Embassy in Jerusalem, first mandated by U.S. law in 1996. He has installed one of his Orthodox Jewish lawyers as the U.S. ambassador and another as his key envoy on Israeli–Palestinian issues. America’s ambassador to the United Nations has not only spoken out on Israel’s behalf forcefully and repeatedly; Nikki Haley has also led the way in cutting the U.S. stipend to the refugee relief agency that is an effective front for the Palestinian terror state in Gaza. And, as Meir Y. Soloveichik and Michael Medved both detail elsewhere in this issue, his vice president traveled to Israel in January and delivered the most pro-Zionist speech any major American politician has ever given.
Part of this shift can also be seen in what Trump has not done. He has not signaled, in interviews or in policy formulations, that the United States views Israeli actions in and around Gaza and the West Bank as injurious to a future peace. And his administration has not complained about Israeli actions taken in self-defense in Lebanon and Syria but has, instead, supported Israel’s right to defend itself.
This marks a breathtaking contrast with the tone and spirit of the relationship between the two countries during the previous administration. The eight Obama years were characterized by what can only be called a gut hostility rooted in the president’s own ideological distaste for the Jewish state.
The intensity of that hostility ebbed and flowed depending on circumstances, but from early 2009, it kept the relationship between the United States and Israel in a condition of low-grade fever throughout Barack Obama’s tenure—never comfortable, never easy, always a bit off-kilter, always with a bit of a headache that never went away, and always in danger of spiking into a dangerous pyrexia. That fever spike happened no fewer than five times during the Obama presidency. Although these spikes were usually portrayed as the consequences of the personal friction between Obama and Israeli Prime Minister Benjamin Netanyahu, that friction was itself the result of the ideas about the Middle East and the world in general Obama had brought with him to the White House. In this case, the political became the personal, not the other way around.
Given the general leftish direction of his foreign-policy views from college onward, it would have been a miracle had Obama felt kindly disposed toward the Jewish state’s own understanding of its tactical and strategic condition. And Netanyahu spoke out openly and forcefully to kindly disposed Americans—from evangelical Christians to congressional Republicans—about the threats to his country from nearby terrorism and rockets, and a developing nuclear Iran 900 miles away. His candor proved a perpetual irritant to a president whose opening desire was to see “daylight” (as he said in February 2009) between the two countries. Obama caused one final fever spike as he left office by refusing to veto a hostile United Nations resolution. This appeared churlish but was, in fact, Obama allowing himself the full rein of his true and long-standing convictions on his way out the door.T
he things Trump both has and has not done should not seem startling. They constitute the baseline of what we ought to expect one ally would say and not say about the behavior of another ally. But as Obama’s disgraceful conduct demonstrated, Israel is not just another ally and never has been. It is a unique experiment in statehood—a Western country on Mideast soil, born from an anti-colonialist movement that is now viewed by many former colonial powers as an unjust colonial power, created by an international organization that is now largely organized as a means of expressing rage against it.
Historically, American leaders have had to reckon with these unique realities—and the fact that the hostile nations surrounding Israel and hungering for its destruction happen to sit atop the lifeblood of the industrial economy. The so-called realists who claim to view the world and the pursuit of America’s interests through cold and unsentimental eyes have experienced Israel mostly as a burden.
Through many twists and turns over the seven decades of Israel’s existence, they have felt that America’s support for Israel is mostly the result of short-sighted domestic political concerns for which they have little patience—the wishes of Jewish voters, or the religious concerns of evangelical voters, or post-Holocaust sympathy that has required (though they would never say it aloud) an unnatural suspension of our pursuit of the American national interest.
Israel created problems with oil countries, and with the United Nations, and with those who see the claims for the necessity of a Jewish state as a form of special pleading. As a result, the realists have spent the past seven decades whispering in the ears of America’s leaders that they have the right to expect Israel to do things we would not expect of another ally and to demand it behave in ways we would not demand of any other friendly country.
The realists and others have spent nearly 50 years propounding a unified-field theory of Middle East turmoil according to which many if not all of the region’s problems are the result of Israel’s existence. Were it not for Israel, there would not have been regional wars in 1956, 1967, 1973, and 1982—no matter who might have borne the greatest degree of responsibility for them. There would have been other conflicts, but not this one. There would have been no world-recession-inducing oil embargo in 1973 because there would have been no response to the Yom Kippur War. Were it not for Israel, for example, there would be no Israeli–Palestinian problem; there would have been some other version of the problem, but not this one.
Unhappiness about the condition of the Palestinians in a world with Israel was held to be the cause of existential unhappiness on the Arab street and therefore of instability in friendly authoritarian regimes throughout the Middle East. Meanwhile, Israel’s own pursuit of what it and its voting populace took to be their national interests was usually treated with disdain at the very least and outright fury at moments of crisis.
It was therefore axiomatic that the solution to many if not most of the region’s problems ran right through the center of Jerusalem. It would take a complex process, a peace process, that would lead to a deal—a deal that no one who believed in this magical process could describe honestly and forthrightly, or whose final contours anyone could sketch. If you could create a peace process leading to a deal, though, that deal itself would work like a bone-marrow transplant—through a mysterious process spreading new immunities to instability in the Middle East that would heal the causes of conflict and bring about a new era.
Again, this was the view of the realists. With Israel’s 70th anniversary coming hard upon us, the question one needs to ask is this: What if the realists were nothing but fantasists? What if their approach to the Middle East from the time of Israel’s founding was based in wildly unrealistic ideas and emotions? Central to their gullibility was the wild and irrational idea that peace was or ever could be the result of a process. No, peace is a condition of soul, an exhaustion from the impact of conflict, born of a desire to end hostilities. Only after this state is achieved can there be a workable process, because both parties would already have crossed the Rubicon dividing them and would only then need to work out the details of coexistence.
There was no peace to be had. The Arab states didn’t want it. The Palestinians didn’t want it. The Israelis did and do, but not at the expense of their existence. The Arabs demanded concessions, and the Israelis have made many over the years, but they could not concede the security of the millions of Israel’s citizens who had made this miracle of a country an enduring reality. The realists fetishized “process” because it seemed the only way to compel change from the outside. And so Israel has borne the brunt of the anger that follows whenever a fantasist is forced to confront a reality he would rather close his eyes to.
That is why I think what Trump and his people have done over the past 14 months represents a new and genuine realism. They are dealing with Israel and its relationships in the region as they are, not as they would wish them to be. They are seeing how the government of Egypt under Abdel Fattah el-Sisi is making common cause with Israel against the Hamas entity in Gaza and against ISIS forces in the Sinai. They are witness to the effort at radical reformation in Saudi Arabia under Mohammed bin Salman—and how that seems to be going hand in hand with an astonishing new concord between Israel and the Desert Kingdom over the common threat from Iran. This is a harmonizing of interests that would have seemed positively science-fictional in living memory.
Mostly, what they are seeing is that an ally is an ally. Israel’s intelligence agencies are providing the kind of information America cannot get on its own about Syria and Iran and the threat from ISIS. Israel is a technological powerhouse whose innovations are already helping to revolutionize American military know-how. Israel’s army is the strongest in the world apart from the global superpowers—and the only one outside Western Europe and the United States firmly locked in alliance with the West. Things are changing radically in the Middle East, and as the 21st century progresses it is possible that Israel will play a constructive and influential role outside its borders in helping to maintain and strengthen a Pax Americana.
Donald Trump is a flighty man. All of this could change. But for now, the replacement of the false realism of the past with a new realism for the 21st century seems like a revolutionary development that needs to be taken very, very seriously.
Of the making of Washington movies, there is no end. Kohelet said this in Ecclesiastes, I think. Or maybe it was Gene Shalit on the Today Show. It’s a truism in any case. Steven Spielberg’s latest entry in the genre, The Post, is for many Washingtonians the most powerful example in the long line. When the movie opened here in late December, there were reports of audiences cheering lustily and even dissolving in tears at the movie’s end, as if they were watching a speech by President Obama. The local paper ran news articles about it, along with numberless feature stories, interviews, op-eds, fact-checks, reviews, and reviews of reviews.
Which is excusable, I guess, since the movie is about the Washington Post. But then The Post is supposed to be about so many things. It’s about the First Amendment, depicting the agonies of the Post’s editor, Ben Bradlee, and its owner, Katharine Graham, as they defy the Nixon administration to publish the top-secret Pentagon Papers. It’s about feminism and the personal evolution of Mrs. Graham from an insecure Georgetown socialite to Master of the Boardroom. It’s the story of the lonely courage of the leaker/whistleblower/traitor (your call) Daniel Ellsberg. It is also, so I read in the Post, a warning about the imperial designs of President Trump to smother a free press. And it’s been understood as a straightforward tale of political history, though the liberties Spielberg takes with his based-on-a-true-story are so extreme as to render it useless as a guide to what happened in the summer of 1971.
Running beneath it all is the motive that animates so many Washington movies: an impatience with the stuttering, halting processes of self-government. The wellspring from which the Washington movie flows is Frank Capra’s Mr. Smith Goes to Washington. The plot is familiar to everyone. Mr. Smith, a small-town bumpkin played by Jimmy Stewart—talk about stuttering and halting!—is appointed by sinister political bosses to a vacant Senate seat, on the assumption that he will be easily manipulated, like a movie audience. Instead, Smith stumbles upon an illicit land deal and exposes the Senate as a den of thieves. His filibustering floor speech rouses a populist outpouring from an army of alarmingly cute children. By the end of the movie, Mr. Smith has restored the nation to its democratic ideals.
Capra intended his movie to be a hymn to those ideals, and for nearly 80 years that’s what audiences have taken it to be. It is no such thing. Mr. Smith seethes with contempt for the raw materials of democracy: debate, quid pro quo deal-making, back-scratching compromise—all the tedious, unsightly mechanics that turn democratic ideals into functioning self-government. In Capra’s telling, democracy can be rescued only by anti-democratic means. An appointed charismatic savior (he’s not even elected!) uses a filibuster (favorite parliamentary trick of bullies and autocrats) to release the volatile pressure of a disenfranchised mob (the great fear of every democratic theorist since Aristotle). From Mr. Smith to Legally Blonde 2, the point of the Washington movie is clear: Left to its own devices, without an outside agent to penetrate it and cleanse it of its sins, self-government sinks into corruption and despotism.
Steven Spielberg is the closest thing we have to Capra’s successor. Like all his movies, The Post has many charms: a running visual joke about Bradlee’s daughter making a killing with her lemonade stand threads in and out of the heavier moments like a rope light. On the other hand, his painstaking obsession with period detail often fails: A hippie demonstration against the Vietnam War looks as if it’s been staged by the cast of Hair. The set-piece speeches are insufferable, an icky glue of sanctimony and sentimentality. What we call the Pentagon Papers was a classified history of the lies, misjudgments, and incompetence of four presidents, from Harry Truman to Lyndon Johnson, ending in 1968. Sometimes the speechifying is directed at the malfeasance of these men, as when Bradlee bellows: “The way they lied—those days have to be over!”
Weirdly, though, the full force of the movie’s indignation is aimed at Richard Nixon. Historians might point out that Nixon wasn’t even president during the period covered by the Pentagon Papers. Intelligence officials told the president that the release of the papers would pose an unprecedented threat to national security. He ordered the Justice Department to sue to prevent the New York Times and the Post from publishing the top-secret material. In the movie’s account, this ill-judged if understandable response is equivalent to the official, strategic lies that accompanied tens of thousands of American soldiers to their deaths.
A particularly rich moment comes when Robert McNamara warns Mrs. Graham about Nixon’s capacity for evil. As Kennedy and Johnson’s defense secretary, McNamara was an early version of Saturday Night Live’s Tommy Flanagan, Pathological Liar: The Viet Cong are on the run! Yeah, sure, that’s the ticket! As much as anyone, McNamara, with his stupidity and dishonesty, guaranteed the tragedy of Vietnam. And yet here he is, issuing a clarion call to Mrs. Graham. “Nixon will muster the full power of the presidency, and if there’s a way to destroy you, by God, he’ll find it!” Later Bradlee compares Nixon to his predecessors: “He’s doing the same thing!”
Um, no. From his inauguration in 1969 onward, Nixon’s every move in Vietnam was intended to extricate the U.S. from the quicksand previous presidents had led us (and him) into. In this case, if in no other, Nixon was the good guy. He had nothing to lose, personally, from the publication of the Pentagon Papers, and maybe a lot to gain. After all, they demonstrated the villainy of his predecessors, not his own. (That came later.)
Yet the movie can’t entertain the possibility that Nixon could act on anything but the basest motives. He is a sinister presence. We see him through the Oval Office window, always alone, with his back turned, stabbing the air with a pudgy finger and cursing the Washington Post to subordinates over the phone. It’s actually Nixon’s voice in the movie, taken from the infamous tapes. Unfortunately, the actor’s movements don’t synchronize with the words; in such a somber thriller, the effect is inadvertently comic. It reminded me of watching the back of George Steinbrenner’s head in Seinfeld while Larry David spoke the Yankee owner’s dialogue. And Nixon was no Steinbrenner.
The most plausible explanation is that Nixon, in trying to stop publication of the Pentagon Papers, was doing what he said he was doing: his job. American voters had elected him to protect national security and, not incidentally, the prerogative of the president and the federal government to determine how best to protect it, including determining whether sensitive information should be kept secret. If he didn’t do his job the way voters wanted him to, they could get rid of him next time. You know, like in a democracy.
Ben Bradlee, Katharine Graham, and Steven Spielberg, not to mention those teary audiences, have no patience with such niceties. As it happens, in the end, the Pentagon Papers were a bust. The sickening detail they disclosed deepened but did not broaden the historical record, and by all accounts their impact on national security was negligible. Those facts don’t alter the creepiness of The Post’s premise—that the antagonists of an elected regime are allowed to go outside the law when it suits their view of the national interest. Charismatic saviors (and few people were more charismatic than Ben Bradlee) can save democracy from itself, but only by ignoring the requirements of democracy. Spielberg continues the tradition of the Washington movie. The Post is Capraesque—in the only true sense of the word.
Is Harvard assaulting the rights of students to free association in the name of a diversity standard it doesn’t live up to itself?
Harvard College is home to six all-male “final clubs.” Their members have access to houses in which they eat, socialize, and form bonds with their fellows. These clubs are as historic as they are renowned; most were formed in the 19th century and have counted Kennedys, Roosevelts, and an endless procession of politicians, writers, and businessmen among their members. From their beginnings, these exclusive institutions have been an object of fascination. When doors are closed, and only a small, elite group selected from an already hyper-elite campus has been invited inside, jealousy, curiosity, and frustration are sure to prevail.
The final clubs are financially independent from Harvard and have been entirely unaffiliated with the university since the 1980s, when the administration and the clubs clashed over the latter’s refusal to admit women. But that conflict, which had cooled over time, has recently resurfaced in a new and heightened manner.
In March 2016 Rakesh Khurana, the dean of Harvard College, set an April 15 deadline for the final clubs, at which time they were to inform the administration whether they would change course and become co-ed. Two forces drove Khurana’s action. The first was a report by Harvard’s Task Force on Sexual Assault Prevention released days earlier, after years of research. The report indicated that students who were involved with the final clubs were significantly more likely to have experienced some form of assault than those who were not. The second impetus was the administration’s position that the final clubs—and the ways in which they screened members—were in direct conflict with the ethos of the university.
The deadline passed without response from the clubs. On May 6, 2016, Dean Khurana wrote a letter to Harvard President Drew Faust. He proposed that, beginning with incoming freshmen who would matriculate in the fall of 2017, students who became members of what he termed “unrecognized single-gender social organizations” should be ineligible for leadership positions in Harvard organizations—meaning they could not serve as publication editors, captains of sports teams, leaders of theatrical troupes, and the like. And they would also be ineligible for letters of recommendation from the dean, necessary for many prestigious postgraduate opportunities such as the Rhodes and Marshall scholarships.
Khurana’s letter, and the sanctions proposed within, quickly became a cause célèbre. Harry R. Lewis, a professor of computer science and himself a former dean of the college, wrote Khurana a letter expressing his concern that “by asserting, for the first time, such broad authority over Harvard students’ off-campus associations, the good you may achieve will in the long run be eclipsed by the bad: a College culture of fear and anxiety about nonconformity.” Lewis went on to note:
The reliance on your judgement of what count[s] as Harvard’s values, and using that judgment to decide which students will receive institutional support, is a frightening prospect….The discretion exercised by the dean and his representatives will chill the activism of students in causes that might also be considered noncompliant with Harvard standards—for example, advocacy for a religion that does not allow women to be full participants, or a political party that opposes affirmative action. Such groups are excluded from your mandate, but only as a matter of your discretion. Why wouldn’t activism for such organizations color the support the College would offer their members, on the basis that such students are showing that their true colors are not pure Crimson?
Lewis also referenced the faculty’s responsibilities and noted that there was no precedent in Harvard’s Handbook for Students for the sanctions, thus suggesting that Khurana’s proposals might be outside the administration’s jurisdiction.
In September 2016, Khurana detailed the responsibilities of the “Single-Gender Social Organizations Implementation Committee.” The committee was tasked with
consulting broadly with the College community to address the following questions: 1) What leadership roles and endorsements are affected by the policy; 2) How organizations can transition to fulfill the expectations of inclusive membership practices; and 3) How the College should handle transgressions of the policy.
In addition to the committee’s work, the faculty went through several rounds of motions and debate, discussing myriad permutations of the sanctions, as well as the validity of the sanctions themselves.
In December 2017, the discussions came to a halt. Harvard’s administration flatly announced it would engage in sanctions against students who joined those “unrecognized single-gender social organizations,” or USGSOs. This ostensibly final decision has provoked renewed outrage from students, faculty, and alumni, who have grounded their varied objections in ethical, philosophical, and legal concerns.
Until the 1960s shattered the American elite consensus on such matters, the collegiate experience was vastly different for students. Universities used to view their role as being in loco parentis—serving in place of the parents from whom their charges had recently separated. Today, on Harvard’s enchanting campus, teenagers and twentysomethings tend to rule the roost. Students have tremendous flexibility in building their course schedules, and rare is the lecture professor who takes attendance. Undergraduates come and go as they please, to and from wherever they please, with whomever they please, from the darkest hours of the night to the earliest hours of the morning.
But from the time America’s colleges came into being in the 17th and 18th centuries until just a few decades ago, these institutions imposed rules and regulations, curtailed freedoms, and designed a microcosmic world in which young adults would—in theory—learn how to navigate the reality that awaited them after graduation. They were eased into the world in a setting that constricted their choices and where the powers that be very consciously, and intentionally, refrained from treating them like adults. This was most evident in the controls placed on contact between the sexes.
A 1989 Harvard Crimson article by Katherine E. Bliss detailed the so-called parietal rules of the 1960s. It noted that “in 1964, the primary goal of College administrators was maintaining ‘an open door and one foot on the floor’ policy for students entertaining guests of the opposite sex in their rooms.” At that time, the student body and the administration were in conflict over the right to do as they pleased in their own dorms: “Students in 1964 were concerned with lengthening the number of hours they were allowed to spend with members of the opposite sex in the privacy of their own rooms.” If this sounds quaint, consider Bliss’s next point. “Few,” she observed, “could appreciate the fact that only a decade earlier, men and women were not allowed to enter the dormitories of the opposite sex at all.”
The original parietal rules meant that the women of Radcliffe, Harvard’s sister college, could be present in the Harvard Houses only between the hours of 4 and 7 p.m. Robert Watson, a Harvard dean, explained at the time: “We have to watch the mores of our students. I do not want to see Harvard play a leading role in relaxing the moral code of college youth.” Indeed, he went on to say that “the college must follow the customs of the time and the community.…We cannot have rules more liberal than a standard generally accepted by the American public.”
Is there a single standard generally accepted by the American public today? For most of the country—with exceptions in deeply religious Jewish, Christian, and Islamic communities—ours is not an age that concerns itself with the amount of time that men and women spend together in solitude. But that doesn’t mean our era isn’t concerned with the moral development of our youth. On the contrary, leaders of America’s elite institutions today are as preoccupied with strengthening the souls of their charges as were the men who designed the parietal codes all those years ago. Only their aim is not sexual purity anymore, but rather social diversity. It is the heart and soul of the moral vision of our times, and administrators today are no less determined to see that students hew to that standard. But in their effort to serve in loco parentis in this fashion, educators are leaping across ethical—and possibly, legal—lines.
The fraternity-like final clubs have always been difficult to get into, much like Harvard itself. And for many years, the all-male final clubs were certainly characterized by discrimination. In a 1965 piece for the Crimson, Herbert H. Denton Jr., then an undergraduate, noted that while “the tacit ban on Jews has been relaxed in most clubs,” the “ban on Negroes is still in effect.” The same cannot be said today; while several of the final clubs are trying to retain their character by remaining single-gender organizations, they do not screen would-be members on the basis of race or religion.
Nonetheless, the administration has determined that they espouse values and ideas contrary to the Harvard spirit and must consequently be treated as an anachronistic wrong to be extirpated. In a statement issued in December, President Faust (along with William F. Lee, senior fellow of the Harvard Corporation) declared that
the final clubs in particular are a product of another era, a time when Harvard’s student body was all male, culturally homogenous, and overwhelmingly white and affluent. Our student body today is significantly different. We self-consciously seek to admit a class that is diverse on many dimensions, including on gender, race, and socioeconomic status.
The clubs have strict rules about speaking with the press, and every member I spoke with—both former and current students—did so on the condition of anonymity. Many brought up the topic of diversity, noting that in their experience, the members of their clubs were diverse in both ethnic and socioeconomic respects. Members of multiple clubs told me about policies under which an inability to pay club dues has no bearing on whether or not a student will be accepted. Indeed, one went so far as to note that the financial-aid offer is blatantly highlighted during the initiation process, so that those lower on the socioeconomic ladder are not even temporarily burdened by the misconception that their financial status might affect their membership.
The final clubs, like Harvard itself, may indeed be a product of another era. But just as Harvard has evolved, the final clubs have changed. Faust, Lee, and all the actors in the anti-final-club camp ignore this. They also espouse a position that is as illogical as it is incoherent: Faust and Lee claim both that “students may decide to join a USGSO and remain in good standing” and that “decisions often have consequences, as they do here in terms of students’ eligibility for decanal endorsements and leadership positions supported by institutional resources.”
Most parents would not believe that their sons and daughters were in “good standing” if they came home from campus for winter break and told them they would be unable to be editor of the newspaper, captain of the debate team, or eligible for a Rhodes or Marshall scholarship. Yet Faust and Lee insist that “the policy does not discipline or punish the students.” It merely “recognizes that students who serve as leaders of our community should exemplify the characteristics of non-discrimination and inclusivity that are so important to our campus.” It’s hard to believe that Faust and Lee might honestly think that excluding students from leadership roles or prestigious postgrad opportunities would be construed as anything other than a punishment.
So why the insistence to the contrary? If the final clubs are, in the administration’s eyes, archaic, narrow-minded, discriminatory organizations, why not come out with an honest statement that calls for disciplining the students who dare to participate in these institutions? Lewis, the former dean, has explained this by making reference to what Faust and Lee do not mention—namely, Harvard’s Statutes—the internal bylaws governing the institution. Lewis cites part of the 12th statute, which lays out that “the several faculties have authority…to inflict at their discretion, all proper means of discipline.” He notes that “by declaring that ineligibility for honors and distinctions are ‘not discipline,’ what President Faust and Mr. Lee are saying is that the Statutes are not implicated, the matter is not one for the Faculty to decide, and no Faculty vote is needed to carry out the policy.” Indeed, Lewis notes that “it is important that the…policy not be discipline, because if it were discipline, and disciplinary action were taken against a student without a Faculty vote authorizing that policy, that student could challenge the action as not properly authorized.”
There is something else the Faust-Lee statement does not reference—and tellingly. In the beginning of the Harvard administration’s war on final clubs, concerns over sexual assault seemed to form the core of the issue. The Task Force on Sexual Assault Prevention reported that 47 percent of female college seniors who were in some way involved in final clubs—either because they attend events at the male clubs, or because they themselves are members of female clubs—said they had experienced “nonconsensual sexual contact since entering college.” Since “31 percent of female Harvard seniors reported nonconsensual sexual contact since entering college,” the report said, the data proved that “a Harvard College woman is half again more likely to experience sexual assault if she is involved with a Club than the average female Harvard College senior.” But Harvard’s sexual assault survey also found that 75 percent of “incidents of nonconsensual complete and attempted penetration reported by Harvard College females” happened in…Harvard dorms.
The report is sloppy and lumps together things that are not alike. For example, the Porcellian—Harvard’s oldest final club—does not allow any nonmembers through its doors. Charles Storey, who was then the Porcellian’s graduate president, provided a statement to the Crimson in which, among other things, he claimed that the club was “being used as a scapegoat for the sexual assault problem at Harvard despite its policies to help avoid the potential for sexual assault.” The Porcellian, he said, was “mystified as to why the current administration feels that forcing our club to accept female members would reduce the incidence of sexual assault on campus.” Indeed, Storey said, “forcing single gender organizations to accept members of the opposite sex could potentially increase, not decrease the potential for sexual misconduct.”
A day later, Storey apologized for his statement. A few days after that, he resigned as the Porcellian’s graduate president. His reasoning was admittedly inelegant, as it could be interpreted to suggest that club members would be unable to restrain themselves from committing sexual assault should women enter their domain. But Storey was not incorrect in pointing out that, by definition, women could not be subjected to unwanted touching in the Porcellian clubhouse if they were not allowed inside. For a club like the Porcellian, then, where instances of male-on-female sexual assault within the house are currently nonexistent, going co-ed would inherently guarantee that the opportunity for assault would expand. And that is why it is noteworthy (Storey’s humiliation notwithstanding) that the Faust-Lee declaration eliminated the attack on the final clubs for their ostensibly heightened role in unwanted sexual conduct. And why the entirety of the case against them now rests on their failure to hew to the administration’s convictions on gender egalitarianism.
The role that final clubs play in Harvard social life has been a contentious topic for decades. The perception has long been that socially, the members of Harvard’s male final clubs have too much power. On a campus with limited space for social gathering, the final-club mansions are often the source of the college’s most sought-after nightlife. Arguments have been made consistently over time that the exclusionary practices of the clubs—they typically accept only 10 to 25 new members a year—make for unpleasant and unfair campus social dynamics. But again, this conversation is happening at Harvard, an institution that prides itself on its prestige and exclusivity, and which accepted a mere 5.2 percent of applicants to its class of 2021.
Lewis, the former dean, is not exactly a natural ally for the clubs. He told me that he was “pretty tough with them” during his tenure, and that he was “instrumental in trying to get some of the bad behavior of some of the final clubs under control.” The issues that arose during his time as dean seem to have mostly been related to parties that grew too loud or students who became too drunk. But confronting specific problems as they arise is an approach entirely different from issuing an all-encompassing sanction on free association. At Harvard, specifically, the implications of such a policy could have long-term ramifications. “As an educational institution that, for better or worse, graduates more than its fair share of the leadership of the country, in both industry and technology, and government and law,” Lewis said, “we should not be teaching students that the way you control social problems is by creating bans and penalties against joining organizations.” His “bigger worry,” he said, is that “students will come to think it’s a reasonable thing to do.”
Beyond all these considerations lies an additional layer of complication: legality. Even as a private institution, Harvard’s autonomy may not be as absolute as it seems to believe. I spoke by phone with Harvey Silverglate, a lawyer who is currently representing the Fly, one of the clubs. He told me that “Harvard is misinformed if it has been told by its lawyers or by the office of the general counsel that it can do what it is trying to do, that is to say, punish a private off-campus club, punish Harvard students for joining a legal off-campus club, that is not on Harvard property, and over which the university has no control.” If Harvard goes forward with its plan, Silverglate noted, it will have “overstepped its legal powers.” He spoke extensively about the specific challenges that Harvard would face under Massachusetts state law, explaining that there are free-speech provisions in the Massachusetts constitution that are more protective of speech than the First Amendment to the U.S. Constitution. In fact, Silverglate noted, the state’s supreme court has ruled in several instances that Massachusetts’s declaration of rights “limits the power of private institutions over the people it governs.”
In its desire to avoid a lawsuit, the Harvard administration—or the team of lawyers that doubtless advised it—carefully crafted a rule that would apply equally to men and women. Had the sanctions applied solely to male-only clubs, the university would likely have been faced with a federal lawsuit or an investigation into gender discrimination. Yet even though the male final clubs were the sanctions’ primary target, the sanctions seem so far to have done the most harm to Harvard’s fraternities, sororities, and female final clubs.
One female student I spoke with is a member of one of the originally all-female final clubs that has recently gone co-ed rather than face the sanctions. She explained that within the club, there is a “feeling of resentment.” The USGSOs were all given the choice to either go co-ed or face the sanctions. “The girls clubs,” she told me, “have accepted it because they don’t have a lot of money.” While the male clubs have old and powerful alumni—and the money that comes with them—the female clubs are young and, by comparison, poor. “The boys can all sue,” she said, but “the girls clubs don’t have that privilege.” Having men in the club has certainly changed things for her. She explained: “It’s definitely different—I loved having an all-female space, and there was lots of merit to that socially and even in terms of networking.… I had this strong female network, and that was kind of eroded by going co-ed.”
Sorority members are facing similar challenges, but unlike the male and female final clubs, which do not answer to a national body, they are unable to adapt as they see fit. Sororities and fraternities cannot go co-ed without violating the rules of their national charters; the sanctions policy therefore affects their organizations most.
I spoke by phone with Evan Ribot, a Harvard alumnus from the class of 2014 who was president of the fraternity AEPI while on campus. Stressing that he could speak only for himself, and not on behalf of AEPI or the AEPI alumni network, he told me there was a “tenuous relationship between the administration and the fraternities” when he was on campus. “There was a sense that we operated in a gray zone because the university knew we existed,” he told me. “So we weren’t underground, but we also were not a recognized group.” As a result of the sanctions, AEPI at Harvard has dissolved itself and become a new organization, the gender-neutral “Aleph.” The organization is no longer affiliated with AEPI national.
“It’s a shame,” he said, “because some of my best friends were looking to join AEPI not because they wanted to be in an exclusionary single-sex organization but because they were looking for a place to fit in on a challenging campus.” The same is true for women: Ribot noted, “The sororities were an avenue for women to find their own spaces—not because they were looking to exclude men but because there is an inherent value to a group of women hanging out, just like there can be an inherent value to have men hanging out.… It’s not rooted in exclusion.”
In some circumstances, it appears, Faust agrees. She herself attended Bryn Mawr—a women’s college—and serves as a special representative on the board of trustees of her alma mater. “It is impossible to figure out how Faust can reconcile helping to provide that singular experience to women while at the same time denying any portion of that experience to the women she is responsible for at Harvard,” said Richard Porteus, graduate president of the Fly Club. He graduated from Harvard in 1978 and was elected a member of the Fly Club in 1976. He spoke of the diversity of his club class and reflected that while “there were some people whose names also appeared on Harvard buildings,” he “didn’t come from wealth” and was not only elected to the club but became an officer. Porteus explained that “one’s socioeconomic standing did not matter.” All that mattered, he said, was “the potential for forming life-long friendships.”
The debate over Harvard’s final clubs would have taken place in an entirely different framework if we were still living in a time when university administrators saw their role as fill-in parents—and if that role were viewed as a comfort by the parents themselves. But today’s universities are, for better or worse, largely a free-for-all. The curtailing of certain freedoms thus becomes all the more apparent, and all the more disturbing, when measured against the backdrop of a prevailing “you do you” attitude. The core of the administration’s position seems to rest on an overwhelming need to groom a student body that shares all the same beliefs and values—those that echo the principles that the administration itself espouses. If it deems single-sex social groups discriminatory, then there is no room for those students who see them not as beacons of gender exclusivity but as opportunities for friendship and support. In an educational institution, the only kind of diversity that should matter is diversity of thought. That’s a lesson the Harvard administration desperately needs to learn.
Harvard’s own questionable record on diversity is currently under harsh scrutiny—and not because of the behavior of clubs that have a tenuous connection to the university’s educational mission. Research has demonstrated that to gain entry into an institution like Harvard, Asian-American applicants must score an average of 140 points higher on their SATs than white applicants, 270 points higher than Hispanic applicants, and an astonishing 450 points higher than African-American applicants. The Justice Department has taken note and is investigating the matter. In December, the New York Times reported that the university has agreed to give the DOJ access to applicant and student records. That Harvard’s administration has become consumed with the goal of bringing an end to institutions that fail to meet a 21st-century standard for diversity is not without its savage ironies.
1 Meaning something a dean does.
Review of ‘In the Enemy’s House’ by Howard Blum
Nearly a decade would pass before the FBI and NSA began to release the actual Venona transcripts in 1995. In the years since, a number of books (including several co-authored by me) have analyzed the Venona revelations, while others have mined Communist International files and the KGB archives. Virtually all the major mysteries about Soviet espionage in the United States have been resolved by these once-secret documents. In addition to confirming the guilt of the Rosenbergs, Alger Hiss, Harry Dexter White, and virtually every other person accused of spying in the 1940s by the ex-spies Whittaker Chambers and Elizabeth Bentley, these books have exposed several important and previously unknown agents such as Theodore Hall, Russell McNutt, and I.F. Stone. Indeed, the only accused spy who turns out to have been innocent (although he was a secret Communist almost up until the day he took charge of developing an atomic bomb) was J. Robert Oppenheimer.
A handful of espionage deniers, centered around the Nation magazine, continue to argue, against all evidence and logic, that Alger Hiss is still innocent. The Rosenberg children continue to distort their mother’s role in espionage. And some hard-core McCarthyites still demonize Oppenheimer. But in truth, the bloody battle over who spied is over.
Lamphere’s book emphasized his collaboration with the Army cryptographer Meredith Gardner in the hard work of unraveling the spy rings using the Venona cables. Employing those 1986 recollections as a template, the Vanity Fair contributor Howard Blum has now given us In the Enemy’s House, an overly dramatized but largely accurate account of the friendship between the outgoing, hard-driving, atypical G-man Lamphere and the shy, scholarly, soft-spoken Gardner as they worked together to find and prosecute those Americans who had betrayed their nation.
Blum intersperses the American hunt for spies with the recollections of Julius Rosenberg’s KGB controller, Alexander Feklisov, who ran Rosenberg in 1944 and 1945 and supervised Fuchs in Great Britain from 1947 to 1949. Feklisov watched with mounting dread as the KGB’s atomic spy networks were exposed, both because of Venona and the KGB’s own blunders—most notably because the Russians used Harry Gold, Fuchs’s contact, to pick up espionage material from David Greenglass, who was Julius Rosenberg’s brother-in-law and part of his spy ring.
Blum also uses information from many of the scholarly accounts that have already appeared, although not always carefully. His only new material comes from interviews with members of the Lamphere and Gardner families and access to their personal notebooks. But while he provides a list of his sources for each chapter, Blum does not use footnotes, so that although many of the personal and emotional reactions to the investigation he attributes to people, and especially to Lamphere, presumably come from these sources, it is never clear whether they are based on contemporaneous written notes or third-party recollections of events more than 50 years in the past.
Such objections are not mere academic carping. While Blum successfully turns this oft-told story into an interesting and suspenseful narrative, his approach comes at a cost. For example: He is eager to transform Lamphere from a diligent and resourceful FBI investigator who often chafed at the bureaucracy and petty rules that governed the agency into a full-blown rebel who almost singlehandedly forced the FBI to take up the problem of Soviet espionage. To do so, Blum suggests that until the FBI received an anonymous letter in Russian in August 1943 alleging widespread spying and naming KGB operatives, the Bureau regarded the investigation of potential Soviet spies as useless because allies did not spy on each other.
This is wrong. In fact, the FBI had already mounted two large-scale investigations—one of Comintern activities in the United States undertaken in 1940 and the other of attempted espionage directed at atomic-bomb research at the Radiation Laboratory in Berkeley, which began in early 1943. Both had unearthed information on atomic espionage. These included discomfiting details about Robert Oppenheimer’s Communist connections; efforts by Steve Nelson, a CPUSA leader in the Bay Area in contact with known Soviet spies, to obtain atomic information; and contacts between a Soviet spy and Clarence Hiskey, a chemist on the Manhattan Project.
At one point, Blum renders one of Hiskey’s contacts, Zalmond Franklin, as Franklin Zelman and mischaracterizes him as “a KGB spook working under student cover.” In fact, Franklin was a veteran of the Abraham Lincoln Brigade working as a KGB courier. In any event, the FBI neutralized this threat by transferring Hiskey from Chicago to a military base near the Arctic Circle, thereby scaring his scientific contacts (whom he had introduced to a Soviet agent) into cooperating with the Bureau.
There are other occasions where Blum demonstrates an uncertain grasp of the history of Soviet intelligence. He misstates Elizabeth Bentley’s motives for defecting; angry at being pushed aside by the Soviets, she feared she was under FBI surveillance. And he claims that only three witnesses testified against the Rosenbergs (Ethel’s brother and sister-in-law and Harry Gold), which leaves off others (Bentley, Max Elitcher, and the photographer who had taken passport photos for the family just prior to their arrests).
Blum’s account of the way the KGB encoded and enciphered its messages is oversimplified. The mistake that made it possible for American counterintelligence to break into the Soviet messages was the Soviet intelligence services’ reuse of some one-time pads. Not all of the one-time pads were used twice, and only when a pad had been used twice could American codebreakers strip the random numbers from the message sent by Western Union. That process allowed Gardner to attempt to break the underlying code. The vast majority of the Soviet cables remained unbreakable, and many could be only partially decrypted. And most of the decrypted cables had nothing to do with atomic espionage but concerned the stealing of diplomatic, political, industrial, and other military secrets.
Partly to heighten suspense, Blum misrepresents or distorts the timelines on matters involving Klaus Fuchs and the Rosenberg ring. He harps on Lamphere’s frustration about not being able to use the decrypts in court, but the FBI had concluded it was highly unlikely that they could be legally introduced into evidence without exposing valuable cryptological techniques, a conflict Lamphere surely understood. That very problem helps explain the FBI’s inability to prosecute Theodore Hall, the youngest physicist at Los Alamos, who had been exposed as a Soviet spy. Blum mistakenly suggests that the FBI agent in Chicago who investigated Hall was unaware of Venona. But that agent did know; the problem was that when the FBI began its investigation in the spring of 1950, Hall had temporarily ceased spying. He was eventually brought in for questioning, but neither he nor his one-time courier and friend, Saville Sax, broke and confessed. Lacking independent evidence, the FBI was stymied.

The most significant flaw of In the Enemy’s House is its assertion that Ethel Rosenberg’s conviction and execution were monumental acts of injustice that disillusioned both Lamphere and Gardner, soured their sense of accomplishment, and left them consumed by guilt. It is true that Lamphere had opposed Ethel’s execution and had drafted a memo that J. Edgar Hoover sent to the judge urging she be spared as the mother of two young sons. Gardner had translated one Venona message that indicated Ethel knew of her husband’s espionage but because of her delicate health “did not work,” which Gardner interpreted to mean she was not part of the spy ring. But, as Lamphere pointed out in his own book, her brother David Greenglass had testified to her involvement in his recruitment. And KGB messages available following the collapse of the Soviet Union now make clear that Ethel had played a key role in persuading her sister-in-law, Ruth Greenglass, to urge her husband to spy.
In The FBI-KGB War, Lamphere never evinced deep moral qualms about their fate. He expressed a more complex set of emotions. “I knew the Rosenbergs were guilty,” he writes, “but that did not lessen my sense of grim responsibility at their deaths.” And he calls claims that the case was a mockery of freedom and justice both “abominable and untruthful.” Blum insists that Gardner was “stunned” by their deaths and quotes him as saying somewhere: “I never wanted to get anyone in trouble” (which would suggest a monumental naiveté if true).
Blum’s claim that Lamphere and Gardner had condemned themselves “to another sort of death sentence” for their roles is a wild exaggeration. So, too, is his charge that Lamphere believed that in the Rosenberg case the United States “might prove to be as ruthless and vindictive as its enemies.”
Finally, Blum links Lamphere’s decision to leave the FBI for a high-level position in the Veterans Administration to a sense of lingering guilt. But in his own book, Lamphere attributes the move to the frustration he felt once he realized he would be stuck as a Soviet espionage supervisor for years to come. Blum links Gardner’s brief posting to Great Britain to work with its code-breaking agency as an effort to escape his guilt, but he never mentions that Gardner returned to work at the National Security Agency for many years.
Retired intelligence agents friendly with both men have no recollection of their expressing regret about their role in the Rosenberg case. It is possible that they may have made some such comment to a family member or jotted down something in a notebook, but without very specific and sourced comments, the idea that they ever regretted their work exposing Soviet spies is nonsense that mars Blum’s otherwise entertaining account.
What we got instead was a combination of celebrity puffery and partisan cheap shots at the Trump administration. The politics of North and South Korea, and the equally complex and intricate relations between these two countries and China, Japan, Russia, and the United States, were reduced to just another amateur sport. Ignorant and supercilious reporters transposed the clichés of the electoral horse race, complete with winners, losers, buzz, and sick burns, to nuclear brinkmanship. Major news organizations could not have done Kim’s job any better for her.
A representative example was written by no less than seven CNN reporters and researchers who concluded, “Kim Jong Un’s sister is stealing the show at the Winter Olympics.” The lead of this news article—I repeat, news article—was the following: “If ‘diplomatic dance’ were an event at the Winter Olympics, Kim Jong Un’s younger sister would be favored to win gold.” Gag me.
Then the authors let loose this howler: “Seen by some as her brother’s answer to American first daughter Ivanka Trump, Kim, 30, is not only a powerful member of Kim Jong Un’s kitchen cabinet but also a foil to the perception of North Korea as antiquated and militaristic.” Kim’s “Kitchen Cabinet”—why, he’s just like Andrew Jackson. And how could anyone have the “perception” that North Korea is “antiquated” and “militaristic”? Sure, they might threaten the world with nuclear annihilation. But have you seen Donald Trump’s latest tweet?
New York Times reporters are either smarter or more efficient than their peers at CNN, because it took only two of them to write “Kim Jong-Un’s sister turns on the charm, taking Pence’s spotlight.” Motoko Rich and Choe Sang-Hun described Kim’s “sphinx-like smile” and “no-nonsense hairstyle and dress, her low-key makeup, and the sprinkle of freckles on her cheeks.” They contrasted the “old message” of Vice President Pence, who has no freckles, with Kim’s “messages of reconciliation.” They cited one Mintaro Oba, a “former diplomat at the State Department specializing in the Koreas, who now works as a speechwriter in Washington.” What they did not mention is that Oba worked at Barack Obama’s State Department and writes speeches for a Democratic firm. Not that he has an axe to grind or anything.
The typical Kim puff piece began with her charm, grace, poise, statesmanship, and desire for unity and peace. Then, 10 paragraphs later, the journalist would mention that oh, by the way, North Korea is a totalitarian hellscape that Kim’s family has been plundering for over half a century. For instance, describing the South Korean reaction to Kim, Anna Fifield of the Washington Post wrote,
They marveled at her barely-there makeup and her lack of bling. They commented on her plain black outfits and simple purse. They noted the flower-shaped clip that kept her hair back in a no-nonsense style. Here she was, a political princess, but the North Korean “first sister” had none of the hallmarks of power and wealth that Koreans south of the divide have come to expect.
A political princess! It’s like Enchanted, except with gulags and famine.
Deep in Fifield’s article, however, we come across this sentence: “Certainly, Kim, who is under U.S. sanctions for human rights abuses related to her role in censoring information, was treated like royalty during her visit.” Just thinking out loud here, but maybe human-rights abuses and censorship deserve more than a glancing reference in a subordinate clause. Fifield went on to say that “Vice President Pence, who was also in South Korea for the opening of the Winter Olympics but studiously avoided Kim, had worried in advance that North Korea would ‘hijack’ the Olympic Games with its ‘propaganda.’” Now where could he have gotten that idea?
The fascination with Kim revealed both the superficiality and condescension of much of our press. Fifield’s colleague, national correspondent Philip Bump, tweeted out (and later deleted) a photo of Kim sitting behind Pence at the opening ceremonies with the comment, “Kim Jong Un’s sister with deadly side-eye at Pence,” as if he were being snarky about an episode of Real Housewives.
When Kim departed the Olympics, Christine Kim of Reuters wrote an article headlined, “Head held high, Kim’s sister returns to North Korea.” Here’s how it began:
A prim, young woman with a high forehead and hair half-swept back quietly gazes at the throngs of people pushing for a glimpse of her, a faint smile on her lips and eyelids low as four bodyguards jostle around her.
The Reuters piece ends this way: “Her big smiles and relaxed manner left a largely positive impression on the South Korean public. But her sometimes aloof expression and high-tilted chin also spoke of someone who sees herself ‘of royalty’ and ‘above anyone else,’ leadership experts and some critics said.” Thank goodness for the experts.
Kim Jong Un could not have anticipated more glowing coverage for his sister, for the robot-like cheerleaders he sent alongside her, or for his transparent attempt to drive a wedge between South Korea and its democratic allies. “North Korea has emerged as the early favorite to grab one of the Winter Olympics’ most important medals: the diplomatic gold,” wrote Soyoung Kim and James Pearson of Reuters, who called Pence “one of the loneliest figures at the opening event.” Quoting on background “a senior diplomatic source close to North Korea,” Will Ripley of CNN wrote an article headlined, “Pence’s Olympic trip a ‘missed opportunity’ for North Korea diplomacy.” But who was Ripley’s source? Dennis Rodman?
What most disturbed me was the difference in coverage of Kim Yo Jong and Fred Warmbier, whose son Otto died last year after being tortured and held captive in North Korea. Fred Warmbier accompanied Pence to the Olympics as a reminder of the North’s inhumanity and menace. Journalists ignored, dismissed, and even criticized this grieving man. Among many examples of thoughtlessness and callousness was a Politico tweet that read: “Fred Warmbier criticizes North Korean Olympic spirit.” He must have missed Kim’s freckles.
Washington Post columnist Christine Emba asked: “Is Otto Warmbier a symbol, or a prop?” You see, Emba wrote, “Otto’s father may want his son to be a symbol. But the nature of his escort risks turning him into a prop.” Why? Well, because “symbols stand for something” while “props are used by someone.” And “the Trump administration, which hosted Warmbier, is made up of shameless instrumentalizers who have made clear that they stand for very little.” So there you go. We should be skeptical of Fred Warmbier because Trump.
Emba’s not all wrong. There were a lot of props and tools at the Olympics. You could find them in the press box.
I was nine when I made my first trip to Israel in June of 1968, almost exactly a year after the Six-Day War. My parents had been in Italy the autumn before, and while vacationing in Rome they learned that there were inexpensive flights leaving twice a week for Tel Aviv. The whole of Israel was giddy at the time, unburdened for the moment of its insecurities by the stunning success of the Six-Day War, which had increased the total size of the young, besieged nation by more than two-thirds.
My mother finally found a use for the crumpled phone numbers of distant Israeli relatives she’d been carrying in her purse for the past several months, relatives on both her father’s and her mother’s side, Romanians all. Osnat, my mother’s second cousin once removed, had had the misfortune of remaining in Europe while the Nazis were on the move. She spoke of having spent five days hiding from the Germans in the liquid filth of an outhouse and breathing through a tube when they came near.
Meeting scores of warm and loving relatives and having been feted by them as “our dear American Mishpacha” was partly why my parents were both so taken with Israel—that and the Israeli people themselves, the Sabras, so proud and brash, and the ancient beauty of the land. With some talk of perhaps making Aliyah, or at least exploring the idea of our moving to Israel, my parents, my siblings, my first cousins, and my Grandma Rose and her younger brother, Uncle Sol, gathered up a month’s worth of warm-weather clothing and flew en masse to Tel Aviv. We were greeted at Lod Airport by a crush of relations, all of them clamoring to hug and kiss us. And then as the sun descended into the Mediterranean and night fell over the coastal plain, they drove us all north in a rag-tag caravan of tiny old Fiats, Renaults, and Peugeots to the beach town of Netanya, where we stayed for the entire summer in a tiny flat just behind the home Osnat shared with her husband, Shlomo.
Days later, I’m with my father and my brother Paul at the Wailing Wall. It’s weird to think that only a week ago I was at home watching Gilligan’s Island and looking for my dad’s Japanese Playboys in the bottom drawer of his bedroom closet during the commercials. Now, I’m in Jerusalem, in the glaring sun beneath this gigantic wall of stone. When I’m sure no one’s looking, I put both hands on the wall, and then I touch my forehead to it. The stones are colder than you’d think they’d be in all this heat.
For reasons I don’t understand, I start to cry. I’d be embarrassed if my brother or my dad saw me like this, so I pretend that I’m praying. I wonder, though, am I just crying because you’re supposed to cry here? If the rabbis from the Talmud Torah had shown me pictures of some random bridge in Saint Paul from the time I was in nursery school, would I have cried at that, too?
When I look up at the wall again, I see some birds’ nests and a million pieces of paper with people’s prayers in them, all stuffed into the cracks between the stones. Everyone who comes here wants God’s attention. I’ll bet He loves all the notes. They probably make Him feel like someone gives a shit about the cool stuff He does.

I had been born a Jew in Minneapolis. Growing up Jewish there wasn’t a good or a bad thing any more than growing up with snow was good or bad. It just was. Because we Jews were so few, being one made us all feel different. It wasn’t a difference we’d asked for or earned, either. It, too, just was. Becoming somewhat Jew-centric was natural for us; staying close to one another, close to our causes and to our history, was simply a natural reaction to being the “other.”
It’s 1970 and I’m in junior high, on my way to English, when I see Nelson Gomez, Stuey Nyberg, and Craig Walner. They’re hip-checking kids into the tall metal lockers that line the hall. They are the three kings of Westwood Junior High’s dirtball dynasty, young hoodlums who regularly and without fear skip school, smoke filter-less Marlboros, and shout “Fuck you, faggot” to students and staff members alike, save perhaps for Mr. H, the anti-Semitic shop teacher with whom they have forged an abiding friendship.
To the left and right of me, hapless students fly, body-slammed with alarming speed into the lockers by the three of them. It doesn’t escape my notice that these unfortunates have not been chosen randomly. There goes Brian Resnick. Next it’s Shelly Abramovitz and then Alvin Fishbein. As I round the corner, Stuey Nyberg grabs my second cousin, Elaine Kamel, by the shoulders and slams her face-first into her own locker. She and they were selected for no other reason than their Jewishness.
I grab Stuey by his neck with both hands and I claw at him until my fingernails pierce his pale skin and blood spurts from his jugular. Now I take the clear plastic aquarium algae scraper that I made in Mr. H’s shop class this very morning and use it to gouge out one of Nelson Gomez’s eyeballs, making sure he can see it in the palm of my hand with his remaining eye. Craig Walner tries to run, but I catch him by his mullet and shove his head into Elaine Kamel’s locker. I slam her locker door on him again and again. I don’t stop until his head is severed from his neck…
…and my daydream comes to an abrupt halt when Stuey Nyberg says, “Himmelman, it’s your turn to meet the lockers, you fucking kike.” Without a word of warning, he clouts me with a stinging jab right to my nose. It’s the first time I’ve ever been hit in the face, and while it’s agonizing, the blow is also somehow euphoric. I’m supercharged with adrenaline; I feel as if I’m on fire. But of course, I don’t hit Stuey back. God, no. I simply stand there glowering at the three of them, blood dripping from my large Jewish nose. And for the first time in my life, I feel downright heroic. I look around me and I see that, for now at least, our bitterest enemies have stopped hip-checking what feels like the entire Jewish nation.
Six months later it’s summer vacation, and we Himmelmans fly from Minneapolis to New York and connect with a nonstop to Tel Aviv. In less than two days, I’m on a towel on the beach in Netanya looking out at the cerulean blue of the Mediterranean.
As I lie on the hot sand, Mirage fighter jets with blue Jewish stars emblazoned under their wings suddenly streak so low across the water that I can smell jet fuel. As they scream overhead, the whole beach seems to shake. With a strange sense of clannish pride, I laugh and stare up at the planes as they accelerate and finally rocket out of range.
My father died, after suffering from Stage IV lymphoma for five years, in 1984. I was 25 years old. A year later, I was living in the Twin Cities working on music with my band when I received a call from a woman named Ruth Grosh. She asked if I’d be willing to write some songs for a therapeutic teddy bear she’d dreamed up called Spinoza Bear. Ruth, a bona fide subversive by nature and New Age before anyone had even come up with the term, named her ursine brainchild after Baruch Spinoza, the heretical 17th-century Jewish philosopher. Spinoza was seen as harmful to, and at odds with, the views of the Jewish establishment of Amsterdam at the time. Eventually, both he and his writings were placed under a religious ban called a “cherem” by the Dutch Jewish community where he lived and worked. Aside from the fact that he was reviled for his modernist views, no one had much bad to say about him personally, except that “he was fond of watching spiders chase flies.”
The songs were to play from a battery-operated tape deck that fit into a zippered pouch beneath the soft brown fur of the bear’s stomach. A red heart-shaped knob on the bear’s chest served as the on-off switch. By today’s standards, the technology would seem crude, but at the time, with just a modicum of suspension of disbelief, it was possible to feel that the voice of the bear along with the music was issuing directly from its cheery muzzle. As to whom to hire to be the voice of Spinoza Bear, it was decided after some deliberation that not only would I write and sing the songs, but I should also be the kind, concerned voice of the bear itself.
Each of the dozen or so cassette tapes that were eventually recorded had themes of self-empowerment, a kind of you-can-make-it-if-you-try bent. After just two years, the bear became a huge success—not as some plebeian, retail teddy, but as something greater. Spinoza Bear soon found his way into hospitals, health clinics, and centers for healing of all kinds. By holding the bear and listening closely to his stories and songs of wellness and inner light, rape victims, grief-stricken parents, bone-lonely pensioners, autistic kids, as well as children on cancer wards all across America found it possible to relieve some of their pain and fear.
Aside from the good works, the bear provided me with twenty grand in seed money that our band, Sussman Lawrence, used to set sail for New York City in 1985.
We were five new-wave rockers in an Oldsmobile Regal Vista Cruiser wagon, and two roadies in a spanking-new Dodge cube van. The van, we were overjoyed to discover, had been hastily christened from bumper to bumper with graffiti sometime during our 45-minute debut set at CBGBs, the legendary East Village rock-and-roll club, only days after we arrived on the East Coast.
Given the high cost of living in New York City, New Jersey seemed the next best thing. As it turned out, there were very few homeowners interested in renting a house to a band. I hatched a plan, which involved my calling on a middle-aged real-estate agent named Carol we’d found advertising in a Bergen County newspaper. When I finally got her on the line, I explained to her that we were medical students enrolled that fall at nearby Rutgers University and in need of a quiet place to live and study.
The following morning, as the rest of the guys waited outside in the Oldsmobile, my cousin Jeff, our band’s gifted keyboard player, and I showed up at Carol’s office in suits and ties we’d purchased at a local thrift shop, carrying responsible-looking briefcases. I had boned up on some medical terms as well, orthopedic surgical techniques mostly, in case she needed proof that we were actually who we were claiming to be. But there had been no need. We had the cash and seemed honest enough—“honest enough” to let her know that a few of us were also part-time musicians and that there might be some music playing, quietly of course, from time to time, just to ease the strain of our intense studies.
Two days later, Jeff and I woke up early, signed the lease papers, and pulled our now multihued, invective-laden cube van into the driveway of 133 Busteed Drive in Midland Park, New Jersey.
Trying for as much discretion as possible, lest the neighbors notice anything out of the ordinary, we backed the van up to the garage, lugged the gear up a short flight of stairs and into a large, unfurnished living room. Once upstairs, we began unloading beer-stained amplifiers, at least a dozen guitar cases, a drum set packed tightly into three large metal flight cases, assorted keyboards, and an entire public-address system and lighting rig. Aside from some bad scrapes in the hardwood floor and a gaping hole or two in the walls on our way in, the load-in was accomplished with speed and efficiency. We were up and practicing by late afternoon, our new-wave rock blaring fast and loud into the New Jersey autumn night.
A month after settling in, Ruth Grosh reached me at dinnertime by long distance, in the squalor of our band-house collective. After some catching up, she gently let me know that some psychic friends had explained to her that I had just a few months left on the planet. “What!” I said, “they told you I was gonna die?” Ruth was practiced at this kind of thing, it seemed, although her nonchalance about my imminent demise didn’t make me feel any less concerned. “They asked me to find out if you’d like to come in for a free consultation,” she said. I was due to fly back to Minneapolis later that week anyway, and I figured I might as well find out what all this planet-leaving nonsense was about.
Back home, on the morning of my appointment with the psychics, I found my mother, who was normally quite composed, flitting around the kitchen and singing quietly to herself. She had agreed to a lunch date that afternoon with the contrabass player from the Minnesota Symphony, her first since my dad had died almost two years before.
“Does this blouse look good on me?” she asked. “Be honest.”
“Yeah, it looks great,” I said.
I was uncomfortable in the extreme watching my mother dart around the house like a schoolgirl primping for a date with some dude who wasn’t my dad. True, it’d been two years since he’d died, and given all that she’d been through, it wasn’t like she didn’t deserve to live a little. After all, I thought, it was just lunch. But the more I saw of this weird, giddy side of her, the less I liked it. A car honked. It was Ruth.
She and I rode wordlessly as Japanese New Age wooden flutes intoned from her car stereo. We arrived after twenty minutes at the northern suburb of Brooklyn Center, and Ruth parked her car near a long row of newly built town houses. A man and a woman in their mid-forties greeted us at the front door, both smiling in a scary, off-putting way. They appeared to be a kind of husband-and-wife psychic tag team, and they rushed headlong into the consultation by asking if I’d like to give them some names of people I knew.
“We’ll be able to tell you all about them,” the woman said and smiled again. I thought it was just some cheesy method of showing off.
“The first names are enough,” said the man.
“Okay, let’s go with Jeff,” I said.
My cousin Jeff is a musical genius, a pianist of remarkable facility, who has had to contend with neuromuscular tics most of his life. The two psychics were seated facing each other in cheap leather armchairs, and in an instant they were both precisely mimicking my cousin’s facial tics. I recognized each tic by the names Jeff and I had given them. When Jeff’s thumbs bent downward spasmodically, we called it “Southerner.” When his palms flexed upward in a sort of hand-waving motion, we called it “Reckless Greeter.” In another, with his eyebrows pinched together, lips compressed, and eyes blinking, Jeff looked like someone who was very curious about his environment. We called that one “Curious Man.” His most frequent tic was also his most unsettling. It involved his eyeballs rolling uncontrollably in their sockets, and we called that one “Round the World.” Suddenly, to my astonishment, the corners of both psychics’ mouths formed narrow half smiles. Their eyebrows began squeezing together; their eyes were blinking—open-shut-open-shut—perfectly mimicking Jeff’s Curious Man.
“The music, he can’t stop the music,” the woman shouted in excitement. Her husband, whose hands then began a remarkable imitation of Reckless Greeter, added, “Yes, good God, the music! Can’t you feel it just pouring out of him?”
I was thinking this had to be some kind of brilliant trick, albeit a devilish one. It was astonishing, yes, but I wasn’t yet convinced that they were real. Next, I said the name “Beverly,” my mother’s, and they both giggled. It’s disconcerting to see adults giggle at any time, but when a pair of middle-aged psychics giggle at the mention of your bereaved mother’s name, it’s triply so.
“She’s doing something she feels guilty about,” the woman offered.
“Yes,” said the man. “Something she’s afraid of doing, but it seems to us that she’s also very excited.”
Almost in unison, the psychics said, “She’s acting like a little schoolgirl today!”
How in hell could they have known what I’d just experienced myself for the first time in my life that very morning? If these two freaks had wanted my undivided attention, they sure as hell had it now.
The room fell silent. I didn’t dare speak. They had officially scared the living daylights out of me with their last trick. Soon, they broached the subject I’d come all this way to talk about.
“Is it your wish to leave the planet?” the woman asked, more casually than I would have imagined possible for someone questioning a fellow human being about whether he wanted to live or die.
I paused and breathed deeply for a minute or so. It was a question I stopped and thought about longer than a mentally stable person might have.
“No,” I finally told them, “I have no intention of leaving anytime soon.”
This seemed to relieve them. The man said, “The reason we’ve been so concerned about you is that we believe music is more important to you than you may be aware. It represents your very essence, and by working as single-mindedly as you have to get a record deal, with the kind of music you’ve been making with your band, you’ve been cheapening and compromising your integrity. You’ve been, in a sense, unfaithful to your muse. That’s what’s causing this spiritual disconnect and, should it continue, my wife and I both feel like it will shorten your stay here.”
His wife took over: “What you need to do is uncover a deeper, more honest expression in your music, something closer to the bone. We know you love the blues and reggae. We think it’ll be helpful to start playing music you love, rather than music you think will sell.”
By this time, tears were spilling down my cheeks. “There’s this song,” I began telling them, “that I wrote for my dad over two years ago on Father’s Day, that almost no one has heard. It’s something that was written with the sole intention of connecting with him before he died. It’s on a cassette tape, just sitting there on a shelf in my closet.”
“Why not put that song out as your next single?” the man said.
I was suddenly speechless. Why had I never thought of this? It was such a simple yet profound idea. I flew back to New Jersey, determined to release not just the one song, but an entire album dedicated to my father.
The guys picked me up in the Oldsmobile at Newark Airport the next day. We were standing around the luggage carousel waiting for my bags when I told them I was going to record a solo record, a tribute to my father, whom they all loved and respected.
My bandmates understood this was something I needed to do. They also knew it wasn’t just talk. A solo album, whatever the reasons behind it, also signaled the possibility that the era of the band might well be coming to an end. Nevertheless, they played their hearts out on the record and, by doing so, tacitly gave me their blessings and their assurances that whatever happened with it would be for the best.
The recording featured the song I’d written for my dad, and it eventually became my debut album, This Father’s Day, for Island Records.
Its release also became a powerful catalyst for me personally. It took me from where I had been, locked up in pain and confusion, to some other, more hopeful place. Even before my meeting with the psychics, I thought I’d gotten beyond most of the hurt, that it was simply time to grit my teeth and persevere. It had been two years, after all. But I was mistaken. The process of mending broken hearts is never as pat as that. As much as I needed to forget, to emerge clear-eyed from the jumble and rawness of my father’s death, I knew I’d have to face my worst fears again and again. But I felt ready. I also knew, in a way I hadn’t before, that I really didn’t want to die.
While my father was suffering in the last five years of his life, I found myself in a different state of mind from that of my friends and bandmates, who were, for the most part, blithely moving through their young lives. I’m not saying pain made me wise; it’s just that it can, for those willing to accept its hard lessons, provide a bit of perspective, shine some light on what’s sacred and what’s less so.
During those years I was working very hard to become famous, whatever that might have meant. I felt that I needed to reach some level of achievement before my dad died. I suppose I was conducting a search for miracles. It’s no wonder. For my family and for me at least, miracles seemed to have been in very short supply back then.
It’s miracles, after all, that compel us forward, that encourage us to move with some degree of willingness into the next day. But, despite what we might believe, it’s hardly ever the big ones that truly move us. The sea can split, we can win the lottery, we can even become rock stars, and still, those phenomenal circumstances are never what matter most. In the end, the only miracle worth wishing for is the ability to be made aware of the smallest splendors, the most inconsequential truths, and the overlooked rhythms that connect us to the people and things we love.
I felt a kind of heat rising up around me in those days, a sense that what had long been static was now stuttering back into motion. There was a pleasant strangeness to the feeling, but like many things that at first strike us as unusual, it wasn’t wholly unfamiliar, either. I’d felt that same unnamable sensation, lying awake in my bed in the dark as a young child, focusing on individual moonlit snowflakes as they fell outside my window. I felt it again in Jerusalem, at nine years old, when I first touched the sunbaked stones of the Western Wall. I felt it the first time I’d snorkeled in the Red Sea and became drunk from sheer beauty. I felt it the frigid November morning we buried my father. I felt it on the evening I finally met my wife, and again, the moment when each of my children was born.
The circumstances were wildly varying, but in each instance there was a sense of being taken from one place to another, of inertia finally giving way to movement. It was as if my mundane life had cracked open and I saw, arrayed in front of me, some image of the unseen hand that forms and directs the universe.
My first experiences in Crown Heights, Brooklyn, at age 27 were catalytic. A rabbi named Simon Jacobson had posed a single question and it, too, set me into motion: “Why is walking on the surface of the Earth any less miraculous than flying above it?” he’d asked.
The idea that the world is a wondrous, mysterious place—even as we are destined to walk on the mundane surface of it, even if we cannot truly fly—is both a liberating and comforting notion. Being attuned to wonder is my preferred condition. Perhaps it’s natural for each of us. But why, then, are so many moments not imbued with this sense of the miraculous? Why is there such a divide between barely sensing and deeply feeling?
What I did know in the autumn of 1987, with a certainty I hadn’t known before—perhaps couldn’t have known—was that I needed to get married. I had awakened to the idea that there was nothing I was doing with my life, not my music, not my friendships, not my finally getting that almighty record deal, more important than finding the right woman with whom to create a family and live out my days. I also knew that to do this, I would need to create a powerful forcing frame for myself, not one that would constrict or limit me, but one that would allow me to channel my outsized ego and my creative proclivities toward more productive ends than I’d ever dreamed possible.
Eventually, I made a sort of pact with myself, a silent, personal agreement. It came down to this simple declaration: The next time I sleep with a woman, it will be with my wife. This meant that I had to extricate myself from my longtime girlfriend. Though I was, and still am, extremely fond of her, I could never envision her as a lifetime partner or the mother of my children. In addition, our arrangement was somewhat nebulous, and so this new, self-imposed structure also meant that I’d have to cut off any contact with the other women with whom I was having casual sex. I had to make a fundamental cultural and emotional shift. I would need to wean myself away from years of assumptions about the very nature of what a modern relationship meant. I would have to forge a new way of looking at women, at my role as a man, and at the world at large.
It became clear to me that the freedom I had always longed for could be obtained only through the somewhat paradoxical means of setting limits, delaying gratification, and cutting away many experiences that an all-pervasive consumerist culture had been (and continues to be) hell-bent on selling. If you’ll allow me, I’ll explain this further by way of metaphor.
Music is among the most transcendent of all art forms, for both the performer and the listener. Since it has no form or substance, it can easily serve as a model for the boundlessness of spirituality. But as anyone who has mastered a musical instrument knows, musical ideas are expressed almost exclusively by means of structure and restriction, words very few of us would correlate with freedom.
At first glance, this seems like a paradox. How could something as liberating and intangible as music be based on restriction? Not only is music based on restriction, I’d go so far as to say that, aside from the existence of raw sound—elemental white noise, if you will—the only other thing that allows music to take place, the only thing that differentiates it from this pure noise, is what sounds the musician chooses to leave behind. In this sense, music comes about not by choosing notes but by the elimination of notes. Take a look at the idea in this somewhat inverse manner: Only by rejecting all other sonic choices are we left with the ones we truly desire. To make music, we don’t add, we subtract.
Something as commonplace as the key signature of a particular piece of music also reflects this idea. Unless you were trying to achieve a harsh atonal musical effect, you wouldn’t want to be playing in the key of B-flat minor while your key signature called for you to be playing in A major. The ensuing “music” would sound like a chaotic racket to most people. The time signatures of compositions, along with their tempos, which require that a particular note last only so long and that it be played at a particular speed, also operate on this same principle—creation by negation. Ignoring the time signature, or playing at any speed without regard for the overall tempo, is another good way to produce only noise.
It is only through adherence to the limiting factors of time and tempo that music can take shape. In that same sense, if it weren’t for the constraint of playing only certain keys on a piano, and thereby negating all other choices, you would hear only noise. Anyone who has heard his or her toddler pounding away on a piano knows exactly what this sounds like.
Most, if not all, musical instruments also work on this principle of restriction. The trumpet, for example, is based upon compression and restriction. If the air a player blows into the trumpet’s mouthpiece weren’t compressed and regulated by the embouchure, the only sound you’d be able to hear would be a soft wind-like noise passing through the horn.
As I became more and more immersed in the wisdom of Jewish thought and practice, the idea of freedom-in-structure became clearer and ever more personally relevant. If it was true for music, I wondered, how much more true must it be for all of life itself? And given that human sexuality (whether or not the participants engaged in a sexual act are conscious of it) concerns the creation of life, it occurred to me that causing dissonance in that most meaningful—dare I say mystical—arena of life was something I definitely needed to avoid.
I knew I had to place a set of restrictions on myself in order to make music out of my life, as opposed to just raw sound. Although this conception of the universe felt new to me, new in the sense that it was radically different from the one I’d been acting on for so many years, it wasn’t unfamiliar. Without my knowing it, I had undergone an awakening. I became alert to a perspective I recalled vaguely, even from my earliest childhood. It was as if I could see something important forming (though what it was, was still unclear) out of a barely examined and often fleeting sliver of thought. All at once, the world around me seemed to feel very much as it did when I was a child. I could remember clearly, lying feverish in bed, waiting for sleep, with every last thing in the world unknown and unexplained.
It was frightening as an adult to feel these thoughts growing stronger and more pervasive, but it also felt safe in ways—as though there’d been a kind of revelation, one that seemed to say: “Peter, son of David, there is a purpose to everything you’ve experienced in the recent past and everything you see before you now. From this moment on, there are things you must do and ways you must act.”
The mantra to live without restrictions, which had guided me for most of my life, seemed at that point to be leading me only to chaos. I believed I could, and must, do better for myself. My most fervent wish was no longer to become a rock star; it was to create my own family, one that could become a replacement for the one I’d been missing, the one that had changed so drastically when my father died.
So, in a tour bus rolling across the American continent, I did the three most practical things I could think of: I stuck to my private pact, I dreamed, and I prayed several times a day to an unseen Deity for strength and for love.
This part of the story really begins a few months after my dad’s funeral, when I found myself in a cramped apartment in South Minneapolis auditioning some songs I’d written for a local performer named Doug Maynard. I sang him a few things and he nodded quietly. Doug wasn’t a big talker. Finally he chose one. “Man, I think I could do this justice,” he said. It was called “My First Mistake.”
You taste like pepper frosting on a granite cake.
Baby fallin’ in love with you was my first mistake…
Less than a year later, Doug was found dead in his living room, stone-drunk and drowned on his own vomit at the age of forty. Before this happened, however, he had introduced me to his manager, who had introduced me to a New York City music lawyer, who had introduced me to a record producer named Kenny Vance.
Kenny had worked with a lot of famous people and he wasn’t particularly shy about mentioning just whom. “I used to date Diane Keaton,” he told me. “I know Woody Allen—been in a couple of his films. I was the music director for Saturday Night Live.” Then he said, “Tonight I’m gonna take you to my main connection, a religious Jew in Brooklyn.”
Before long, Kenny and I were crossing the Brooklyn Bridge. We arrived at an apartment in Crown Heights where Kenny’s friend, Simon Jacobson, greeted us. I liked Simon right off the bat. His eyes reflected some essential paradox, some awareness that being alive is both a source of great humor and great sadness. His wife, Shaindy, introduced herself with a gracious smile and placed glass bowls of almonds and chocolate-covered coffee beans on a yacht-sized table before excusing herself to tend to her young children. The thing I didn’t understand at first was how a big hirsute guy like Simon, in an oversize yarmulke, with a massive beard and in a white polyester button-up, was able to land such a good-looking wife. I soon learned that around these parts, it wasn’t the guy who could throw a football the farthest who got the girl. Simon had another thing going for him.
His job, at the time, was to memorize every word of the Lubavitcher Rebbe’s Shabbos dissertations and record them on Saturday night for publication later in the week. To understand the scope of the job, it’s necessary to know that when the Rebbe spoke, it was often for four or more hours straight, without breaks, without notes, and in a manner of cyclical and increasing complexity. To make things even more challenging, the Rebbe wasn’t freestyling. Everything he taught was derived from a compendium of source materials that ranged into the tens of thousands of books. And the talks could not be recorded as they were delivered, because it was the Sabbath and no electricity could be used.
When I once mentioned to Simon how awed I was at his ability to memorize this much information, he looked at me and said: “The memorization is the least of it. It’s the task of compiling it with the proper source notes that’s the real challenge. Every day I correspond with the Rebbe, and he writes me back with perfect editor’s notes. Once I wrote and said I didn’t understand a particular passage and couldn’t find the source for it. The Rebbe had a sharp sense of humor. He sent me back a markup with a big red circle, not just on the sentence I was having an issue with, but around the whole page, with the words, ‘What do you understand?’”
It was getting late. Kenny had left me there and driven back to the city. As Simon spoke to me, I kept looking up at the oil paintings of shtetl life and the Rebbe hanging on the walls. I was prodded more by fatigue than bravado when I finally asked, “What’s the deal with those pictures of the Rebbe? They seem sort of cultish to me.”
“I like the pictures,” he said. “To me, the Rebbe is like a very inspiring grandfather, and I get a lot out of reflecting on the things he says and the way he lives his life. There are people for whom there is no sense of self, people called Tzadikim, who have no need for personal gain. A Tzadik lives only to serve others, and such people can do anything they wish.”
“Really?” I asked with just a hint of comic disdain. “Can they fly?”
“Understand, I’ve never seen anyone fly,” Simon answered. “But for a Tzadik, the act of flying is no greater miracle than the act of walking.”
This idea stunned me. Not because it was new. The things that move us most never are. They are things we already know, beliefs that are buried away inside us. Of course, when you stop and think about it, there’s absolutely no difference between the weights of the two miracles, walking and flight. It’s just that we non-Tzadikim get so tired of the one that happens all the time.
At that moment, at that table in Brooklyn, I started thinking about the little-known rhythm-and-blues singer Doug Maynard. I was remembering the sound of his voice and simultaneously considering the infinite number, the impossible number, of tiny coincidences—the tendrils, if you will, that in their unfathomable complexity, had guided me to that particular apartment on that particular night. The thought was so vivid, it was as if I could hear Doug singing again. Singing most soulfully, most truthfully about the joy, and the sweat, and the pain of this world. It wasn’t long after that I met the Lubavitcher Rebbe for the first time. He handed me a bottle of vodka and a blessing for success, and I started becoming more Jewishly observant right away: keeping Shabbos in my tiny apartment in Hell’s Kitchen, keeping kosher, and putting on tefillin. I married Maria two years later. We’ve been married for nearly 30 years.
About a year ago my cousin Jeff asked me what it had been like to meet the Rebbe. This is exactly how I answered him.
“You know when you’ve done something you think is horrible (whatever the hell it may be) and you start going down—deeper and deeper into the rabbit hole of regret? When you’re in so deep that you start to feel like the biggest loser ever born, like nothing is possible, that nothing good is ever gonna come your way, and that you can’t even face yourself in the mirror?”
“Sure,” Jeff said. “I’ve been there.”
“Well,” I said, “meeting the Rebbe was the exact opposite of what I just described.”