Hopes for the achievement of worldwide prosperity have dimmed.
For a brief, glorious, and unforgettable moment 20 years ago, it seemed as if a great and terrible question that had been perennially stalking humanity had finally been answered. That profound question was as old as human hope itself: could ordinary men and women, regardless of their location on this earth or their station in this life, hope that deliberate social arrangements could provide them—and their descendants thereafter—with permanent and universal protection against the grinding poverty and material misery that had been the human lot ever since memory began? For those exhilarating few years back in the 1990s, it seemed to many of us that the 20th century had indeed answered this age-old question: decisively, successfully, and conclusively.
Brute facts, after all, had demonstrated beyond controversy that human beings the world over could now indeed create sustained explosions of mass prosperity—rather than temporary and transient windfalls—that would utterly transform the human material condition, relegating the traditional conception of desperate want from a daily personal concern to an almost abstract textbook curiosity.
According to estimates by the late economic historian Angus Maddison, the world’s average per capita output quadrupled between 1900 and 1989/91, with even greater income surges registered in the collectivity of Western societies where the process of modern economic growth had commenced.1 Membership in this “Western” club, though, manifestly did not require European background or heritage, for the Asian nations of Japan, South Korea, and Taiwan had come to embrace political and economic arrangements similar to those pioneered in Western Europe and its overseas offshoots, and had in fact enjoyed some of the century’s fastest rates of long-term income growth.
The formula for generating steady improvements in living standards for a diversity of human populations, in short, had been solidly established. The matter at hand was now to extend that formula to the reaches of the earth where it could not yet be applied—most obviously at that time for political reasons, given the fact that nearly a third of the world’s peoples were still living under Communist regimes in the late 1980s.
By the early 1990s, with the final failure of the Soviet project and the widely heralded idea of the “End of History,” it suddenly seemed as if the liberal political ideals that promoted the spread of the Western growth formula would no longer encounter much organized global resistance. It now seemed only a matter of time until every part of the world could join in a newly possible economic race to the top. Prosperity for all—everywhere—no longer sounded like merely a prayer. Quite the contrary: the end of global poverty was increasingly taken to be something much more like a feasible long-term-action agenda.
Alas, in the years since, new brute facts have asserted themselves, while other awkward facts of somewhat older vintage have reasserted themselves, demanding renewed attention. All too many contemporary locales have managed to “achieve” records of long-term economic failure in our modern era. The plain and unavoidable truth is that countries with hundreds of millions of inhabitants today are not simply falling behind in a global march toward ever-greater prosperity: they are positively heading in the wrong direction, spiraling down on their own distinct, but commonly dismal, paths of severe, prolonged, and tragic retrogression.
Haiti is a particularly awful case in point.
The Case of Haiti
Conditions of life in Haiti, wretched for most Haitians even in the best of times, took a sharp turn for the worse earlier this year, when a magnitude-7.0 earthquake struck not far from the capital of Port-au-Prince. The resultant carnage was heartrending; the chaos, stomach-churning. At this writing, the official estimate of the death toll from the disaster has risen above 200,000—although it is a telling sign of Haiti’s sheer underdevelopment that an exact death count from the earthquake and its aftermath is regarded by foreign relief workers on the scene as an utterly unrealistic proposition.
Yet there was absolutely nothing “natural” about the human cost of this natural disaster. Massive earthquakes do not always unfold as calamities of biblical proportions, even when they are visited on major urban population centers. In October 1989, a massive earthquake suddenly struck the Bay Area of California. In sheer magnitude, that earthquake was almost as violent as Haiti’s (6.9 vs. 7.0); its epicenter was roughly as far from downtown San Jose as Haiti’s was from central Port-au-Prince. The final death toll in the Bay Area tragedy: 63 lives.
At first glance, such wildly disparate death counts in the face of arguably comparable natural calamities may seem to serve as a grim metaphor for the seemingly perennial yawning gap that separates life chances in rich and poor regions today. In reality, however, the backstory is still sadder than these raw numbers might of themselves suggest: for the awful fact of the matter is that the United States and Haiti are societies whose capabilities for meeting human needs (and protecting human beings) have been moving in fundamentally different directions for decades.
A society’s material capabilities for meeting human needs are very broadly indicated by its levels and trends in per capita output (GDP). America is not one of the modern world’s most rapidly growing economies—over the past century, in fact, per capita growth has averaged a little under 2 percent a year—but thanks to the power of compound interest, such a tempo of growth brings dramatic and salutary transformations over time, if it can be sustained. In the roughly six decades between 1950 and 2008, indeed, America’s per capita output more than tripled. But over that same period, by Maddison’s reckoning, per capita output in Haiti actually declined—by more than a third.
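The arithmetic of compounding behind that claim is easy to verify with a back-of-the-envelope calculation. The sketch below is illustrative only: the 1.9 percent rate is an assumed stand-in for “a little under 2 percent,” and the 58-year span corresponds to 1950–2008.

```python
# Back-of-the-envelope check: how much does output per head grow
# when a modest annual rate compounds over decades?

def compound_growth(rate: float, years: int) -> float:
    """Total growth multiple after `years` at a constant annual `rate`."""
    return (1 + rate) ** years

# "A little under 2 percent" sustained over 1950-2008 (58 years):
multiple = compound_growth(0.019, 58)
print(f"Growth multiple over 58 years at 1.9%/yr: {multiple:.2f}x")
```

At a steady 1.9 percent per year, output per head roughly triples over 58 years—which is why a seemingly unimpressive growth rate, if sustained, transforms living standards within two or three generations.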
Thanks to its prolonged economic retrogression, Haiti today is not simply immiserated; it is in fact substantially poorer than it was half a century ago. By the hardly insignificant yardstick of income levels, the country appears to be less developed now than it was two generations before. (Appalling death tolls in the face of earthquakes, tropical storms, and other forces of nature are merely one manifestation of the more general deterioration in material capabilities for meeting human needs that are implied by such trends.)
Haiti, moreover, is only one of many countries in the modern world to have been heading down—not up—in economic terms for decades on end. Summary statistics from the World Bank and the World Trade Organization (WTO) outline the dimensions of this global problem.
By the World Bank’s calculations, nearly two dozen countries suffered negative per capita economic growth over the course of the quarter century from 1980 to 2005. And the World Bank does not even attempt to estimate economic trends for a number of national problem cases—Kim Jong-il’s North Korea and Robert Mugabe’s Zimbabwe among them—where pronounced and prolonged economic decline has almost certainly taken place. When one tallies up the global totals, it would appear that close to half a billion people today live in such countries—societies beset not merely by long-term stagnation but also by a quarter century or more of absolute deterioration in income levels.
At the same time, WTO numbers point to a jarring drop in the long-term export performance of many contemporary societies. Adjusting for inflation, these WTO data suggest that more than 30 countries were actually earning in real terms less from merchandise exports in 2006 than they did in 1980, over a quarter century earlier. The picture is still worse when we take intervening population growth into account. Real per capita export revenue, measured in U.S. dollars, looks to have been lower in more than 50 countries in 2007 (the last year before the current worldwide economic crisis) than in 1980. In all, such places today account for roughly three quarters of a billion of the world’s 6.8 billion current inhabitants—about a ninth of the globe’s total population.
Thus, it is not just that an appreciable swath of humanity today lives in countries that have not yet managed to customize, and apply, the global formula for sustained growth that has been propelling the rest of the world out of poverty and into material security, or even affluence. No—hundreds of millions of people in the modern world live in places where the development process is manifestly stuck in reverse.
For these hapless societies, pronounced and relentless economic failure is not an awful aberration but rather the seemingly “natural” way of things: the only way things have ever been in living memory for most locals, and most international observers. After all, the median age of the world’s present population is less than 30 years; this means that most people today can recall only long-term economic failure for these dozens of countries.
National examples of prolonged economic failure dot the modern global map: in the Caribbean (Cuba, Haiti); in Latin America (Paraguay, Venezuela); even in dynamic East Asia (North Korea). But the epicenter of prolonged economic failure is sub-Saharan Africa.
The Case of Sub-Saharan Africa
Sub-Saharan Africa comprises an extraordinary diversity of peoples, and the economic record of each of the region’s 50-plus countries is separate and distinct. Yet taken together, their overall development record in the post-colonial period has been utterly dismal.
Some improvement in the region’s economic performance has been registered since the mid-1990s. Even so, according to estimates by both Angus Maddison and the World Bank, per capita income for the region as a whole was slightly lower in 2006 than it was in 1974. Much the same holds true for real per capita export earnings. According to the WTO’s numbers, Africa’s overall per capita merchandise export revenues, adjusted for inflation, showed absolutely no improvement between 1974 and 2006—and after the global economic crisis, they appear to have been around 10 percent lower in 2009 than they were in 1974.
This is very bad news for a very large number of people: as of last year, according to U.S. Census Bureau projections, sub-Saharan Africa’s population was well over 800 million people, roughly one-eighth of all human beings on earth today.
The sub-Sahara is not simply an epicenter of economic failure; it is also the epicenter of a pervasive failure in what might be called human development. Poorer countries, of course, tend to suffer from poor health and education as well, and sub-Saharan Africa is by far the poorest region of the planet today. But it is not just that Africa’s health and educational profiles are much worse than for any other major region of the world; they are also markedly worse than would be predicted on the basis of the region’s woeful economic performance alone.
Consider life expectancy at birth—the single best summary measure of a population’s overall health conditions. Sub-Saharan life spans today are on average roughly 10 years lower than in other countries with comparable starting points in health four decades ago and comparable income levels today. This awful result has much to do with the HIV/AIDS tragedy, which to date has been concentrated in Africa and has sent life expectancy in some sub-Saharan societies plummeting.
But analogous patterns are evident for educational attainment in the sub-Sahara—trends that cannot be traced so easily to the unpredictable outbreak of communicable pandemics. Through painstaking effort, Robert J. Barro of Harvard and Jong-Wha Lee of Korea University have compiled a database detailing changes in adult educational levels in more than 100 countries for the years 1960 to 2000.2 For the world as a whole, average years of schooling for a country’s adult population as of the year 2000 can be pretty accurately predicted by the country’s level of adult education 40 years earlier and its income level at the end of the intervening period.
Here again, sub-Saharan educational profiles in 2000 were even more modest than the region’s very low income levels would have of themselves predicted: to go by World Bank data, this “sub-Sahara effect” amounted to an average of 1.2 years of schooling forgone for each and every person 15 years of age and older. In a region where adult men and women had an average of just 3.5 years of schooling as of 2000, this would have been a far from trivial loss; on the contrary, it suggests that sub-Saharan Africans would have enjoyed fully one third more years of adult education, the region’s low income levels notwithstanding, if only they had been living in a place more like other regions of the Third World.
A Poor-Friendly Era
The problem of sustained socioeconomic retrogression is all the more dismaying, and puzzling, when one bears in mind the phenomenal explosion of prosperity that has transformed the world as a whole in the modern era—and the potentialities for material advance that are afforded even the poorest societies.
In the half century between 1955 and 2005, by Maddison’s reckoning, the planet’s per capita income levels nearly tripled, growing at an average tempo of more than 2 percent per year, despite the unprecedented pace of population increase in the Third World over those same years. The expansion of international trade—and thus by definition, of markets for export produce—was even more dramatic: on a worldwide basis, real per capita demand for international merchandise and commodities jumped almost tenfold during those same years.
Scientific and technical advances have immensely improved life prospects in the planet’s poorest and least scientifically proficient reaches. Thanks largely to progress in life sciences and public-health know-how and the concomitant spread of basic education, longer lives are now possible worldwide at ever lower national-income levels. No country on earth registered a female life expectancy at birth of 65 years before the end of World War I; the first society to breach that threshold was apparently New Zealand, somewhere around 1920. Today average female life expectancy at birth for poor countries as a whole is well above 65 years. Even places like Nepal are thought to have reached this once-impossible level of life expectancy—and Nepal does this today on less than a fifth of New Zealand’s income level circa 1920.
There should be no doubt whatsoever that the health revolution facilitated by the postwar era’s knowledge explosion, and all that has accompanied it, has been fundamentally “poor-friendly.” In the early 1950s, by the estimates of the UN Population Division, life expectancy at birth was 25 years higher in the more developed regions than in the less developed regions; 50 years later—despite the AIDS catastrophe—that differential had been cut in half. By this most basic measure of all, inequality between rich and poor has by no means increased; rather, during our era of modern global economic development, it has been shrinking, progressively and dramatically.
The worldwide surge in prosperity over the past two generations has been nothing like the winner-take-all race that some insinuate it to be. The plain fact is that countries at every income level have benefited tremendously from the global economic updrafts of our modern age. World Bank estimates underscore this point. If we take high-income economies completely out of the picture, average real per capita output for the rest of the world more than tripled between 1960 and 2006. (By Maddison’s calculations, incidentally, per capita incomes in Brazil, Mexico, and Turkey are higher than they were in Scandinavia and the Netherlands in the early 1950s.)
For the “low middle income economies” (countries including China, Egypt, India, and the Philippines), estimated per capita incomes rose more than fivefold. And even for the “low income economies” as a whole—the 1.3 billion people in the world’s poorest contemporary societies—per capita output is thought to have risen by almost 150 percent over those same years.
Salutary political changes—including what the late Samuel Huntington termed “waves of democratization”—have swept through the less developed regions over the past two generations. But First World levels of institutional and administrative acumen are by no means necessary for sustained economic growth in poor countries today. In fact, the political and policy prerequisites for eliciting enormous improvements in local incomes may be less exacting in our modern era than ever before.
A few examples will suffice to make this point. Take Bangladesh, a country widely written off as a hopeless basket case at its independence in 1971. Political stability has not exactly been Bangladesh’s métier: over the past four decades, the country has experienced dozens of attempted political coups, three of which overturned the seated government. Bangladesh still does not qualify as an open society or a full-fledged democracy; Freedom House, for example, rated the country as only “partly free” earlier this year. Yet despite all this, per capita output in Bangladesh has roughly doubled since the early 1970s, according to both Maddison and the World Bank’s World Development Indicators (WDI).
The case of the Dominican Republic may be even more instructive. In 1961, the country’s longtime dictator, Rafael Trujillo, was assassinated. A period of political instability ensued; in 1965, U.S. troops had to occupy that country for a year to restore order. In the decades that followed, the country’s “economic climate” might at best be described as mediocre: the country ranked 99th on the 2009 Corruption Perceptions Index, and 100th on the Fraser Institute’s 2009 Index of Economic Freedom. Yet over the four decades between 1965 and 2005, per capita income in the Dominican Republic more than tripled, increasing over these years at an average pace of almost 3 percent per annum. Between the early 1960s and the early 2000s, moreover, overall life expectancy in the Dominican Republic jumped by nearly two decades: today, according to the U.S. Census Bureau, it stands at 74 years—just four years behind that found in the United States.
The Dominican Republic’s progress in economic development is noteworthy in its own right—but it is all the more striking when juxtaposed against the gruesome and prolonged developmental failure still underway in Haiti. The two countries, of course, share the Caribbean island of Hispaniola.
So, given the pervasive scope and scale of worldwide economic advance in our age—and the apparently increasing ease of achieving sustained economic progress, even for populations at the lowest levels of material attainment—how are we to explain, and deal with, the phenomenon of persisting socioeconomic failure in Haiti and dozens of other contemporary societies? How have these places managed to avoid self-enrichment, given the apparently increasing worldwide odds against such an outcome? And what can be done to end the syndrome of developmental decline on the lands that have been subject to it?
One diagnosis, insistently tendered in some parts of the academy and the international community, pegs the problem as a sheer insufficiency of foreign aid; the correlative prescription from these quarters—a lot more of it. Currently, the most vocal and articulate advocates of this point of view are Jeffrey Sachs of Columbia University and the United Nations’ Millennium Development Goals (MDG) project. The MDG project avers that the primary impediment to more rapid progress against poverty in low-income countries nowadays is the lack of funding for practical, tested programs and policy measures that would reliably and predictably raise living standards in the world where they are lowest today. Sachs and the UN’s MDG apparatus consequently urge an immediate doubling of official Western-aid transfers to low-income areas and offer a detailed array of plans for absorbing these proposed additional flows (which are envisioned at almost $190 billion a year above “baseline” levels by the year 2015).3
The trouble with this narrative is that foreign aid is not exactly an untested remedy for global poverty in our day and age. To go by figures from the Organisation for Economic Cooperation and Development, total flows of development assistance to recipient countries since 1960, after adjusting for inflation, by now add up to something like $3 trillion.
Now, in some places and times, international aid appears not only to have enhanced material advance but also to have promoted the transition to self-sustaining growth (i.e., growth without aid). Aid transfers seem to have been most productive in the hands of governments that supported economically productive policies and practices. But foreign aid quite clearly is neither necessary nor sufficient to elicit growth and development in our modern era—nor is it even capable of preventing long-term economic retrogression in recipient states. In today’s dollars, Haiti has received more than $10 billion since 1960 in official development assistance alone (and vastly more if private aid, humanitarian assistance, and security assistance are taken into account). On a per capita basis, this works out to more than four times as much assistance per capita as Western European populations received during the Marshall Plan era. Yet Haiti’s per capita income, according to Maddison, was less than two-thirds as high in 2008 as it had been in 1960.
Similarly, since 1970, sub-Saharan African states have taken in the current equivalent of more than $600 billion of official development assistance—over three times as much aid on a per capita basis as Marshall Plan states received. As we know all too well, these subventions neither forestalled long-term economic decline for the region as a whole nor prevented the rise of poverty in many “beneficiary” states in the sub-Sahara.
How does one account for these inconvenient facts? Evidently, by ignoring them. To make their case for aid as the necessary remedy for contemporary global poverty, proponents of the Sachs-MDG plan are willing to undertake breathtaking, even patently absurd, intellectual contortions. Thus the plan’s overview document asserts, without any hint of irony, that “many well-governed countries [today] are too poor to help themselves.”4 Social-science and policy-research literature, to be sure, has committed a fair share of howlers during the past century, but this may be the single most empirically challenged sentence of the new millennium.
The too-little-aid theory in essence attempts to explain—or blame—the prolonged economic failure of large portions of the modern world on external factors (in this case, the stinginess of affluent Western populations). A much more plausible explanation, however, relates to domestic factors within the countries and societies in question. Perhaps most important, these concern the deep, complex, historically rooted, and interconnected issues of “culture” on the one hand and what is now called “governance” on the other.
Culture and Governance
The proposition that a local population’s viewpoints, values, and dispositions might have some bearing on local economic performance would hardly seem to be controversial. Decades ago, the great development economist Peter Bauer wrote that “economic achievement depends upon a people’s attributes, attitudes, mores and political arrangements.” The observation was offered as a simple and irrefutable statement of fact, and it would still be unobjectionable today to most readers who have not been tutored in contemporary “development theory.” But for development specialists, discussion of “culture”—much less its relationship to such things as work, thrift, savings, entrepreneurship, innovation, educational attainment, and other qualities that influence prospects for material advance—is increasingly off-limits.
In the erudite reaches of development policy, indeed, discussion of such matters at all is often regarded as poor form at best—and at worst is taken to smack of condescension, paternalism, or even latent prejudice. Paul Collier’s bestselling 2007 exposition, The Bottom Billion, is a case in point.5 Remarkably, Collier manages to complete his opus without ever referring to cultural impediments to economic progress in the world’s poorest and most economically stagnant societies. In fact, he utters the word “culture” only once—and that once as a reference to the contending worldviews and approaches of various parties involved in international-aid negotiations.
To be sure, the record of historical efforts to predict and explain economic performance on the basis of cultural attributes is, let us say, checkered. Up through the 1950s and even into the early 1960s, for example, researchers and self-styled experts were offering confident and detailed explanations of why “Confucian values” constituted a serious obstacle to economic development in East Asia. A decade or so later—after the huge boom all around the East Asian rim was well underway—the profession was still united in the consensus that the Confucian ethos mattered greatly in economic performance, but they had quietly shifted their estimate of that impact from negative to positive.
This gets us to the crucial issue of governance—which is shaped by, and in turn independently shapes, local attitudes, expectations, and motivations. Throughout the reaches of the world characterized by long-term economic failure, governance has generally been abysmal. Violent political instability and predatory, arbitrary, or plainly destructive state practices have shaken, or sometimes altogether destroyed, the institutions and legal rules upon which purposeful individual and collective efforts for economic betterment depend. In a few spots on the map—such as North Korea—pronounced economic failure is due to “strong states”: monster regimes that starve their subjects as a matter of principle or ideology, given their own twisted official logic. For many more of today’s failed economies, the trouble instead is that governance has been the charge of “weak” states or even “failed states”: polities with extremely fragile capabilities, sometimes lacking the ability to maintain order or guarantee their subjects’ physical security at all (think Liberia, Sierra Leone, Somalia).
In these wretched locales, economic failure and continuing developmental decline are unlikely to be arrested absent some serious successes in state-building. But how, exactly, does one proceed with that task? As Francis Fukuyama, who has studied the history of state-building, has cautioned, even under the best of circumstances, the quest to forge sturdy, competent, and trusted state apparatuses in these places promises to be a difficult, risky, and time-consuming venture—and an expensive one to boot.6 Yet state-building is still hardly even on the agenda of the international-aid community, where moving Western “development” money to stricken regions assumes a much higher administrative priority. Scarcely less important, the challenges of state-building today are compounded by the burdens of history. In South Korea, state-building, from today’s perspective, looks to have been a relatively undemanding mission, difficult as it was: Korea was a nation with a tradition of self-rule under a fairly sophisticated indigenous administrative system for a people with a long civilization and their own written language. In sub-Saharan Africa today, apart from South Africa, the only country that can be similarly characterized is perhaps Ethiopia. But even self-rule is no guarantee that state-building will be easy. Haiti, for example, has enjoyed more than two centuries of formal political independence.
If state-building is the precondition for any real hope of ending the prolonged economic failure and enduring poverty of the hundreds of millions of people currently condemned to this fate in the modern world, the precondition to state-building looks, quite unavoidably, to be foreign intervention—and quite possibly, sustained foreign intervention.
Unfortunately, in the wake of America’s unpopular and in many ways bungled intervention in Iraq, such a prospect is if anything even less palatable for the Western governments that might undertake it than it would have been before the Iraq war. Given sensitivities about their own past colonial activities, postwar voters in Japan and most of Europe have always been reluctant to send troops abroad on indefinite latter-day “civilizing missions.” For example, public support in those countries for the existing, arguably modest, state-building mission currently underway in Afghanistan is tenuous, and any broader commitment to such an international objective simply is not in the cards, now or in the foreseeable future. A number of development economists who recognize the imperative of state-building (not that they would call it by that name) have proposed intriguing schemes for promoting security in poor regions through outside interventions. Collier, much to his credit, flatly states that “external military intervention has an important place in helping the societies of the bottom billion” and argues that “these countries’ military forces are more often part of the problem than a substitute for external forces.” In fact, he devotes the better part of a chapter of his tome to hypothesizing just how the European Union could be encouraged to provide “credible guarantees of external military intervention” to prevent coups against democratically elected Third World governments. Paul Romer, the father of modern economics’ “new growth theory,” floats the idea of “charter cities” protected by international security arrangements to which impoverished inhabitants in violent and lawless environments could migrate to enjoy the protections of person, property, and pragmatic rule.7
Such ideas, unfortunately, are only thought experiments—with little chance of moving off the shelf of theory and into practice, barring a tremendous change in the norms by which international relations are today conducted.
So where does this leave us?
On the one hand, the formula for achieving sustained long-term economic growth on a national basis has pretty clearly been developed, if not perfected—and applying this formula looks to be easier than ever before in human history. Most people, moreover, live in countries that have accepted the arrangements that undergird this growth formula—some by deliberately and enthusiastically embracing them, others by more inadvertently stumbling upon them. Barring global catastrophe—some unforeseen worldwide conflagration or environmental debacle—these populations in general can expect their descendants to enjoy higher incomes and greater affluence than they themselves have ever known. Moreover, thanks to what the economic historian Alexander Gerschenkron described as “the advantages of backwardness,” untapped technological and economic potentialities provide the poorer populations in this group with the possibilities of even more rapid growth than those facing the richer world.
On the other hand, many hundreds of millions of people—a fraction of humanity that may rise, not fall, in the years immediately ahead—cannot avail themselves of the basic political arrangements that set the global growth formula into action. For now, and for the foreseeable future, these miserables can look forward only to relative economic decline—or even further absolute decline, difficult as that may be to imagine.
Nearly half a century ago, Peter Bauer warned presciently that “if attitudes, mores and institutions uncongenial to material progress have prevailed for long historical periods, with corresponding effects on material advance, it may be difficult to reverse their effects except after long periods.” We are living in the world Bauer prophesied. Global prosperity for all is not yet at hand—and, painful and indeed shocking as this may be to recognize, the day in which all humanity can expect to be included in the march toward ever greater affluence cannot be foreseen with any confidence.
1 Angus Maddison, “Statistics on World Population, GDP and per Capita GDP, 1-2008 AD” (March 2010), available electronically at http://www.ggdc.net/maddison.
2 Robert J. Barro and Jong-Wha Lee, “International Data on Educational Attainment: Updates and Implications” (Harvard Center for International Development Working Paper No. 42, April 2000)—Appendix Data Tables, available electronically at http://www.cid.harvard.edu/ciddata/ciddata.html.
3 UN Millennium Project, Investing in Development: A Practical Plan to Achieve the Millennium Development Goals, Jeffrey D. Sachs, director (New York: United Nations Development Programme, 2005).
4 Sachs, p. 17.
5 Paul Collier, The Bottom Billion: Why the Poorest Countries Are Failing and What Can Be Done About It (New York: Oxford, 2007), p. 124.
6 Francis Fukuyama, State Building: Governance and World Order in the 21st Century (Ithaca: Cornell, 2004).
7 See, for example, Paul Romer, “For Richer, for Poorer,” Prospect (London), no. 167, February 2010, available electronically at http://www.prospectmagazine.co.uk/2010/01/for-richer-for-poorer/.
The Global Poverty Paradox
Preening doesn't work.
Donald Trump’s demagogic rhetoric on the media is dangerous and un-American. When he describes reporters and editors as “enemies of the people,” or when he chuckles at Rodrigo Duterte’s remark that the media are “spies,” the president wounds the dignity of his office and America’s already-infirm civic health. The question is what the media should do to check the president’s rhetorical excesses.
One answer is for America’s mainstream newsrooms to tell the American people that reporters are not, in fact, the enemy and the president should cut it out with his anti-media crusade. For all the commercial pressures they face today, American journalists are still perched by prominent windows overlooking the national public square, which means they still get a hearing from the people down below when they wish.
I suppose that was the bright idea behind Thursday’s simultaneous publication of pro-media editorials in the editorial pages of 350 newspapers across the country. Participating outlets included national papers like the New York Times (naturally) as well as scores of regional ones, plus a few magazines and professional societies. If you can bring yourself to wade through one dull editorial after another, by all means: CNN has links to all 350.
But our mostly liberal colleagues in the edit-page business are fooling themselves if they imagine that this latest national teach-in will foster greater trust in the media, especially among the millions of Americans who in 2016 registered their discontent with the country’s establishment in toto by sending a vulgarian from Queens to the Oval Office. Those Americans—and not all of them were die-hard Trumpians—had had it with a prestige press that too often saw itself as an adjunct to the liberal cause rather than the cause of truth. They had had it with the subtle and not-so-subtle biases, the servility to liberal politicians, the contempt for their cherished beliefs and condescension for their ways of life.
To regain that trust, it will not do to condemn and condescend some more.
Jesse Brenneman, a New York City radio producer, understands this perfectly. In an ongoing satirical video series posted to Twitter, Brenneman documents his mockumentary-style road trip in “Trump country.” The aim is supposedly to understand the frustrations that led to Trump. But Brenneman mostly ends up yelling at the Trumpians from his driver’s seat, as his car zooms through regions like Central Pennsylvania: “WHY DID YOU DO IT? WHY DID YOU DO IT? WHY DID YOU DO IT? WHY DID YOU VOTE FOR HIM? WHY? WHY? WHY? IS IT ABORTION? DO WE NEED TO HAVE FEWER ABORTIONS? WHY DO YOU WATCH FOX NEWS? IT’S NOT TRUE, MUCH OF IT!”
Brenneman is gently chiding his own media comrades. But the 350 editorials amount to a self-serious version of the same thing: Trust us. We’re the media. What’s wrong with you!
Regaining trust requires something else. It requires factually sound reporting and the pursuit of the truth wherever it leads. It calls for reporters who conduct themselves professionally on social media and who don’t give vent to their anti-conservative animosities at every turn, and for pundits who don’t change their minds about an issue merely because they find themselves on the same side as Trump. In short, it requires a return to journalism basics.
That’s hard work. Hectoring editorials are easier.
Yesterday, today, and tomorrow.
Andrew Cuomo, the governor of New York, told a stunned crowd on Wednesday that the United States of America “was never that great.” He followed that flat-footed line with a series of bromides about how America will “reach greatness” when mankind ceases to stereotype, discriminate, and degrade one another, but the damage was done. Cuomo’s primary opponent, the progressive insurgent and former actress Cynthia Nixon, mocked the governor for failing in the attempt to mimic “what a progressive sounds like.” That is a telling admission. Presumably, Nixon’s idealized “progressive” would more adroitly explain why American greatness is overstated.
You might think that President Donald Trump would take the opportunity presented by Cuomo’s faceplant to wrap himself in the flag, but he opted only to mock the Empire State’s executive for “having a total meltdown.” The president’s instincts are equally revealing. After all, the phrase “Make America Great Again” concedes that America is, at present, not all that great. This is an earnest conviction on Trump’s part.
In accepting the GOP presidential nomination, Trump painted a portrait of a country that was weak and failing—shackled by political correctness, riddled with violent crime, beset by dangerous migrants and violent refugees, subverted by craven politicians, and plagued by a crisis of confidence in its mission. Trump’s vision of the country was best summed up in the most memorable line from his inaugural address: “American carnage.” Just 19 months later, the president insists that the nation has been made whole again, which is more a function of his competence than of the national character.
These two provisory expressions of patriotism share more commonalities than distinctions. They are qualified, transactional, and contingent upon subjective assessments of shifting circumstances. Everyone has their own definition of patriotism, and love of country should not be blind. Unwavering reverence is an expression of faith, not gratitude. Patriotism must know prudent limits, or it may come to justify venality and violence. But patriotism is distinct from an understanding of what makes the United States a great and exceptional nation.
American greatness is established in its Constitution. The nation’s founding charter endures because of two conditions that prevailed at the close of the 18th century. First, the collection of sovereign states that hammered out a national government was careful to premise a prospective Union on decentralization and federalism. That diffusion preserves local social and legal customs and, thus, domestic harmony. Second, the Constitution’s framers operated on the assumptions espoused by the Enlightenment’s leading luminaries, among them Lockean notions of legitimacy derived from the consent of the governed. These two assumptions led James Madison to conclude in Federalist 51 that “the rights of individuals, or of the minority, will be in little danger from interested combinations of the majority” even while “all authority in it will be derived from and dependent on the society.”
It was also in Federalist 51 in which Madison articulated a truth about human nature that has vexed prideful technocrats since the dawn of time: Mankind is flawed. The species cannot be perfected. Thus, “Ambition must be made to counteract ambition.” The revolutionary movements that followed America’s founding held this capitulatory revelation in low esteem. They sought to create “ideal” societies in which mankind’s contradictions and baser impulses would dissolve into a new social consciousness. It is no coincidence that those “ideal” revolutionary societies eventually descended into bloodshed, oppression, and disunion while America endured.
The Constitution’s amendments are equally exceptional. With a few lamentable deviations, the amendments are a set of negative rights that proscribe governmental action rather than establish what the government can do. That is a paradigmatic triumph; it established as America’s baseline ethos the idea that human freedoms not expressly enumerated in the Constitution are implied. They do not flow from the beneficence of some far-off potentate. They are God-granted. The concept of unenumerated rights is as revolutionary today as it was in the 18th century, and it remains an alien notion outside the Anglophone world.
America is capable of astonishing violence and repression, but it is equally adept at reconciliation and renewal. That capacity is rooted in Americans’ remarkable facility for compromise. The story of the United States is, in many ways, a story of compromise, and not all of those compromises are worthy of celebration. The facility Americans have for negotiation and concession has, however, forged a government and kept it. It is what has made the United States the most successful experiment in cultural intermixing in human history. It is what fortifies its incredible capitalist dynamism. And its commerce remains the greatest vehicle for achieving equality, meritocracy, and human flourishing ever devised.
So much of what America’s critics lament about the country’s inherent flaws—its hostility toward collectivism, the ruthlessness of its entrepreneurial spirit, its manic bouts of isolationism and extroversion on the world stage, and the tensions between old and new immigrants—are outgrowths of the traits that make it extraordinary. The nation’s commitment to pluralism, egalitarianism, and unity around shared principles rather than cultural, tribal, or subnational bonds is what makes America unique among nations. It will never stop striving to achieve the ideals of its founding; ideals are, after all, often unattainable. But its shared creed is the North Star toward which the United States has looked for a quarter millennium.
All these things that make America great are hardly immutable traits, and some careless future generation may one day abandon them. But despite America’s weakness for fad and experimentation, those fundamental tenets have proven resistant to change. As Jonah Goldberg observed in Suicide of the West, Thomas Jefferson’s assertion that “all men are created equal” cannot be improved upon. Any effort to amend that claim would be a regression to a more primitive state. That and the many other gifts that the founding generation left behind ensured that the United States was a uniquely magnificent nation on day one. Don’t let any politician tell you otherwise.
The limits of religious liberty.
Jack Phillips once more finds himself on the sharp end of liberal “tolerance.” He was the Colorado baker at the center of the Masterpiece Cakeshop case, the one who in 2012 refused to bake a cake for a same-sex wedding. A state civil-rights commission censured Phillips and ordered him to undergo ideological retraining. But a 7-2 majority of the U.S. Supreme Court found that the commission had exhibited such overt hostility to Phillips’s religious views as to have violated the state’s “obligation of religious neutrality” under the First Amendment.
But it appears the commission didn’t get the message. The Alliance Defending Freedom, which represented Phillips in the original case, reports:
On June 26, 2017, the same day that the Supreme Court agreed to take up Masterpiece Cakeshop v. Colorado Civil Rights Commission, an attorney asked Phillips to create a cake designed pink on the inside and blue on the outside, which the attorney said was to celebrate a gender transition from male to female. Phillips declined the request because the custom cake would have expressed messages about sex and gender identity that conflict with his religious beliefs. Less than a month after the Supreme Court ruled for Phillips in his first case, the state surprised him by finding probable cause to believe that Colorado law requires him to create the requested gender-transition cake.
This time, however, Phillips and the ADF are taking the fight to the state. On Tuesday, the ADF filed a lawsuit against Colorado Governor John Hickenlooper and the members of the commission, alleging anti-religious bullying and harassment of Phillips aimed at ruining his business and livelihood.
Many religious conservatives see this new case as an opportunity to “firm up” the Court’s Masterpiece holding. If it makes it to the Supreme Court, especially one with a Justice Kavanaugh, there is a good chance that Americans will end up with sturdier protections against illiberal liberalism than former Justice Anthony Kennedy’s whimsical jurisprudence permitted.
But by my lights, the renewed persecution of Phillips also reveals the limits of “religious liberty” as a sword and organizing principle for the right. As I predicted when the original decision was handed down,
the inner logic of today’s secular progressivism puts the movement continually on the offensive. A philosophy that rejects all traditional barriers to individual autonomy and self-expression won’t rest until all “thou shalts” are defeated, and those who voice them marginalized. For a transgender woman to fully exercise autonomy, for example, the devout Christian, Muslim, or Jew must recognize her as a woman. People of faith and others who cling to traditional views must publicly assent to what they don’t believe.
And here we are. “Religious freedom,” without a substantive politics that offers a vision of the common good, can easily allow liberalism to frame traditional moral precepts as little more than superstitions best relegated to the private sphere of the mind. Under the banner of liberty, religious conservatives might win procedural victories here and there. But they will be cornered in the long-term.
Whatever Donald wants, he's gonna get it.
What do Republicans believe? Whatever Donald Trump tells them they should believe, it seems.
In survey after survey, self-described Republicans—admittedly a severely truncated demographic in the Trump era—are surrendering not just principle but common sense to whatever Trump needs them to say at the moment. The positions Republicans adopt to prop up the president are often so outside the American right’s traditional credo that it’s hard to believe they’re being honest.
According to a June Axios-sponsored SurveyMonkey poll, a whopping 92 percent of Republicans believe the conventional press deliberately runs with false or misleading stories. That’s not especially surprising. Republicans have a long-standing grievance with the mainstream media, and nearly three-quarters of all respondents in this survey agree with them. What is unique and, frankly, disturbing is the apparent resolve of GOP voters to do something about it.
A Quinnipiac University survey released last week showed that a majority of Republicans agree with the Trump White House’s determination that the press is the “enemy of the people.” An Ipsos poll released around the same time confirmed that close to a majority of GOP voters believe “the news media is the enemy of the American people.” That same poll showed that a significant plurality of GOP voters—43 to 36 percent—think Trump should have the expressly unconstitutional authority to shutter media outlets with which he disagrees. Or, rather, “news outlets engaged in bad behavior,” whatever that means.
The GOP base also seems generally unfazed by Donald Trump’s bizarre rhetorical deference to Russian President Vladimir Putin. An Economist-backed YouGov poll in early July showed 56 percent of the GOP said that “Donald Trump’s relationship with Vladimir Putin is mostly a good thing for the United States,” while only 40 percent said that the United States should remain a member of the NATO alliance. In 2014, only 22 percent of Republicans thought of Russia as friendly toward or allied with the United States. Today, via Gallup, that’s up to 40 percent of Republicans.
Given that, it’s no surprise that 70 percent of self-identified Republicans broke with the vast majority of the public and gave the president high marks for his press conference alongside Putin, in which he disparaged his own Cabinet and intelligence officials and heaped praise upon the autocrat in the Kremlin.
Donald Trump’s rhetorical servility toward Putin contrasts greatly with his administration’s admirably hawkish posture toward Moscow, but don’t ask Republican voters to reconcile these contradictions. A July Fox News poll found that 57 percent of GOP voters think that Trump’s toughness toward Russia is “about right.” So, which is it?
“An attack on law enforcement is an attack on all Americans,” Donald Trump said to the applause of Republicans as he accepted the party’s presidential nomination. Ever since, the president has occupied his time attacking law enforcement, and Republicans are with him all the way.
Seventy-five percent of Republicans in a recent poll agree with the president that the special counsel’s office established by a Trump-appointed deputy attorney general is conducting a “witch hunt” targeting him and his allies. This position concedes that the 13 Russian nationals and 12 Russian intelligence officers indicted as a result of Robert Mueller’s work, and the five Americans who have pleaded guilty to various crimes, retain the president’s full faith and confidence. But perhaps that conclusion takes the average Republican voter literally and not seriously.
More seriously, six in ten Republicans tell pollsters that they believe the FBI is actively trying to frame the President of the United States for a crime. Logically, then, it stands to reason that most in the GOP believe that law enforcement is a politicized institution that is waging an underhanded campaign to de-legitimize an election and carry out something akin to a coup. In February, Reuters/Ipsos found that 73 percent of Republicans believe just that. But if the coup narrative were true and an existential threat to the foundations of the Republic had been uncovered, would Republicans really behave as they are—placidly allowing Democrats to out-raise, out-organize, and out-campaign GOP candidates consistently for over 18 months?
Even in trifling matters in which the stakes are so low that they hardly merit the effort it takes to lie—like the president’s baseless claim that “between 3 million and 5 million people voted illegally in the 2016 presidential election,” thus robbing him of a popular-vote victory—a majority of Republican voters are willing to compromise themselves. And only to spare the president from the shame of trivial embarrassment.
Some contend that these results are an outgrowth of the fact that voters have deciphered the pollster’s game. Respondents are savvy enough to know when survey-takers are genuinely trying to take the public’s temperature on an issue and when they are merely seeking to exacerbate tensions within the GOP camp. Thus, this line of reasoning goes, respondents who support Trump are more likely to answer questions in a way that demonstrates their fealty toward the president even if they don’t necessarily hold that position. In other words, these are all lies. Maybe that’s true, but it’s cold comfort. The lies we tell ourselves become our truth if we tell them often enough.
Early in the morning of July 19, after eight hours of debate, the Knesset passed by a vote of 62–55 (with two abstentions) a law codifying Israel’s status as the national home of the Jewish people. First introduced in 2011 by the centrist Kadima Party, the so-called nation-state bill joined more than a dozen “Basic Laws” that now function as Israel’s unwritten constitution. Its 11 paragraphs mostly restate long-operative principles of Israeli democracy: Hebrew is the national language, “Hatikvah” is the national anthem, the menorah is the national emblem, Jerusalem is the nation’s capital, and Israel is where the self-determination of the Jewish nation is exercised.
One might find it surprising that such generalities would provoke a global outcry. Then again, Israel and selective indignation seem to go together like peanut butter and jelly. Criticisms run the gamut from saying the law is unnecessary and provocative to saying it’s racist and anti-democratic. The Israeli left, in alliance with Israel’s minority Arabs and Druze, has marched in the streets. Institutions of the Jewish Diaspora have called for the law’s repeal. They have found themselves, rather uneasily, on the same side of the debate as anti-Zionists and Israel-haters in the West Bank and Gaza Strip, in Muslim capitals, and in the EU and UN. “The spirit of Hitler, which led the world to a great catastrophe, has found its resurgence among some of Israel’s leaders,” said Turkey’s Recep Tayyip Erdogan.
Leaving aside anti-Semites such as Erdogan, reasonable people and friends of Israel may disagree about the necessity and utility of the nation-state law. Such disagreement, however, ought to be based on facts. And facts have been sorely lacking in recent discussions of Israel—thanks to an uninformed, biased, and one-sided media. Major journalistic institutions have become so wedded to a pro-Palestinian, anti–Benjamin Netanyahu narrative, in which Israel is part of a global trend toward nationalist authoritarian populism, that they have abdicated any responsibility for presenting the news in a dispassionate and balanced manner. The shameful result of this inflammatory coverage is the normalization of anti-Israel rhetoric and policies and widening divisions between Israel and the Diaspora.
For example, a July 18, 2018, article in the Los Angeles Times described the nation-state law as “granting an advantageous status to Jewish-only communities.” But that is false: The bill contained no such language. (An earlier version might have been interpreted in this way, but the provision was removed.) Yet, as I write, the Los Angeles Times has not corrected the piece that contained the error.
On July 19, in the New York Times, David M. Halbfinger and Isabel Kershner wrote that the Knesset’s “incendiary move” had been “denounced by centrists and leftists as racist and anti-democratic.” Why? Because the law “omits any mention of democracy or the principle of equality.” But that is because other Basic Laws already have codified the democratic and egalitarian character of Israel, including two laws dealing specifically with human rights.
The nation-state law, the Times continued, also “promotes the development of Jewish communities, possibly aiding those who would seek to advance discriminatory land-allocation policies.” Put the emphasis on possibly, because there’s nothing in the law to provide such aid.
Indeed, the nation-state law contains no additional rights for Jews; nor does it promulgate fewer rights for Arabs. Halbfinger and Kershner went on to say that the law “downgrades Arabic from an official language to one with a ‘special status.’” But then, far into the piece, the writers also acknowledged that “it is largely a symbolic sleight since a subsequent clause says, ‘This clause does not harm the status given to the Arabic language before this law came into effect.’”
A July 22 front-page article in the Times by Max Fisher was headlined “Israel Picks Identity Over Democracy. More Nations May Follow.” This was a funny way to characterize a law that had won majority support, following parliamentary procedure, of a democratically elected legislative body. Such through-the-looking-glass analysis riddled this piece, as well as the additional four news articles and four op-eds the Times has published on the matter at the time of this writing. In these pieces, “democracy” is defined as “results favored by the New York Times editorial board,” and Israel’s national self-understanding is in irrevocable conflict with its democratic form of government.
Fisher’s “Interpreter” column began with an anecdote recalling how David Ben-Gurion “emerged from retirement in July 1967” and “insisted that Israel give up the territories it had conquered” after repelling the invasion of three Arab armies a month earlier. Unfortunately for Fisher, this dramatic episode seems to be apocryphal. Historian Martin Kramer, after exhaustive research, concluded, “There’s no evidence that Ben-Gurion warned Israelis that their victory ‘had sown the seeds of self-destruction,’ either in July 1967 or later.” Fisher stands by his story.
The questionable claims did not stop there. “The quality of Israeli democracy has been declining steadily since the early 2000s,” Fisher continued, an era that just happens to have coincided with the rise of Israeli statesmen whose politics he and the political scientists he cites find detestable.
Fisher also mentioned a “wave of horrific violence known as the Second Intifada, which killed far more Palestinians than Israelis, [and] included shocking terrorist attacks in previously safe Israeli enclaves.” But where did this violence come from? Who committed the shocking terrorist acts? It’s left unsaid.
Denying Arab agency is a longstanding habit of Israel’s critics. And that is what’s noteworthy about these often-hysterical reactions to the nation-state law: The stories use the legislation merely as a jumping-off point for larger complaints about Israel’s Jewish character. For these writers, this isn’t a debate over the Israeli flag. It’s a debate over Jewish nationalism and a proxy for the Israeli–Palestinian conflict.
In a July 24 “Ideas” piece for Time, Ilene Prusher wrote, “It’s not clear that the equality outlined in the founders’ vision statement”—that’s progressive-speak for “Declaration of Independence”—“remains a goal. It’s certainly far from reality.” Prusher continued, “The new law provides legal teeth for discrimination that is currently de facto” and, citing a left-wing law professor at Hebrew University, “essentially makes discrimination constitutional.”
No, it doesn’t, actually. Rather than speculate, the nation-state bill’s opponents might try examining the actual text, which says absolutely nothing about discrimination. As Eugene Kontorovich of Northwestern University said during a recent episode of the Jewish Leadership Conference podcast, “Anything can be perverted—but that does not mean everything is perverse.”
The truth is that democracy is thriving in Israel. So are many of the values one normally associates with (egad!) the New York Times. Last I checked, Israel is the one country in the Middle East where you can attend an LGBT Pride parade. Noah Ephron, a critic of the nation-state law, points out that the proportion of women serving in the Knesset is higher than in the U.S. Congress or average EU parliament. There is universal health care. “Alone among Western democracies,” Ephron adds, “labor unions have grown bigger and stronger in Israel over the past decade.” Minority citizens are guaranteed the same rights as Jewish ones. And it is precisely these achievements that are sustained by Israel’s Jewish character and traditions.
The Times quoted Avi Shilon, a historian at Ben-Gurion University, who said dismissively, “Mr. Netanyahu and his colleagues are acting like we are still in the battle of 1948, or in a previous era.” Judging by the fallacious, paranoid, fevered, and at times bigoted reaction to the nation-state bill, however, Bibi may have good reason to believe that Israel is still in the battle of 1948, and still defending itself against assaults on the very idea of a Jewish State.