The Hazard of Moral Hazard
In 1963 Kenneth Arrow, the future Nobel laureate who happens to be the uncle of President Obama’s top economic adviser, Larry Summers, wrote a paper for the American Economic Review titled “Uncertainty and the Economics of Medical Care.” Arrow argued strenuously for vastly expanding health insurance, even if the government had to supply some of it, but he also recognized the moral hazard that health insurance creates. The ideal case for insurance, Arrow wrote, is “that the event against which insurance is taken be out of the control of the individual” who is insured—like, say, insurance against damage from a meteor crashing into your house. In health care, people who are insured (especially if the premiums are being paid by their employers) have greater incentives to risk their health by, for example, smoking than they would if they had to pay the bill for their lung-cancer treatment themselves.1 Insurers struggle to mitigate moral hazard by making people pay part of the cost of their care or, in the case of life insurance, by raising premiums on smokers or denying coverage to skydivers.
The most dangerous kind of moral hazard is produced not from explicit insurance policies (on which, after all, the insurer can raise premiums) but from implicit ones. If your teenager thinks you will bail him out of jail or fix it with the judge if he gets arrested, then he will be more apt to drive drunk. More broadly, in the jargon of Alcoholics Anonymous, your behavior would be called “enabling.” By rescuing an alcoholic from the consequences of his actions, you are encouraging him to drink because he figures you will rescue him the next time.
Over the past three decades, the world has been awash in just this kind of moral hazard, as governments have become more adept at economic rescue and as practitioners of the art have won praise for seeming to pull the world back from the abyss.
Consider the three government officials—Summers, then deputy secretary of the Treasury; Robert Rubin, his boss at Treasury; and Alan Greenspan, chairman of the Federal Reserve—who stared out from the February 15, 1999, cover of Time, which called them, with no irony, “The Committee to Save the World.” With help from the International Monetary Fund and the World Bank, the three, we were told, had averted a string of global disasters: the vaporizing of the Thai baht, the default on Russian bonds, and the near collapse of Long-Term Capital Management (a huge Greenwich, Connecticut, hedge fund whose partners included two Nobel laureates).
A few years later, the Fed chairman would appear to save the world once more by dramatically cutting U.S. interest rates after 9/11, in the process reinforcing what Wall Street called the Greenspan “put.” Ever since the 1987 stock-market crash, in bad times—the start of the Gulf War of 1990-91, the Mexican credit crisis of 1994, the Asian and Russian blights of the late 1990s, and the popping of the tech bubble in 2000—the Fed has pumped liquidity (that is, available money) into the financial system, saving investors from greater horrors by keeping stock prices buoyant.
As a result, investors began to believe that if they bought stocks, the Fed was offering them an implicit put option, which protected the price of their shares through thick and thin. “There has been a sense that participants in the market may think there is this cushion if things get ugly,” said Michael Prell, a former research director at the Fed, in an article in the Financial Times headlined “Greenspan Put May Be Encouraging Complacency.” The date was December 8, 2000.
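The mechanics of an actual put option make the metaphor concrete: a put struck at a given price guarantees its holder the right to sell at that price, flooring the downside while leaving the upside untouched. The sketch below is purely illustrative—the strike and prices are hypothetical, and the “Greenspan put” was of course never a real contract—but it shows why such a floor breeds complacency:

```python
def value_with_put(share_price: float, strike: float) -> float:
    """Value of one share held together with a put option struck at `strike`.

    The put grants the right to sell at `strike`, so the combined position
    is worth whichever is greater: the market price or the strike.
    """
    return max(share_price, strike)

# With a put struck at 90, losses below 90 simply vanish.
for price in (120.0, 100.0, 80.0, 60.0):
    print(f"market price {price:6.1f} -> position worth {value_with_put(price, 90.0):6.1f}")
```

An investor who believes the Fed will always pump liquidity in bad times behaves as if every share came bundled with just such a floor—which is precisely the cushion, and the complacency, that Prell warned about.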
Eight years later, the global economy faced a crisis caused by lenders who flamboyantly disregarded the creditworthiness of borrowers. In response, government rescuers instituted policies that made the efforts of Time’s Committee to Save the World seem quaint. In the committee’s heyday, the Asians got $40 billion in help, the Russians $23 billion, and the Greenwich hedge-fund guys $4 billion. But in 2008 and 2009, just two programs in one country—America’s Troubled Assets Relief Program (TARP) and its stimulus package—together totaled more than $1.5 trillion.
Meanwhile, the Fed cut short-term interest rates to zero, and the White House and Congress decided to save the country’s largest insurance company (American International Group, or AIG), its largest and second-largest automakers (General Motors and Chrysler), its two largest providers of mortgage financing (Fannie Mae and Freddie Mac), and various gigantic commercial and investment banks and finance companies. The creeping dread is that previous rescue policies had induced the moral hazard that made the 2008-09 crisis not only inevitable but also far worse in scale than the economic disruptions that had come before—and that the 2008-09 rescue will induce a disaster worse still.
The financial historian Charles Kindleberger, in his classic Manias, Panics, and Crashes, had it right more than 30 years ago when he described “the moral hazard that the more interventionist the authorities are with respect to the current crisis, the more intense the next bubble will be, because many of the market participants will believe that their possible losses will be limited by government measures.”
How much moral hazard is sloshing around the world? That can’t be measured, but Peter L. Bernstein, one of the great historians of economic risk, wrote in June: “The moral hazard imposed on the system in recent months is truly mind-boggling in scale and scope. Across the globe the banks and insurers whose errors of judgment created the bubbles have been bailed out without hesitation.”
Bernstein, who died in June at 91, wrote those words in a short cri de coeur published posthumously by the Harvard Business Review. He argued not just that there is a great deal of moral hazard around but also that, unlike in the past, few people seem worried about it. He was certainly a lonely voice. “I am disturbed,” he wrote, “by the almost complete absence of a dissenting conservative view”—that is, the view that “overprotectedness on the part of government officials . . . only encouraged more reckless risk-taking.”
Policymakers have responded that they had, and have, no choice. “The problem we have is that in a financial crisis, if you let the big firms collapse in a disorderly way, they’ll bring down the whole system,” Ben Bernanke, the current Fed chairman, said in July. “When the elephant falls down, all the grass gets crushed as well.” President Obama’s own favorite metaphor is the burning house:2 The fire could not be allowed simply to burn itself out, because the flames might jump to the next house and the next. So government, at whatever cost, had to douse the flames and stop the contagion—even if its actions would encourage people to build firetraps and smoke cigarettes in bed. Obama’s was a policy very different from what Treasury Secretary Andrew Mellon prescribed (and his president, Herbert Hoover, rejected) for the Great Depression: “Liquidate labor, liquidate stocks, liquidate the farmers, liquidate real estate.”
Liquidate Lehman Brothers? That, it turned out, was OK—but what government would have the courage to let General Motors fail? Or Citigroup? Wily policymakers recognize moral hazard, but like wily health insurers, they think they have a way to mitigate it. In a May 2008 speech, Bernanke said:
Central banks face a tradeoff when deciding to provide extraordinary liquidity support. . . . If market participants come to believe that the Federal Reserve or other central banks will take such measures whenever financial stress develops, financial institutions and their creditors would have less incentive to pursue suitable strategies for managing liquidity risk and more incentive to take such risks.
Lately, of course, the Fed has taken such measures. But in that May 2008 speech, Bernanke told us not to worry, there was an answer:
The problem of moral hazard can perhaps be most effectively addressed by prudential supervision and regulation that ensures that financial institutions manage their liquidity risks effectively in advance of the crisis.
At the time Bernanke spoke, of course, the Dow Jones Industrial Average was floating along at 12,900, and the Fed had just pumped emergency cash into Bear Stearns as part of its forced sale to JPMorgan Chase. There was still widespread faith that the formula of “extraordinary liquidity support” plus “prudential regulation” could overcome any problems caused by moral hazard.
Clearly, the formula didn’t work.
Nor should we have expected it to. For the dirty little secret is that regulation can enhance moral hazard, not dampen it. When people expect regulations to protect them, they lose the incentive to protect themselves.
Can regulators adequately police the 8,400 banks that are insured by the Federal Deposit Insurance Corporation? Of course not. Nor can the Securities and Exchange Commission (SEC) protect investors against fraud. As Arthur Levitt, the longest-serving chairman in the history of the SEC, puts it, “A very skillful criminal can almost always outfox the regulator or the overseer.” When investors rely on regulators, they let their guard down and are more likely to make mistakes. In rejecting self-reliance in favor of confidence in government, investors disregard actions that will truly protect them—like diversifying their portfolios or simply using good common sense.
Regulations cannot protect against every contingency. To echo the opening line of Anna Karenina, each period of financial excess is excessive in its own way. Financial regulators are too busy protecting against the last disaster to think about the next one. And businesses, especially those in the financial sector, apply high-priced brainpower to finding ways to evade regulation through means such as the off-balance-sheet partnerships that concealed how close Enron was to bankruptcy.
Only investors themselves, tens of millions of them, can apply the vigilance needed for effective deterrence. But if those investors think that they’re being protected by the SEC or the FTC or the new Consumer Financial Protection Agency that Obama wants to create, they grow complacent or, worse, feel they have license to engage in risky behavior themselves: If the FDIC is insuring my bank deposits, I can afford to take a flier in penny gold-mining stocks.
The economist Sam Peltzman of the University of Chicago first recognized this phenomenon in 1975. The “Peltzman Effect” holds that people often react to safety regulations by increasing their risky behavior. A law requiring seat-belt use, for example, leads to an increase in speeding.3 “The greater protection,” wrote Peltzman, “had reduced the price of risky driving . . . by reducing the consequences you could expect if you got into an accident.” So people found their risks somewhere else.
Is regulation futile? Not completely, but if policies increase moral hazard and promote dangerously risky behavior by businesses, you can’t expect regulation to offset the adverse effects. A better approach might be termed organic. We must find ways to make sensible risk aversion second nature, ingrained, reflexive. One antidote would be to increase the personal exposure of financial risk takers. That exposure was greatly reduced starting in 1981—right before the moral-hazard problem began to burgeon with the Latin American “debt bomb” economic crisis—when Wall Street’s investment firms, starting with Salomon Brothers, switched from being organized as partnerships to becoming corporations after a ruling by the New York Stock Exchange made that possible.
In a partnership, the owners are on the hook personally for the firm’s liabilities; in a corporation, the personal holdings of owners are walled off from the risks they take.4 To be sure, the corporation was a great invention. After all, we want entrepreneurs to take risks. Without corporations—and bankruptcy, for that matter—the “animal spirits” that John Maynard Keynes recognized as critical to economic growth would be suppressed. Academic research has found, for instance, that states with the most forgiving bankruptcy laws are home to the most entrepreneurial activity.
But finance is different. The external effects of a bank’s failure are not nearly so terrible as Obama’s fire metaphor makes them seem, but they are far worse than, say, a retailer shutting its doors. Kevin Dowd, an economist who specializes in risk management, wrote recently about the banking crisis in the Cato Journal, “The root problem is limited liability, which allows investors and executives the full upside benefit of their risk-taking, while limiting their downside exposure.” Dowd quoted Adam Smith’s warning about corporations in The Wealth of Nations:
The directors of such companies . . . being the managers rather of other people’s money than of their own, it cannot well be expected that they should watch over it with the same anxious vigilance. . . . Negligence . . . must always prevail, more or less, in the management of the affairs of such a company.
If the managers of Bear Stearns had also been its owners, and if they had faced losing their bank accounts, their cars, and their second homes, they would have been far less likely to take enormous risks in, for instance, subprime mortgage securities.
Another example of the organic approach would be to reduce, rather than increase, government’s role in protecting consumers. Consider federal deposit insurance, which was instituted in 1934 to prevent runs on banks. Originally, deposits were insured up to $2,500; today the limit is $250,000. In practice, as Kindleberger points out, the federal government protects all depositors in insured banks. The effect, he writes, is that insurance “encouraged banks to make riskier loans since they were confident that they were protected against runs—if these loans proved profitable, the owners of the banks would benefit.” If the government cut the limit on insurance to, say, $20,000, that single act would send a strong signal to consumers (put your money in a strong bank rather than a weak one) and to bankers (shore up your balance sheet or you won’t get deposits). That this is not likely to happen, to put it mildly, has nothing to do with whether it should happen.
The best way to dampen moral hazard, however, is for politicians and regulators to resist the urge to act, act, act. They do have to maintain confidence in financial markets, but that confidence is being undermined by repeated economic crises that are caused, in large part, by the expectations raised by the interventions themselves. Never have expectations been raised so high. “Bad as the increased debt and subversion of the Fed may be,” Bernstein wrote, “their impact on our economic well-being pales in comparison with what could happen if the bailouts lessen our aversion to risk.”
At the very least, policymakers need to take moral hazard—in all its permutations—into their calculations. They did so in the past. Bernstein refers to the “hue and cry” that used to arise when “governments took steps to cushion the adverse consequences of bubbles for particular companies or sectors of the economy.” Today alarms are being raised about cost. But about the impact on behavior there is barely a peep. In fact, there is blithe indifference to the future effects of today’s policies and a kind of smug congratulatory air redolent of that 1999 Time magazine cover celebrating the Committee to Save the World. “The fire is out now,” said President Obama in July, reverting to his economic metaphor. Perhaps. But the embers are still glowing, waiting for the next gust of wind—or dose of kerosene—to ignite them.
1 An academic paper published by the National Bureau of Economic Research in July asked the question “Does Health Insurance Make You Fat?” The authors, economists Jay Bhattacharya and Kate Bundorf of Stanford and two colleagues, answered in the affirmative: “Our estimates suggest that, by insulating people from the costs of obesity-related medical care expenditures, insurance coverage creates moral hazard in behaviors related to body weight. These effects are larger in public insurance programs where premiums are not risk adjusted and smaller in private insurance markets where [the] obese might pay for incremental medical care costs in the form of lower wages.”
2 Fire metaphors have long been popular in discussions of financial crises. Unlike Obama, Thomas Joplin, in a contemporary letter about the British panic of 1825, argued that “the fire can be left to burn itself out” (“Case for Parliamentary Inquiry into the Circumstances of the Panic,” cited by Kindleberger).
3 Stephen J. Dubner, co-author of Freakonomics, wrote of the Peltzman Effect in the New York Times: “My favorite version of this theory is what I call the Lipitor Effect: if your daily diet includes 20 mg of the anti-cholesterol drug Lipitor, it can also include a pastrami sandwich.”
4 I first laid out this idea in “Bankers Need More Skin in the Game,” co-authored with William T. Nolan, in the Wall Street Journal, Feb. 25, 2009. The op-ed cited Brown Brothers Harriman as the only major investment bank that remains a partnership and argued that it was no coincidence that the firm had avoided the excesses of its corporate peers.