The last two decades have been a period of considerable expansion of judicial responsibility in the United States. Although the kinds of cases judges have long handled still occupy most of their time, the scope of judicial business has broadened. The result has been involvement of courts in decisions that would earlier have been thought unfit for adjudication. Judicial activity has extended to welfare administration, prison administration, and mental-hospital administration, to education policy and employment policy, to road building and bridge building, to automotive-safety standards, and to natural-resource management.
In just the past few years, courts have struck down laws requiring a period of in-state residence as a condition of eligibility for welfare. They have invalidated presumptions of child support arising from the presence in the home of a “substitute father.” Federal district courts have laid down elaborate standards for food handling, hospital operations, recreation facilities, inmate employment and education, sanitation, laundry, painting, lighting, plumbing, and renovation in some prisons; they have ordered other prisons closed. Courts have established equally comprehensive programs of care and treatment for the mentally ill confined in hospitals. They have ordered the equalization of school expenditures on teachers’ salaries, established hearing procedures for public-school discipline cases, decided that bilingual education must be provided for Mexican-American children, and suspended the use by school boards of the National Teacher Examination and of comparable tests for school supervisors. They have eliminated a high-school diploma as a requirement for a fireman’s job. They have enjoined the construction of roads and bridges on environmental grounds and suspended performance requirements for automobile tires and air bags. They have told the Farmers Home Administration to restore a disaster-loan program, the Forest Service to stop the clear-cutting of timber, and the Corps of Engineers to maintain the nation’s non-navigable waterways. They have been, to put it mildly, very busy, laboring in unfamiliar territory.
What the judges have been doing is new in a special sense. Although no single feature of most of this litigation constitutes an abrupt departure, the aggregate of features distinguishes it sharply from the traditional exercise of the judicial function.
First of all, many wholly new areas of adjudication have been opened up. There was, for all practical purposes, no previous judge-made law of housing or welfare rights, for example. To some extent, the new areas of activity respond to invitations from Congress or, to a much lesser extent, from state legislatures. Sometimes these take the form of judicial-review provisions written into new legislation. Sometimes they take the form of new legislation so broad, so vague, so indeterminate, as to pass the problem to the courts. They then have to deal with the inevitable litigation to determine the “intent of Congress,” which, in such statutes, is of course nonexistent.
If some such developments result from legislative or even bureaucratic activity (interpretation of regulations, for example), then it is natural to see the expansion of judicial activity as a mere concomitant of the growth of the welfare state. As governmental activity in general expands, so will judicial activity.
But that is not all that is involved. Much judicial activity has occurred quite independently of Congress and the bureaucracy, and sometimes quite contrary to their announced policies. The very idea is sometimes to handle a problem unsatisfactorily resolved by another branch of government. In areas far from traditional development by case law, indeed in areas often covered densely by statutes and regulations, the courts have now seized the initiative in lawmaking. In such areas, the conventional formulation of the judicial role has it that courts are to “legislate” only interstitially. With the important exception of judicial decisions holding legislative or executive action unconstitutional, this conventional formulation of what used to be the judicial role is probably not far from what judges did in fact do. It is no longer an adequate formulation.
What the courts demand in such cases, by way of remedy, also tends to be different. Even building programs have been ordered by courts, and the character of some judicial decrees has made them, de facto, exercises of the appropriation power. A district court order rendered in Alabama had the effect of raising the state’s annual expenditure on mental institutions from $14 million before suit was filed in 1971 to $58 million in 1973, a year after the decree was rendered. Decisions expanding welfare eligibility or ordering special education for disturbed, retarded, or hyperactive pupils have had similar budgetary effects. “For example, it is estimated that federal court decisions striking down various state restrictions on welfare payments, like residency requirements, made an additional 100,000 people eligible for assistance.”1 It is no longer even approximately accurate to say that courts exercise only a veto. What is asked and what is awarded is often the doing of something, not just the stopping of something.
To be sure, courts have always had some say in the way public funds were spent. How else could they award damages against the government? But even in the aggregate, decisions ordering a municipality to pay for an injury sustained by someone who trips over a loose manhole cover are not generally important enough to influence the setting of public priorities. The recent decisions that require spending to achieve compliance with a newly articulated policy are something else again.
It is also true that both affirmative and negative relief (orders to do something and orders to stop doing something) have a long history in English equity jurisprudence. The hoary remedies of mandamus and specific performance both require affirmative action—but action of a very circumscribed, precise sort, the limits of which are known in advance of the decree. Mandamus traditionally compels performance of an official duty of a clear and usually trivial sort; generally, compliance is measured by performance of one or two simple acts. Specific performance compels compliance with certain kinds of contractual obligation, the exact nature of the obligation spelled out in the contract. But specific performance is not traditionally awarded to compel performance of a contract for personal services, one significant reason being that the courts would then find themselves deep in the management of a continuing relationship, perhaps a whole business enterprise.
Again, therefore, compelling the performance of certain affirmative acts is nothing new in principle, but it is new in degree. The decree of a federal district judge ordering mental hospitals to adhere to some eighty-four minimum standards of care and treatment represents an extreme in specificity, but it is representative of the trend toward demanding performance that cannot be measured in one or two simple acts but in a whole course of conduct, performance that tends to be open-ended in time and even in the identity of the parties to whom the performance will be owed. Remedies like these are reminiscent of the kinds of programs adopted by legislatures and executives. If they are to be translated into action, remedies of this kind often require the same kinds of supervision as other government programs do.
This leads to still another difference in degree between adjudication as it once was and as it now is. Litigation is now more explicitly problem-solving than grievance-answering. The individual litigant, though still necessary, has tended to fade a bit into the background. Courts sometimes take off from the individual cases before them to the more general problem the cases call up, and indeed they may assume—dubiously—that the litigants before them typify the problem.
Once again, of course, it is all too easy to fabricate an idealized judicial past that consigned judges merely to resolving individual disputes. It has not been that way. In articulating the law of negligence from one case to the next, judges have tried to lay down a standard of care calculated to reduce the incidence of personal injury and property damage without unduly raising the expense of doing so. Many other common-law rules could be described in similar terms, as much efforts to frame behavioral standards as to apply them. Some of the most formidable difficulties faced by common-law judges have arisen in cases that present the judges with an inescapable choice between doing justice in the individual case and doing justice in general.
For all that, however, the individual and his case remained indispensable. Courts paid particular attention to the interplay between the facts of the individual case and the facts of the class of cases they projected from it. Without the particular case, the task of framing standards was devoid of meaning. It is inconceivable, for example, that even a great, innovative common-law court like the New York Court of Appeals early in this century would have countenanced deciding a case that had become moot. That some issues might forever escape judicial scrutiny because of the doctrine that a moot case is not a case at all would have struck even bold judges of a few decades ago as entirely natural.
Today it is repellent to many judges. For the view has gained ground that the judicial power is, by and large, coterminous with the governmental power. One test of this is the withering of the mootness doctrine in the federal courts. The old prohibition on the decision of moot cases is now so riddled with exceptions that it is almost a matter of discretion whether to hear a moot case. The argument for deciding a case that has become moot is often the distinctly recent one that there is a public interest in the judicial resolution of important issues. In contrast, the earlier view was that there was a public interest in avoiding litigation. By the same token, dismissal for mootness has become a practice reserved for invocation when it is unimportant, inconvenient, or impolitic to decide the issues a case raises.
What this shift signifies is the increasing subordination of the individual case in judicial policymaking, as well as the expansion of judicial responsibility more nearly to overlap the responsibilities of other governmental institutions. The individual case and its peculiar facts have on occasion become mere vehicles for an exposition of more general policy problems. Consequently, somewhat less care can be devoted, by lawyers and judges alike, to the appropriateness of particular plaintiffs and to the details of their grievances.
At the same time, the courts have tended to move from the byways onto the highways of policy-making. Alexander M. Bickel has captured, albeit with hyperbole, the thrust of the new judicial ventures into social policy. “All too many federal judges,” he has written, “have been induced to view themselves as holding roving commissions as problem solvers, and as charged with a duty to act when majoritarian institutions do not.” The hyperbole is itself significant: many federal judges regard themselves as holding no such commission, yet even they have embarked on “problem-solving” ventures. This is the surest sign that the tendency is not idiosyncratic but systemic: it transcends, in some measure, individual judicial preference and calls for systematic explanation.
The remote sources of the broad sweep of judicial power in America lie deep in English and American political history. The immediate origins of recent shifts in judicial emphasis are another matter. These are several, and they have tended to build on each other.
Most obvious has been the influence of the school-desegregation cases. These decisions created a magnetic field around the courts, attracting litigation in areas where judicial intervention had earlier seemed implausible. The more general judicial activism of the Warren Court signaled its willingness to test the conventional boundaries of judicial action. As this happened, significant social groups thwarted in achieving their goals in other forums turned to adjudication as a more promising course. Some organizations saw the opportunity to use litigation as a weapon in political struggles carried on elsewhere. The National Welfare Rights Organization, for example, is said to have turned to lawsuits to help create a state and local welfare crisis that might bring about a federal guaranteed income. The image of courts willing to “take the heat” was attractive, too, to legislators who were not. Such social programs as the poverty program had legal-assistance components, which Congress obligingly provided, perhaps partly because they placed the onus for resolving social problems on the courts. Soon there were also privately funded lawyers functioning in the environmental, mental-health, welfare-rights, civil-rights, and similar fields. They tended to prefer the judicial road to reform over the legislative. They raised issues never before tested in litigation, and the courts frequently responded by expanding the boundaries of judicial activity.
Major doctrinal developments both followed and contributed to the increase in number and the change in character of the issues being litigated. The loosening of requirements of jurisdiction, standing, and ripeness (to name just three) helped spread judge-made law out, moving it from the tangential questions to the great principles. If these doctrinal decisions mean anything, it is that the adjudicative format is less and less an inhibition on judicial action and that the lawsuit can increasingly be thought of as an option more or less interchangeable with options in other forums, except that it has advantages the other forums lack. Hence the time-honored tradition that those who lose in the legislature or the bureaucracy may turn to the courts has lost none of its appeal. Only the identity of those who turn from one forum to another has undergone some change. Deprived social groups have joined the advantaged in the march to the courthouse. But where the wealthy invariably want the courts to strike down action the other branches have taken, the disadvantaged often ask the courts to take action the other branches have decided not to take. The character of the demand for action is therefore different.
Some major obstacles of a practical sort have also been cleared away. It still takes years to conclude most litigation, but some courts have shown a willingness to expedite hearing schedules to speed up the disposition of injunction cases. It still costs large sums of money to bring suit, and reductions in foundation funding of public-interest law firms are contracting their efforts. But decisions awarding attorneys’ fees may have eased the hardship and proliferated the cases. In one decision, an attorney’s fee was awarded even though the plaintiff’s lawyer had agreed to represent him without charge. The court said the award would encourage lawyers to represent public-interest clients without fees in the hope that a fee would be awarded—and no doubt it would have that effect. In 1975, the Supreme Court restricted the award of attorneys’ fees in federal cases unless authorized by Congress. But many statutes do allow attorneys’ fees, and proposed legislation in some areas may be more generous in allowing expert witness fees as well.
It is still true, too, that legislative remedies—when they are forthcoming—may be more systematic and inclusive than judicial remedies. Yet more often than not, the judicial remedy has a directness, a concreteness, and a lack of equivocation notably absent in schemes that emerge from the political process. More and more, the courts have turned to decrees that afford comprehensive relief, often of a far-reaching sort. Support in the Anglo-American equity tradition for a decree as broad as the occasion warrants is unmistakable. The problem of school desegregation has tested this tradition, and the judges have sometimes proved as resourceful as the English chancellors from whom their equitable powers spring. In the process, the willingness to entertain remedies as inclusive as those a legislature might provide has grown, even to the point where the judicially ordered consolidation of school districts or equalization of tax burdens across districts—wholly beyond imagination not long ago—have become debatable measures, indeed litigated issues.
All of this has taken place, perhaps could only take place, in a society given to an incomparable degree of legalism. In the United States, as Tocqueville observed long ago, “all parties are obliged to borrow, in their daily controversies, the ideas, and even the language, peculiar to judicial proceedings. . . . The language of the law thus becomes in some measure a vulgar tongue; the spirit of the law, which is produced in the schools and courts of justice, gradually penetrates beyond their walls into the bosom of society, where it descends to the lowest classes, so that at last the whole people contract the habits and the taste of the judicial magistrate.” The American proclivity to think of social problems in legal terms and to judicialize everything from wage claims to community conflicts and the allocation of airline routes makes it only natural to accord judges a major share in the making of social policy. No doubt, this underlying premise in American thought was a necessary, though insufficient, condition for the expansion of judicial responsibility that has taken place over the last twenty years.
The tendency to commit the resolution of social-policy issues to the courts is not likely to be arrested in the near future. The nature of the forces undergirding the tendency makes them not readily reversible. Doctrinal erosion in particular is not easily stopped. Ironically, perhaps, the traditional judicial conception of precedent makes it more difficult for courts to change course dramatically than for the other branches of government to do so. The generally greater stability of judicial personnel, appointed for life in the federal courts, appointed or elected to long terms in the states, also makes for continuity. The statutes already enacted and continuing to be enacted, lodging authority for policy-making in the courts by explicit provision or by default, are enough to propel judicial activity for some time to come. And the attractiveness of passing problems to the judges is unabated. The new responsibilities of the courts are not just the product of individual states of mind.
Even to the limited extent that judges can contract their recently expanded commitments, some curious twists are possible. Supreme Court Justices, appointed because a President thinks they will construe the Bill of Rights more narrowly than their predecessors, have a way of becoming entangled in institutional tradition. No contraction is likely to take place on all fronts.
Beyond that, expansive exercises in statutory construction may be untouched by a contraction in constitutional adjudication. As a matter of fact, judges who recoil at innovation in constitutional lawmaking may not see the same dangers at all in the interpretation of statutes. It is customary to think that judicial self-limitation in constitutional interpretation is important because constitutional law is a permanent inhibition on policy: a legislature cannot override an interpretation of the Constitution. In statutory law, judges may reason, basic policy choices have been made by other branches, and judicial construction may later be overridden by them. For these reasons, judges with a strong sense of institutional limitation are less likely to let it stand in the way of innovation short of constitutional interpretation.
The soundness of this conventional distinction between the scope of constitutional and non-constitutional adjudication need not detain us here. Whether or not decisions that rest on constitutional foundations are really more permanent than those that do not is beside the point—which is simply that traditional counsels of restraint that apply to the former do not apply to the latter. Judges concerned to avoid the excesses that are believed to have characterized the Supreme Court of the 1930’s and 1960’s may still embark on ambitious ventures of judicial reform in the name of statutory construction.
Let me give two examples. Both, as it happens, are from the civil-rights field, and both are decisions of the Supreme Court, but they might just as easily have come from some other court acting in some other field.
The first is Griggs v. Duke Power Co., in which the Court unanimously read Title VII of the Civil Rights Act of 1964 to require the elimination of tests and diplomas as job requirements if they disqualify prospective black employees at a higher rate than whites, unless the employer can show that the test or diploma bears a “demonstrable relationship” to successful job performance. Employment practices that have no discriminatory intent, the Court said, are to be measured by “business necessity.” Tests that prevent minority applicants from obtaining jobs in proportion to their numbers must be shown to “measure the person for the job and not the person in the abstract.” So the Court, in an opinion by Chief Justice Burger, interpreted the Civil Rights Act.
The act had forbidden job discrimination on grounds of “race, color, religion, sex, or national origin.” An amendment, added on the floor of the Senate in order to clarify and reaffirm the rights of employers to use “ability tests” to screen potential employees, had insulated such tests from scrutiny, provided the tests were “not designed, intended, or used to discriminate” on racial or other forbidden grounds. That, at least, is what the Senators thought.
In the event, the Supreme Court read the word “used” to mean used intentionally or unintentionally, and so to forbid tests like the aptitude test used in Griggs unless the employer could prove it measured job performance in the narrow sense.
There is convincing legislative history to show that Congress intended the opposite of the result reached in Griggs. This was a provision on which Congress was at pains to make itself clear. The sponsors of the testing amendment explicitly wished to insure that no court would prohibit a testing procedure to evaluate applicants because some categories of applicants might do better on the test than others. The Senators plainly believed that general intelligence and aptitude tests were job-related, and meant to exempt them from the act for that reason. It is not surprising, therefore, that the Court’s handling of the legislative history is halting and embarrassed.
Griggs has already been interpreted very broadly by lower federal courts to require a number of significant changes in public and private employment practices. There have been suggestions that Griggs’s requirement of expensive validation of employment tests may cause firms to abandon testing and move to more subjective (and potentially more biased) methods of screening applicants—or, on the other hand, to proportional hiring on a racial basis in order to avoid charges of discrimination. Either way, the result contravenes the twofold legislative intention, which was to forbid preferential hiring on a racial basis and “to allow an employer’s bona fide use of professionally developed tests despite their disparate impact on culturally disadvantaged minorities.”
In its skepticism about aptitude tests and formal credentials, Griggs accords with some recent research questioning the predictive value of such requirements for job performance. But Griggs cannot be understood as a traditional exercise in statutory exegesis.2
The second example is Lau v. Nichols. Lau arose under Title VI of the Civil Rights Act of 1964, which forbids discrimination on grounds of “race, color, or national origin” in “any program or activity receiving federal financial assistance.” As a condition of federal aid, the Department of Health, Education, and Welfare had required school districts to “take affirmative steps to rectify the language deficiency” of “national-origin minority group children” unable to speak and understand English. So enlarged by the regulations, the act was read by the Supreme Court, 9-0, to require the San Francisco school system to take some action to rectify the language deficiencies of some 1,800 Chinese-American pupils (most of them born in the United States) who do not speak English.
The decision has already had important consequences across the nation. Indeed, it is one of the ironies of Lau that, though it was rendered in the case of Chinese-origin pupils, it will affect primarily the rights of Spanish-speaking children all over the United States. The decision has been widely interpreted—erroneously—as requiring “bilingual-bicultural” education, rather than remedial instruction in English. Stimulated by Lau, the Texas legislature has recently enacted a law making bilingual education mandatory in schools with twenty or more children whose ability in English is limited, and there are movements for bilingual education in several other states. The decision will no doubt affect instructional programs and language choice across the nation.
Lau may be interpreted as a judicial experiment. The Court did not hold that any particular action was required of the school system—only that inaction was forbidden. The case was remanded to the district court for the fashioning of relief, and the Court may have thought there would be time enough for a second look after a decree was entered.
This may explain the offhanded way in which the case was disposed of, for the majority and concurring opinions contain no serious discussion of the central issues of the litigation: whether it constitutes discrimination to teach all pupils in English regardless of their fluency at the time they enter school, and whether, in any event, linguistic discrimination constitutes discrimination on the basis of “race, color, or national origin.” Discrimination is generally thought of as differential treatment. What was complained of in Lau was the failure to differentiate in the instruction given pupils fluent in Chinese and not fluent in English. Similarly, linguistic loyalty and national origin are related but are not the same thing. No claim was made in Lau that Chinese-origin pupils were affected adversely by the policies of the school system—only that monolingual, Chinese-speaking pupils (most of whom had their “national origin” in the United States) were so affected. There is little in the language of Title VI that suggests it contemplates affirmative action to remedy linguistic deficiencies, and there is nothing in the legislative history that hints at such a purpose.
These two seminal decisions of the Burger Court—neither of which drew a single dissent—should suffice to show that judicial participation in the making of social policy is no ephemeral development. It is part of a chain of developments that can survive the vicissitudes of constitutional “activism” and “restraint.” No one could mistake these decisions for “interstitial” statutory interpretation, since they are departures from the language and legislative history of the statutes they construe. Nor does it need to be stressed that these decisions were not solely the work of judges who “view themselves as holding roving commissions as problem solvers.” That they do not so regard themselves attests to the structural and enduring character of the phenomena I have been describing.
The appropriate scope of judicial power in the American system of government has periodically been debated, often intensely. For the most part, what has been challenged has been the power to declare legislative and executive action unconstitutional. Accordingly, the debate has been cast in terms of legitimacy. A polity accustomed to question unchecked power views with unease judicial authority to strike down laws enacted by democratically elected legislatures. Where, after all, is the accountability of life-tenured judges? This question of democratic theory has been raised insistently, especially in times of constitutional crisis, notably in the 1930’s and again in the 1950’s.
The last word has not been heard in these debates, and it will not soon be heard. The structure of American government guarantees the issue a long life. But, for the moment, the debate seems to have waned with the growing recognition that there are elements of overstatement in the case against judicial review. The courts are more democratically accountable, through a variety of formal and informal mechanisms, than they have been accused of being. Equally important, the other branches are in many ways less democratically accountable than they in turn were said to be by those who emphasized the special disabilities under which judges labor. Hence the many academic discussions of the need for “representative bureaucracy,” for a less insular Presidency, and for reform of the procedures and devices that make Congress undemocratic internally and unrepresentative externally. (That students of any single institution often tend to see that institution as the flawed one is a useful indication of the limited perspective that comes from single-minded attention to any one institution. It should properly make us chary of drawing inferences about the courts without an institutionally comparative frame of reference.)
As the debate over the democratic character of judicial review wanes, there is another set of issues in the offing. It relates not to legitimacy but to capacity, not to whether the courts should perform certain tasks but to whether they can perform them competently.
Of course, legitimacy and capacity are related. A court wholly without capacity may forfeit its claim to legitimacy. A court wholly without legitimacy will soon suffer from diminished capacity. The cases for and against judicial review have always rested in part on assessments of judicial capacity: on the one hand, the presumably superior ability of the courts “to build up a body of coherent and intelligible constitutional principle”; on the other, the presumably inferior ability of courts to make the political judgments on which exercises of the power of judicial review so often turn. If the separation of powers reflects a division of labor according to expertise, then relative institutional capacity becomes relevant to defining spheres of power and particular exercises of power.
The recent developments that I have described necessarily raise the previously subsidiary issue of capacity to a more prominent place. Although the assumption of new responsibilities can, as I have observed, be traced to exercises of the traditional power to declare laws unconstitutional, they now transcend that power. Traditional judicial review meant forbidding action, saying “no” to the other branches. Now the judicial function often means requiring action, and there is a difference between foreclosing an alternative and choosing one, between constraining and commanding. Among other things, it is this difference, and the problematic character of judicial resources to manage the task of commanding, that make the question of capacity so important.
Yet judicial intervention in matters of social policy has greatly increased and will not soon decrease. This expansion of judicial responsibility means, first, a broadening of the sphere of judge-made law, into areas that might once have been called “social welfare” and were not considered “legal” at all. It also means an expansion of the scope for judicial initiative within these areas. Courts are no longer as confined to the interstices of legislation as they once were—now the statute is often a mere point of departure—and they are no longer as inhibited as they once were from delving into supervisory or administrative responsibilities in connection with the remedies they award. They are more often found requiring detailed, affirmative, and specific action than previously. They are less constrained, too, by the limitations of the cases and the litigants before them. More openly, self-consciously, and broadly than before, the courts are engaged in efforts to shape or control the behavior of identifiable social groups, groups not necessarily before the court: welfare administrators, employers, school officials, policemen.
What this means is that there is somewhat less institutional differentiation today than two decades ago. There is now more overlap between the courts and Congress in formulating policy and between the courts and the executive in both formulating and carrying out programs. That is, the types of decisions being made by the various institutions—their scope and level of generality—seem to be converging somewhat, though the processes by which the decisions are made and the outcomes of those processes may be quite different—as different as the groups who maneuver to place an issue before one set of decision-makers rather than another, or who, defeated in one forum, turn hopefully to the next, believe them to be. Thus, to say that there is convergence in the business of courts and other institutions is not tantamount to saying that it makes no difference who decides a question. On the contrary, it matters a good deal, for the institutions are differently composed and organized. The real possibility of overlapping responsibilities but opposite outcomes makes the policy process a more complex and drawn-out affair than it once was.
The recency, the incompleteness, and the incremental history of these developments should not obscure their portentousness. It is just possible that these modifications in the scope of judicial power will one day amount to a major structural change. We regard as quaintly and unduly restrictive the medieval conception of legislation as mere restatement of customary law. Future generations may likewise view our distinctive association of adjudication with the grievances of individual litigants as an equally curious affectation.
It may be, of course, that something much less significant than this is in the offing. For the purposes of this discussion, it makes little difference. The changes of degree that are already visible are quite enough to raise important questions about the consequences of using the judicial process for the resolution of social-policy issues.
Many of the most serious questions relate to the way in which courts get their information. Consider the position of the judge. His formal function is to decide a dispute between two parties, a role for which his training and experience generally equip him well. The legal rules and machinery through which the judge works are also geared to the controversy between the parties. Virtually all of the conventions of litigation leave the initiative to the parties. What they elicit, the judge hears. What they neglect, he neglects. The rules of evidence are also designed with the litigants’ case, and that alone, in mind. Evidence about their relationship, their characteristics, their transactions, is generally relevant and admissible. Evidence about more general conditions is often inadmissible and, when admissible, is treated far more circumspectly, both by the law and by the judges who apply it.
Most of the time these rules and conventions are well adapted to the business of the courts. Focusing on litigants and their controversy is perfectly appropriate when their own controversy is all that is at stake. But when questions of social policy arise in the guise of a lawsuit, that is another matter. Then the controversy between the litigants is really secondary to the larger questions that their lawsuit raises. The judge can learn all there is to learn about the parties and their dispute without being very much wiser about the general problem their case is said to reflect.
As a matter of fact, the judge may be seriously misled if he pays close attention to the facts of the case before him, for that case is almost surely unrepresentative of the general class of cases it is supposed to represent. Enough is known about when and why people bring suit to know that the average plaintiff is not just like everybody else with a similar problem. No lawyer who seeks a favorable decision in welfare rights or prison reform or any other field will, if given a choice, be content to bring just a run-of-the-mill case to court. Instead, he will choose the worst case, the most extreme case, that comes his way.
So the cases that come to court are by no means typical, and a judge who masters the facts of the case before him has at best a very rough—and sometimes stereotyped—idea of the dimensions of his policy problem. How the courts are to inform themselves of the diverse social conditions that are increasingly relevant to decision is a question that insistently demands an answer.
The judge’s difficulties, however, only begin there. After he is convinced that some unlawfulness has been identified, a remedy has to be devised. At this point, the judge, like all other policy-makers, has gone into the prediction business. He has to sense what is required to get the results he aims at, and he must forecast the consequences of various alternative decrees that might be formulated.
Yet the same rules, conventions, and procedures that focus the judge’s attention on the litigants and the history of their controversy also focus the judge on the past more than the future. Since court decisions declare rights and duties arising out of previous transactions, the framing of the decree usually gets far less meticulous attention than does proof of the “wrongs” which give rise to the decree. Litigation tends to be backward-looking, and orienting it toward forecasting is no simple matter.
Questions also remain about the way in which court orders are carried out. Accustomed to thinking in terms of “compliance” or “noncompliance,” judges do not necessarily sense the scope that exists for effectuating a judicial decree in one way rather than another. Yet what happens to a decree after it leaves the courthouse is every bit as important as what has gone before.
In fact, the accessibility and rationality of the judicial process may lead the participants to think that the problem of implementation is more straightforward than it is. One reason the policy issue is in court at all may be that action on it has been thwarted in the other branches of government by the myriad influences and interests that are represented there. In court, fewer interests are represented, and fewer still are in a position to thwart action. Hence the attractiveness of the judicial forum for groups that find it hard to get their way in the other branches.
The judicial process thus reduces the number of participants and makes it possible to cut through to an apparent solution. But the courts cannot make the complex pattern of interests disappear altogether. If all the parties who have some stake in a policy decision one way or the other are not fully represented in court, they may nonetheless reappear and make their influence felt at the implementation stage. And so the judge who decrees this or orders that may later find that he has in fact produced something rather different from what he had in mind. The simplification of social and political complexity that occurs in the courtroom is only temporary.
After a number of such experiences, sensitive judges may well wonder whether the institution over which they preside, admirably suited as it is to processing individual cases, is really the right setting in which to thrash out the perplexing social-policy questions that increasingly come to court. Some judges may begin to think of ways to augment their capacity; others may prefer to emphasize the venerable canons of judicial restraint. And some may ultimately come to embrace Jeremy Bentham’s blunt assertion that “amendment from the judgment seat is confusion.”
1 Stuart Scheingold, The Politics of Rights: Lawyers, Public Policy, and Political Change, Yale University Press, 1974, p. 126.
2 It is interesting to note that the Court has recently declined to extend the rigorous Griggs standards for employment tests to cases arising under the Constitution rather than under Title VII. This may suggest that the majority is unhappy with what the lower courts have done with Griggs, and wishes to confine the impact of the decision. But it is also consistent with the thesis that some Justices are willing to be bolder in statutory interpretation than in constitutional interpretation.
Are the Courts Going Too Far?
A monstrous regime's rational statecraft
One of the more improbable geostrategic surprises of recent years has been the revival of the North Korean economy under the direction of Kim Jong Un. Just to be clear, that economy remains pitiably decrepit, horribly distorted, and desperately dependent on outside support. Recent estimates suggest that its annual merchandise exports do not reach even 1 percent of the level generated by its nemesis, South Korea. Even so, the economic comeback on Kim Jong Un’s watch has been sufficiently strong to permit a dramatic ramp-up in the tempo of his nation’s race to amass a credible nuclear arsenal and develop functional intercontinental ballistic missiles capable of striking the U.S. mainland. That is, of course, the expressly stated objective of the program. Pyongyang today appears to be perilously close to achieving its aim—much closer now, indeed, than complacent Western intelligence assessments had presumed would be possible by this juncture. But then, North Korea is full of surprises for foreign observers.
The difficulty with analyzing the country’s weaknesses and strengths comes from the fact that the North Korean system—which is made up of the Kim dynasty, the North Korean state, and the economy constructed to maintain them both—is unlike any other on earth. By now, its brand of totalitarianism (“our own style of Socialism,” as Pyongyang calls it) is sufficiently distinctive that children of the Soviet or Maoist tradition also commonly find themselves at a loss to apprehend its logic and rhythms.
North Korea is no longer even a Communist state, if that term is to have any meaning. The once-prominent statues of Marx and Lenin in Kim Il Sung Square were removed some years ago. Mention of Marxism-Leninism has reportedly been excised from the updated but still currently unpublished Charter of the Korean Workers’ Party. The 2016 version of its constitution excises all references to Communism, extolling instead only the goal of “socialism”—and its two “geniuses of ideology and theory,” Kim Il Sung and Kim Jong Il (the grandfather and father of the current dictator). Small wonder that the world routinely misjudges—and very often, underestimates—the North Korean state and its capabilities.1
Despite its suffocating ideology, for example, North Korea is capable of highly pragmatic adaptation and economic innovation. Notwithstanding its proclaimed “self-reliance” and its seeming isolation, it is constantly finding new sources of foreign cash through ingenious and often remarkably entrepreneurial schemes overseas. And despite all the international sanctions, Kim Jong Un really has overseen a North Korean economic upswing of sorts since assuming power in 2011, the signal fact that best helps explain the acceleration in Pyongyang’s push for a credible nuclear and ballistic arsenal. Thanks to these and other apparent paradoxes, an economy seemingly always on the knife edge of disaster somehow manages to stay on course, methodically amassing the military might for what it promises will be an eventual nuclear face-off with the world’s sole superpower.
Though the hour is late, given all the progress that North Korea has been permitted over the past generation, it nevertheless looks as if there may still be time left to prevent Pyongyang from completing and perfecting its nuke and missile projects through “non-kinetic means”—that is to say, through international economic pressure as opposed to military action. For such an approach to work, however, we will need an informed and robust strategy—not the feckless, episodic, and intellectually shoddy interventions we have mainly witnessed up to now.
Indispensable to such a strategy must be an understanding of the North Korean economy—the instrument that makes the North Korean threat possible. In particular, we need to understand 1) how that economy functions, and to what ends; 2) how the “Dear Respected Comrade” Kim Jong Un brought to it a limited but critical measure of economic revival; and 3) how America and others might use the considerable financial and commercial options at their disposal to impair the North Korean regime’s designs, before Pyongyang wins what is now a race against time.
Despite the information blackout that North Korean leadership has striven to enforce for generations, we already know much more about all these things today than the Kim family regime could possibly want—more than enough to begin purposely defanging the North Korean menace.
One: The Economy of Command
Given its longstanding reputation as a basket case, it may startle readers to learn that there was actually a time when North Korea was regarded as a dynamic and rapidly advancing economy. Back in 1965, the eminent British economist Joan Robinson wrote that North Korea’s achievements put “all the economic miracles of postwar development…in the shade.”2
In those days, if Western intellectuals happened to talk about the “Korean miracle,” they weren’t discussing anything going on in the South. And it wasn’t just dreamy academics and well-hosted foreign visitors who seemed to hold North Korea’s economic prospects in high esteem. Between the late 1950s and the early 1960s, Japan witnessed an exodus of ethnic Korean residents—in all, roughly 80,000 people—who packed up and steamed off of their own free will to the North, voting with their feet to join the Korean state they deemed to offer the greater promise.
Despite the devastation North Korea suffered during the war it launched against the South in 1950, and despite the blazing economic takeoff in South Korea that commenced in the early 1960s under the Park Chung-Hee junta, North Korea may have been ahead of South Korea in per capita output for two full decades after the 1953 armistice. A CIA study in the late 1970s, for example, concluded that South Korea did not catch up with North Korea until 1975.3 Contemporaries at South Korea’s Korean Central Intelligence Agency (KCIA) concurred that the North was well ahead of the South on a per capita basis throughout the 1950s and 1960s, though they argued that the South caught up with the North a few years earlier than the CIA believed.
In retrospect, the wonder is that North Korea’s economy worked as well as it did for as long as it did. For from its 1948 founding onward, North Korea was not just another Cold War Soviet-type economy: It was a Stalin-style war economy on steroids.
As fate had it, the Japanese colonial overlords who controlled Korea from 1910 until 1945 constructed a heavy industrial base in its northern half—a forward supply zone to support their own greater Asian war efforts. Unlike the South, the North had major deposits of coal, iron, and other minerals, along with plenty of natural hydropower. “Great Leader” Kim Il Sung—the onetime guerrilla fighter and later Red Army officer who started North Korea’s Kim family dynasty—inherited this infrastructure when he took over the northern part of the divided peninsula in 1945 and used it as a base camp from which he directed an upward climb toward the summit to which he aspired: an economy set on permanent total-war footing.
Kim Il Sung came perilously close to consummating his vision. By the mid-1970s, the Great Leader would observe that “of all the Socialist countries, ours bears the heaviest military burden.”4 Even by comparison with places like the Soviet Union and East Germany, his North Korea was a garrison state. By the late 1980s, this country of barely 20 million was fielding an army of more than 1.2 million—a ratio comparable to America’s in the middle of World War II. Those military-manpower estimates, by the way, are derived not from U.S. or South Korean intelligence, but rather from unpublished population figures Pyongyang transmitted to the UN in 1989 (data that inadvertently revealed the size of the country’s non-civilian male population).5
Today, two Kims later, the International Institute for Strategic Studies reports that North Korea currently maintains the world’s fourth-largest standing army in terms of sheer manpower—ahead of Russia and behind only the globe’s demographic giants (China, India, the United States). For more than half a century—since 1962, the year Kim Il Sung decreed the “all-fortress-ization” of the nation—North Korea has been the most exceptionally and unwaveringly militarized country on the face of the planet.
But why? What possessed North Korean leadership to commit their country, decade after decade, to such an extraordinarily expensive and irrational economic posture? There was a method to this seeming madness. Kim Il Sung’s grand design for unending super-mobilization served many logical purposes, given the first premises of his North Korean state.
Enforcing permanent war-economy discipline comported nicely with perfecting the domestic totalitarian order the Great Leader desired. Further, given the unhappy realities of geography and 20th-century Korean history, having the might to stand up to any and all foreign powers—including his nominal Communist allies in Moscow and Beijing—may also have seemed an imperative. But above all else, North Korea’s immense military economy reflected Kim’s overarching obsession with unifying the divided Korea, and doing so unconditionally—that is to say, to finishing up that Korean War he had started in 1950, and finishing it up on his own terms this time.
In the eyes of North Korea’s rulers, the South Korean state was (and still is) a corrupt, illegitimate, and inherently unstable monstrosity, surviving only because of the American bayonets propping it up. The Great Leader wanted to be able (when the right opening presented itself) to strike a knockout punch against the regime in Seoul and wipe it off the face of the earth—“independent reunification,” in North Korean code language. This he could not do without overwhelming military force—and without an economic system straining constantly to provide that muscle.
As early as 1970, the Great Leader was warning that “the increase in our national defense capability has been obtained at a very great price.”6 And by the late 1980s, Kim Il Sung’s “economic miracle” was all but dead in the water. Decades of crushing military burden and systemic suppression of consumer demand had taken their predictable toll. And North Korean planners had compounded these difficulties with additional unforced errors of their own.
Their idiosyncratic application of the Great Leader’s Juche (“self-reliance”) ideology, for example, included a general injunction against importing new foreign machinery and equipment. This ensured that the country would have to maintain a high-cost, low-productivity industrial infrastructure. Juche also apparently meant never having to pay your foreign debts, whether to fraternal socialist states or to “imperialist” creditors in Western countries foolish enough to lend money to Pyongyang. By the 1980s, global financial markets had caught on to the game, and North Korea found itself almost completely cut off from international capital. And the longstanding “statistical blackout” North Korean leadership enforced to facilitate international strategic deception also inadvertently impaired economic performance by blinding domestic decisionmakers and requiring them to “plan without facts.”
But it was the ending of the Cold War that pushed the North Korean economy out of stagnation, and into disaster. Juche ideology notwithstanding, North Korea had never been self-reliant; sustaining its severely deformed economy required constant inflows of concessionary resources from abroad. Pyongyang was (and remains) consummately imaginative in devising schemes for extracting aid and tribute from overseas. In the 1960s, 1970s, and 1980s, Kim Il Sung procured the equivalent of tens of billions of dollars in support from Beijing, Moscow, and the Kremlin’s Warsaw Pact satellites, expertly playing Moscow and Beijing off against each other, gaming aid out of both while aligning with neither.
In 1984, Kim Il Sung made a fateful error: He leaned decisively toward Moscow, a tilt signaled by his unprecedented six-week state visit to the USSR and Eastern Europe that same year. The gamble paid off initially: Between 1985 and 1989, the Kremlin transferred around $7 billion to Pyongyang, twice as much as the amount transferred over the entire previous 25 years, much of it in military matériel. In 1988, North Korea relied on the Soviet bloc not only for almost all its net concessionary foreign-resources transfers, but also for roughly two-thirds of its international trade, most of it arranged on political, highly subsidized, terms.
Then came the Soviet bloc’s collapse. By 1992—the year after the collapse of the USSR—both trade and aid from the erstwhile Soviet bloc had plummeted by nearly 90 percent. North Korea’s overall supplies of merchandise from all foreign sources consequently plunged by more than half over those same years.
These sudden devastating shocks sent North Korea’s economy into a catastrophic free fall from which it would not manage to recover for decades. The socialist planning system essentially collapsed. Famine was just around the corner.
Two: A Man-Made Horror and Its Surprising Aftermath
The North Korean famine of the 1990s was a catastrophe of historic proportions. No one outside North Korea’s leadership knows just how many people died in that completely avoidable man-made tragedy, but the toll was certainly in the hundreds of thousands and could possibly have exceeded a million.7 It arguably qualifies as the single worst economic failure of the 20th century. It was the only time in history that people have starved en masse in an urbanized, literate society during peacetime.
It is noteworthy that the famine—usually dated from 1995 to 1998—did not commence until after the death of the Great Leader and the ascension of his son and heir, “Dear Leader” Kim Jong Il. This was no coincidence. Economic failure was the Dear Leader’s stock-in-trade. His political rise almost perfectly corresponds to the decline and fall of the North Korean economy. It happens that the Dear Leader did succeed in what was arguably his primary political objective: to die of natural causes, still safely and securely in power. But economic progress worthy of the name would not be possible in North Korea so long as he was its supreme ruler.
Though both father and son were totalitarian tyrants enamored of their hereditary total-war machine, the differences in their economic inclinations and impulses were nonetheless striking. Dogmatic as he was, the Great Leader still possessed a peasant’s sense of practicality. Proof of his pragmatism is the singular fact that North Korea, alone among all Asian Communist states (and including Russia in this roster), avoided famine during its 1955–57 collectivization of agriculture.
On the other hand, the Dear Leader, from his sheltered Red Palace upbringing onward, was every bit the paranoid, secluded ideologue. He not only disapproved of any concessions to economic pragmatism but feared these as positively counterrevolutionary and potentially lethal to his rule. He likewise regarded ordinary commercial interactions with the world economy as “honey-coated poison” for the North Korean system. At home, he wanted total mobilization but without any material incentives; from abroad, he sought a steady inflow of funds unconstrained by any reciprocal obligations. Kim Jong Il’s preferred economic model, in short, was to enforce Stakhanovite fervor at home through propaganda and terror while financing his war-economy state through military extortion abroad. He called this approach “military-first politics.”
Unwilling as he was to address the country’s newly dire economic circumstances with reforms—in his view, there was nothing to reform—Kim Jong Il’s North Korea was trapped in deepening depression for most of the 1990s. We will know how close the place came to total economic collapse—to the sort of breakdown of the national division of labor that Germany and Japan suffered at the very end of World War II—only when the archives in Pyongyang are finally opened. Throughout the 1990s, in any case, heavy industry was largely shut down, with inescapable consequences for conventional military forces. The death spiral for the war-making sector redoubled the importance to the regime of the nuke and missile programs, both as an insurance policy for regime survival and as the last viable military instruments for forcing the South into capitulation in some future unconditional unification.
In retrospect, it is clear that Pyongyang had no intention of desisting from its quest for nuclear weapons and ballistic missiles, even as it played Washington and her allies for aid for years by pretending its nuclear program might be negotiable. Yet also in retrospect, the slow tempo of nuke and missile development under Kim Jong Il’s rule has to be considered a surprise. Any serious weapons program requires testing to advance—yet Pyongyang managed just one long-range missile launch in the 1990s and only three during his 17-year reign. The Dear Leader also oversaw two nuclear tests before his death in 2011—but only toward the end of his tenure, in the years 2006 and 2009.
Why this hesitant tempo if nukes and missiles were a central priority for the North Korean war economy? Although other possible explanations come to mind, the obvious one has to do with financial and economic constraints. Ironically, despite his vaunted “military-first politics,” North Korea’s nuke and missile programs may also have been inadvertent casualties of Kim Jong Il’s gift for stupendous economic mismanagement. (True, North Korea could undertake expensive nuclear projects internationally, such as the undeclared plutonium reactor in Syria that was nearing completion when the Israelis leveled it in 2007—but that was apparently a cash-and-carry operation, bankrolled by the Dear Leader’s friendly customers in Iran.)
There is considerable evidence that the North Korean economy hit bottom around 1997 or 1998. That bottom was very low indeed: Rough estimates suggest that, by 1998, North Korea’s real per capita commercial merchandise exports were barely a third their level of just a decade earlier, while real per capita imports, including supplies indispensable to the performance of key sectors of the domestic economy, were down by about 75 percent.
North Korea appears to have turned the economic corner not on the strength of new or better domestic economic policies, but rather on breakthroughs in international aid procurement. Pyongyang figured out how to work the West’s international food-aid system: Between 1997 and 2005, the year before its first nuclear test, it was bringing in an average of over a million tons of free foreign cereal each year, ending the food crisis. It is tempting to regard this as “military-first politics” in action, for military menace played an important role in the international community’s solicitude. It is impossible to imagine a helpless and stricken sub-Saharan population obtaining “temporary emergency humanitarian aid” on such a scale, for such an extended duration and with so very few conditions attached.
Central to this upswing in food aid and other freebies from abroad was the fact that North Korea got lucky with the alignment of governments in Seoul, Washington, and Tokyo. For a while, the leaders of this consortium of states were commonly willing to underwrite an exploratory policy of “sunshine” or “engagement” with the Dear Leader by offering him subventions and financial transfers. To secure his June 2000 Pyongyang Summit with the Dear Leader, for example, South Korea’s then-president had hundreds of millions of dollars secretly wired to special North Korean accounts—thereby committing crimes under South Korean law (for which he later issued pardons).
In the event, the “sunshine”-aid influx that may have rescued North Korea at its darkest moment would wane after its clandestine uranium-processing project surfaced in 2002—but the nuclear crisis that revelation triggered also made possible the next big round of North Korean international aid-harvesting.
After the 2003 U.S. invasion of Iraq, Beijing—alarmed by the possibility that the U.S. might also engage in a similar military confrontation with neighboring North Korea—organized and convened a “six-party talks” diplomatic process, ostensibly for deliberations over North Korean denuclearization, to cool things down. While the subsequent years of talking quite predictably led nowhere, North Korea’s price of attendance was apparently a steep increase in economic support from China. Between 2002 and 2008, China’s annual net balance of shipments of goods to North Korea—its exports to Pyongyang minus corresponding imports—more than quintupled, rocketing upward from less than $300 million to more than $1.5 billion. By then, North Korea had become just as economically dependent on Chinese largesse as Pyongyang had been on Soviet-bloc blandishments two decades earlier—but these inflows, and the politically subsidized trade they came with, were evidently sufficient to help at least partially revive the Dear Leader’s broken economy. From Chinese trade statistics, for example, we can infer that Chinese investments were instrumental in a resuscitation of North Korea’s mining and metallurgy sectors in the last years of Kim Jong Il’s life. (We must rely on inference here since Beijing to this day remains almost totally opaque about its economic relations with Pyongyang.)
All in all, Kim Jong Il’s North Korea took in more than $1 billion from its enemies in Washington, and nearly $4 billion from the “puppet regime” in Seoul (not including the South’s additional expenditures on “off-the-books” transfers and special economic or tourist zones in the North). And from China, North Korea scored more than $12 billion of net merchandise inflows under the Dear Leader—a sum that would look even greater if valued in today’s dollars. All the while, North Korea was also earning invisible revenues from a whole network of highly enterprising if generally illicit overseas endeavors: its “nuke-and-missile homework club” with Iran; à la carte weapons sales and military services provided to a host of dictatorships and terror groups; counterfeiting of U.S. currency; drug racketeering; insurance frauds perpetrated against firms in the City of London; and more. The Dear Leader was extensively involved in the world economy, after all—just in a Bizarro World, Legion of Super-Villains sort of way.
Thanks to highly skilled aid-wheedling, international shakedowns, and financial gangsterism, Kim Jong Il’s North Korea clawed its way back from famine to a low but acceptable new economic normal—all the while forswearing domestic economic reforms or genuinely commercial contacts with the outside world. North Korea did not completely avoid potentially fraught economic changes under Kim Jong Il, of course—that was beyond the powers even of the Dear Leader. Domestic cellphone use began during the Dear Leader’s reign, for example, as did a tentative marketization of private consumption (about which more in a moment). But these and other analogous economic changes during the Kim Jong Il era are best understood as “transition without reform,” to borrow an apt term from North Korea watcher Justin Hastings.8
The economy’s “new normal” in the Dear Leader’s final days was still at a miserable level. Although North Korean scientists could launch long-range missiles and test atomic weapons, and although North Korea’s population had reportedly achieved a fairly high level of educational attainment (higher than China’s, if North Korean figures are to be believed), the country’s international economic profile was Fourth World. According to the World Trade Organization, North Korea’s per capita merchandise trade levels in 2010 approximated Mali’s. Its share of world merchandise trade that same year was roughly the same as that of Zimbabwe, a country with half of North Korea’s population—and despite its measure of recovery after 1998, North Korea saw its global trade share fall by more than two-thirds between 1990 and 2010, an even steeper decline than Zimbabwe’s under Mugabe’s misrule in that same period.
The world is a moving target and, generally, an improving one—so national stagnation also means continuing relative decline. Although the Dear Leader bequeathed his son Kim Jong Un a system that had avoided total collapse, there was little else that could be said to commend his economic legacy.
Three: The Economic Upturn
Dear Respected Comrade Kim Jong Un faced formidable odds when he took over in late 2011. The twentysomething was a novice manager at the time of his father’s demise. Unlike the Great Leader, who had groomed his son to rule from an early age, Kim Jong Il himself put off the whole business of naming a successor for as long as he possibly could, designating the child of one of his mistresses as the next Supreme Leader only after an incapacitating stroke made the naming of an heir an unavoidable matter of state.
As Kim Jong Un took office, the planned economy was no longer functioning, and to make matters worse, North Korea’s limited market sector was beset by galloping and seemingly unstoppable inflation. His father had experimented with a limited monetization of North Korea’s tiny consumer sector in 2002 but botched it—and only made matters worse with a surprise 2009 “currency reform” that effectively confiscated private holdings above $100, drastically degrading the already low credibility of the won.
From this unpromising beginning, Kim Jong Un has proved a relative success in delivering economic results in North Korea. There is evidence that the North Korean economy has enjoyed some measure of growth, macroeconomic stabilization, and even development under his aegis.
Pyongyang, “the shrine of Juche,” may be a Potemkin showpiece—but is showpiece-ier today than in the past. Construction cranes are whirring, and whole new sections of the city have risen up. Traffic jams now sometimes clog “Pyonghattan’s” vast, previously empty boulevards. Expensive restaurants and shops purveying luxury goods increasingly dot the capital, and their customers are mainly locals, not foreigners. The upsurge in prosperity and living standards evident in Pyongyang is reportedly reflected, albeit to a more modest degree, in other urban centers as well.
Furthermore, in sharp contrast to previous North Korean trends, or other earlier Soviet-type economies, the country today displays not only considerable marketization but also market stability. This much is demonstrated by cereal prices and foreign-exchange rates in informal markets across North Korea. Over the decade between mid-2002 and mid-2012, North Korea’s won depreciated against the U.S. dollar in such markets by a factor of more than 5,000 (no, that is not a typo). But that depreciation abruptly stopped a little over five years ago, and since then the won has traded around 8,000 to the dollar (fluctuating within a band around that average). In other words, North Korea now has a stable currency that is convertible into hard currencies. Likewise, the domestic price of rice in North Korean markets suddenly stopped soaring five years ago and has been in the vicinity of 5,000 won per kilogram ever since. Whatever else one may say of these new domestic price signals from Kim Jong Un’s North Korea, they are not what one would expect to see from an economy in mounting crisis and disarray.
Finally and by no means least important: In the military realm, nuke and missile testing has accelerated. In the 13 years between Kim Jong Il’s first Taepo Dong test and his death, North Korea launched three long-range rockets and detonated two atomic devices. Kim Jong Un has been in power just over six years; his regime has already set off four nuclear tests and shot off more than a dozen long-range missiles. Some of the speed-up could reflect long-term strategic choices and might in part be affected by improvements in efficiency (cost reduction) within the WMD industrial sector. All other things being equal, though, this sharp acceleration would seem to betoken a major new infusion of resources into programs already long accorded a top priority by the North Korean state. Without a bigger economic pie and substantially greater funding sources, it is hard to see how Pyongyang could have pulled this off.
All this said, North Korea is still shockingly unproductive, still punching far below its weight, still nowhere near self-sustaining growth. Kim Jong Un’s boundless self-indulgence is manifest in costly vanity projects like a spanking-new “ski lift to nowhere” resort, Masikryong, a venture otherwise inexplicable save perhaps for the memories of childhood days in Switzerland that it might elicit.
But by distancing himself from his father’s most economically destructive policies and practices, and navigating into previously uncharted waters of economic pragmatism, Kim Jong Un has opened up heretofore ungraspable opportunities for raising living standards and building military power at one and the same time. Thus the name of his signature policy: byungjin, or “simultaneous pursuit.”
In short order after his ascension, Kim Jong Un demoted—or killed—most of the Dear Leader’s closest cadres and confidants. And less than five months after assuming power—at a ceremony commemorating his grandfather’s 100th birthday in April 2012—he made an astounding declaration, coming as it did from North Korea’s supreme ruler: “It is our party’s resolute determination to let our people…not tighten their belts again.” Translation: This is no longer your father’s dictatorship; aspiration for personal betterment is no longer a counterrevolutionary act of treason.
Dear Respected has deliberately and steadily reshaped the economy under his command. The fundamental strategic difference between Kim 2 and Kim 3 was this: Whereas the Dear Leader saw “reform” and “opening” as deadly “ideological and cultural poison” pure and simple, Dear Respected believes that North Korea could withstand a bit of that poison—actually, quite a bit—and even end up stronger for taking it.
Pyongyang’s new policy directives have been informed by this insight. In agriculture, Kim Jong Un promulgated the “June 28 Instructions” (2012), which permitted family-level work units and allowed farmers to keep 30 percent of their surplus—a bonanza compared with all previous official rules. For enterprises and industry, there were the “May 30 Measures” (2014), which allowed managers to hire and fire workers, pay them according to their productivity, and keep a portion of any profits they earned. People were, increasingly, paid with money for their work—and it was real money, as in, money that could buy things people wanted. The gradual marketization and monetization of North Korea’s civilian economy over the past two decades is a major transformation, and one critical to understanding the country today.9
By the late 1980s, North Korean leadership had fashioned a consumer sector that would have turned Stalin green with envy. No country on the planet had so tiny a share of total national output flowing to personal consumption as late Cold War North Korea—and no country had so low a fraction of its personal consumption accruing to citizens on the basis of their own market choices. By the late 1980s, North Korean planners had come closer to completely demonetizing their economy than any modern polity this side of the Khmer Rouge. Most goods, services, and supplies that North Korean families consumed were provisioned to them directly by the state, with no “interference” by actual consumer preferences. North Korean planners wished to cede as little control over their command economy as humanly possible.
Pyongyang’s near-total control of the consumption basket, however, presupposed that the state would be supplying its subjects with their daily necessities in the first place. That collapsed in the mid-1990s when the Public Distribution System simply stopped providing the full promised daily food rations to most of the population—and stopped supplying any food at all to some of the population. A terrible number of those who trusted the government to take care of them ended up perishing. To survive the famine, North Koreans had to learn to buy and sell in informal markets that began to spring up—even though such activity was against the law, and some “economic crimes” were punishable by death. The Kim Jong Il government loathed these new private markets, but it needed them to forestall wholesale calamity. Thus commenced the two-steps-forward-one-step-back dialectic of marketization that lasted the rest of the Dear Leader’s life—and after his death, marketization and monetization of the civilian economy gained further steam.
Today it is all but impossible to get by in North Korea on state-supplied provisions alone—and a wide array of goods and services, both foreign and domestic, are available for money in North Korean markets. Although formally prohibited, even real estate is for sale throughout the country, with a vibrant market for private flats in Pyongyang. And a wealthy marketeering caste has arisen: donju, or “money masters,” stereotypically a well-connected official and his enterprising wife, who use political influence as well as entrepreneurial savvy to enter this nouveau riche North Korean elite.
In case you were wondering: Yes, corruption is rife in North Korean markets. It is the necessary lubricant for all North Korean private commerce. In addition, the government expects a big cut, and such funds have been integral to the recovery of the North Korean state.
The marketization and monetization of North Korea’s consumer economy, in conjunction with new agricultural and commercial incentives and a more tolerant official attitude toward informal activity, laid the groundwork for a domestic-production upswing (and a veritable boom in private consumption, although from a very low starting point).
Unlike Asia’s “reform socialism” states, China and Vietnam, North Korea has never made a serious effort to attract private investment from real live capitalists abroad. Pyongyang prefers large-scale foreign projects that are political in nature. Such projects are bankrolled by governments indifferent to profit, which is to say by the foreign taxpayers who can ultimately be left holding the bag. Examples include the ill-fated Kaesong Industrial Complex paid for by South Korea, as well as the South’s doomed Kumgang Tourist Resort. For international trade and finance, the overwhelming bulk of North Korean activity still falls into two categories: 1) politically predetermined, highly subsidized economic relationships, or 2) what we might call “guerilla warfare” or “outlaw” finance.
Four: North Korea’s Friends
Preferential trade ties with China are pretty much the only game in town for Pyongyang these days. With the virtual shutdown of South Korea’s politically subsidized inter-Korean trade in 2016 following accusations that money from the Kaesong project was being used to fund the North’s missile program, China may now account for close to 90 percent of North Korea’s international commercial-merchandise trade turnover. And North Korea always receives much more than it gives in its arrangement with China, year after year.
There is, to be sure, an element of harsh capitalist bargaining within this overall relationship—but most of that is in the “people to people” bartering and petty trading at the border, largely for consumer goods. At the national level, judging by Chinese customs statistics, North Korea raked in well over a billion dollars a year in net merchandise shipments from China from 2008 through 2014—with no transparency on Beijing’s part about the mechanisms by which this ongoing transfer is financed, much less about the Chinese government’s objectives and intentions in extending this lavish lifeline.
Since 2015, official Chinese numbers suggest that Beijing’s de facto aid is down—but these look like figures deliberately fudged in the face of mounting international demands for sanctions against North Korea. It is at the very least possible that important aspects of Chinese support for the North Korean economy or its defense industries have not yet come to light. Given what is already known, though, it is indisputable that deals with China under the two latest Kims have been key to reviving North Korea’s heavy industrial sector. (For the year 2016, China reported shipping over three-quarters of a billion dollars of machinery and transport equipment to North Korea, 10 times the volume in 2003, when the six-party talks commenced.)
Vital as Chinese support may be to North Korea’s survival and economic revival, North Korea evidences no gratitude for Beijing’s largesse. Pyongyang does not “do” gratitude. Moreover, the leadership in Pyongyang knows very well a bitter truth about Chinese aid that it can never utter: namely, that capricious cutbacks in free food from China in the year 1994 were the trigger for the Great North Korean Famine, which became impossible to conceal by 1995.
Apart from its Chinese lifeline, North Korea’s other main sources of international support come from “outlaw” forays into the world economy—including activities tantamount to state-sponsored organized-crime operations. These shady dealings typically attempt to generate revenues for the state that avoid international detection, often relying on the special protections and prerogatives of a sovereign state for cover.
One cannot help but be struck by the industry, ingenuity, and sophistication that have generally kept such schemes one step ahead of international authorities. Koreans in the North can be world-class innovators, too—it’s just that their chosen fields of excellence happen to be in smuggling, drug-running, money-laundering, and the like.
Some of these inventive schemes have been in the news. In recent years, for example, Pyongyang has made unknown millions abroad from what we might call its own style of human trafficking: profiting off the tens of thousands of workers in labor gangs it has sent to China, Russia, the Middle East, and even parts of Europe. No less inventive has been Pyongyang’s apparent monetization of its growing capacity for cyberwarfare through international bank robbery. In 2016, “unknown” hackers relieved the Central Bank of Bangladesh of $81 million in a spectacular heist; in late 2017, similar cyber-fingerprints were detected in a theft of $60 million from a bank in Taiwan. These are just two of many “hit and runs” orchestrated by the Kim Jong Un crime family. And as the WannaCry ransomware attack last year demonstrated by infecting hundreds of thousands of computers the world over, vastly greater dividends from cybercrime may lie just over the horizon.
Then there is North Korea’s signature global service industry: WMD proliferation. For obvious reasons, most of this work never makes the news. No one outside Kim Jong Un’s court probably knows just how much this nefarious business is bringing in these days. These unobservable flows, however, may be consequential. Consider this: Barely weeks after Tehran inked its September 2012 Scientific Cooperation Agreement with Pyongyang, the won suddenly ended its decade-long freefall and finally achieved exchange-rate stability. North Korea may have had additional, still concealed, operations that were also paying off at the same time as that Iranian deal, of course. But either way, the deal clearly marked a turning point in North Korea’s macroeconomic fortunes, and the stabilization of exchange rates and domestic cereal prices probably could not have occurred without an open spigot of foreign cash.
In sum, the hallmarks of Jong-Un-omics would appear to be new revenues from foreign sources, along with the new flows of funds derived from privatization and growth at home. These monies have apparently sufficed not only to stabilize North Korea’s previously toxic currency, and to bring an end to runaway inflation in North Korea’s key private markets, but also to abet Pyongyang’s nuclear and ballistic ambitions. This, at least, would seem to be the most plausible reconstruction of the limited but meaningful evidence from the jigsaw puzzle that is the North Korean economy today.
To repeat: While we should recognize the existence of this economic upswing, we should also keep its scale in perspective. All one need do is consider the sad, stunning space photos of North Korea at night—the satellite shots revealing a territory almost pitch-black, while the rest of Northeast Asia is glowing with light. They attest better than any available statistics to the limits of economic recovery under Kim Jong Un.
Among the other implications of that space imagery is this: The North simply does not have the pocketbook for both a wholesale modernization of its conventional army and a nuke-missile program. For now at least, most of the military’s equipment, apart from critical nuclear-related pockets like submarine production, remains outdated and ill-suited for the tasks originally assigned. Today, Kim Jong Un cannot credibly threaten to roll in and occupy South Korea. But he is on track to manufacture enough nuclear matches to burn the place down, with Tokyo and Washington thrown in for good measure, in the foreseeable future.
Five: How to Put Pressure on Pyongyang
Given what we know about the North Korean economy, can America and the world community keep Pyongyang from reaching its ultimate nuclear objectives through a real economic-pressure campaign?
We do not know just how close North Korea is to perfecting its weaponization of ballistic missiles, or how many nuclear weapons the North currently possesses. We also do not know as much as we need to about North Korea’s strategic inventories and reserves. Even if Pyongyang were stopped in its tracks today, its nuclear and missile work would require unwavering vigilance and far-reaching containment for the remaining life of the regime. That said, a serious international campaign of trade and financial sanctions—led by America, ruthlessly executed, and starting immediately—could very significantly slow the pace of Pyongyang’s ongoing nuclear-ballistic march. And if we are in it for the long haul, a serious sanctions campaign could eventually promise the effective suffocation of the entire North Korean military economy.
An international economic campaign of this sort won’t be easy (though America has many more cards in her hand than many now appreciate). It probably won’t be pretty, either. But in any case, it is the world’s last chance to thwart North Korea’s nuclear ambitions by nonmilitary means.
Let’s start with the unpleasant truths. We must recognize that economic pressure will not alter the intentions of the Kim family regime—ever. We must dispense with the fantasy, still inexplicably maintained in various esteemed diplomatic circles and Western universities, that Pyongyang can somehow be pressured—or bribed—at this late stage into changing its mind about its multi-decade march to a credible nuke and missile arsenal. There is no “bringing North Korea back to the table” that ends with CVID—comprehensive, verifiable, irreversible denuclearization. Period.
So much for the bad news. The rest of the news about the outlook for sanctions against North Korea, fortunately, is better than we usually hear.
Many authoritative voices seem to think sanctions have little chance of influencing North Korea’s nuclear trajectory. Economic historians note that the record for coercive economic diplomacy is poor and has been for centuries. Policy wonks and foreign-affairs experts add that successive rounds of UN and international economic sanctions seem to have had no real bite so far against North Korea. These pessimistic assessments, however, misread the prospects for international economic pressure against North Korea on two important counts.
As poor as the general record of coercive economic diplomacy may be, North Korea is not exactly a typical economy. It is an outlier—it’s world-class dysfunctional, recent changes under Dear Respected notwithstanding. The economy is incapable of growth (or for that matter, even stagnation) without steady inflows of financial support from abroad to keep it on its feet. Remember: When net aid from abroad sharply dropped (but did not end) in the 1990s, that was enough to send North Korea spiraling downward into paralysis and mass famine. The North Korean regime, in short, is a poster child for a successful international campaign of economic strangulation. Despite Pyongyang’s nonsense about “self-reliance,” it is uniquely vulnerable to the cutoff of foreign money and subvention.
Kim Jong Un has not yet faced anything even remotely resembling an international campaign of “maximum economic pressure.” The continuing stability of North Korea’s foreign-exchange rate and domestic food prices pointedly suggests that international sanctions have not yet greatly impacted North Korea. But few foreign-policy experts, and even fewer general readers, seem aware of how flimsy the array of sanctions imposed on North Korea by the UN and the U.S. during the George W. Bush and Obama years actually was.
Consider first the successive rounds of UN Security Council sanctions lodged against the regime since its first atomic test in 2006. China and Russia flagrantly and routinely violate the very sanctions their own Security Council representatives voted to impose. Most countries around the world still ignore them, too. In early 2017, the UN’s Panel of Experts on the sanctions reported that 116 of the UN’s 193 members had not yet bothered even to file implementation reports on the then-latest round (UNSC 2270, levied in response to Pyongyang’s fourth nuclear blast). The previous year, the Panel noted that 90 countries had never reported on any of the sanctions resolutions against North Korea (eight at that time, the first of them ratified a decade before that report). And filing a report on these sanctions resolutions is not the same thing as enforcing them. Several countries with which Washington enjoys ostensibly friendly relations have turned a blind eye to illicit North Korean activities on their soil for many years (Malaysia, Singapore, and some of the Gulf States being among the more egregious examples).
When it comes to Washington’s own economic measures, furthermore, North Korea is still far from being “sanctioned out,” no matter the received wisdom. In the final year of the Obama administration, according to Anthony Ruggiero of the Foundation for Defense of Democracies, fewer entities and individuals from North Korea were under U.S. Treasury Department sanction than those from seven other countries, including Zimbabwe and Sudan. While the Trump administration has been much more serious about sanctioning North Korea, Ruggiero testified that as of late summer 2017, North Korea nonetheless remained less sanctioned than either Syria or Iran. For some mystifying reason, moreover, North Korea was not put back on the State Department’s list of designated “state sponsors of terrorism” until the end of 2017, after enjoying a nearly decade-long holiday off that roster.
As 2018 commences, three big changes augur well for the prospect of devastating “shock and awe” sanctions against the North Korean system. First: At the end of 2017, the Security Council endorsed a broad new writ and scope for sanctions against North Korea, dispensing with the earlier “marksman” approach of picking off particular military-related firms or individuals and embracing instead the “blockbuster” approach of crippling North Korea’s entire military-industrial complex. The new sanctions, among other things, ban all industrial imports by North Korea, severely cut permitted energy imports, and require UN member governments to “seize, inspect, and freeze” vessels violating some of the new restrictions.
Second: In late 2017, the U.S. Treasury announced new and much more sweeping authority for North Korea sanctions, granting U.S. officials wide discretion to impose what are known as “secondary sanctions.” Henceforth any business or person engaging in any kind of commercial or financial transactions with North Korea could be severely penalized, with punishments including fines, seizure or forfeiture of assets, prohibition against any commerce in or with the U.S., and being cut off from the worldwide clearing system for dollar-based financial settlements.
Finally, and by no means unrelated to these other changes: the advent of the Trump administration. Under President Trump and his team, there appears to be a qualitative change in America’s North Korea policy—one that accords the North Korean threat a higher priority, and more unblinking attention, than it has been granted by any of Trump’s predecessors. The White House calls this new approach to North Korea a policy of “maximum pressure.”
Six: The American Role
Trump’s address before South Korea’s National Assembly last November on the North Korea problem was the most incisive, and moving, statement on the topic ever delivered by an American president. Whatever else may be said of him, Trump is keenly aware that the North Korean threat he inherited was allowed to fester and worsen under each of the four men in the Oval Office immediately before him. He appears to have no intention of continuing that tradition.
The Achilles’ heel of the North Korean economy—and thus, of Pyongyang’s nuclear and missile programs—is its existential dependence on foreign aid and outside money. The fortress-prison country is an operation that cannot be sustained on its own. To date, North Korea has skillfully extracted wherewithal and extorted financial concessions out of a largely unfriendly world. To jam the gears of the North Korean war machine, the international community must recognize, and finally begin systematically exploiting, Pyongyang’s unique economic weakness. This will require a campaign of economic pressure worthy of the name—and the pieces for such a campaign are already falling into place.
In broad strokes, what would this “maximum economic pressure” campaign look like? It must be Washington-led, since it will not coalesce spontaneously. To carry it out most effectively, diplomacy will be crucial: Alliance coordination and the building and maintenance of motivated coalitions are obvious force multipliers for this exercise. But the U.S. has unique international strengths that allow us to act unilaterally and with great consequence when necessary.
For starters, now that we ourselves have relisted North Korea as a state sponsor of terrorism, we have a stronger case for pressing governments around the world to shut down the regime’s embassies, trade missions, and other facilities located on their soil. Not necessarily to sever diplomatic ties, much less end all communication, with Pyongyang: just to deprive North Korea of safe havens for its illegal rackets on foreign shores. Given North Korea’s standard operating procedure overseas, affording Pyongyang an embassy in one’s country is like offering diplomatic immunity to the Mafia. The Trump administration has begun some of this advocacy already and has some initial results to show for its troubles. In conjunction with a consortium of like-minded states (including Japan), a full-court press could gain true international momentum. At the very least, this would disrupt some of North Korea’s illegal rackets and reduce the take from them.
Washington can also take the lead in lobbying governments to shut down the North Korean work crews operating within their own countries—these are too close to slave labor for comfort. This need not be quiet diplomacy. The complicit governments in question, including Beijing and Putin’s Kremlin, deserve to be called out publicly if they are intransigent. (The wording of the latest round of Security Council sanctions calls for shutting down such arrangements within 24 months, an amendment Moscow negotiated for—but there is no reason that the U.S. or independent human-rights groups should not try to speed up that timetable.) The U.S. also has options for penalizing trading partners who violate internationally recognized labor standards, which is to say we can affect the cost-benefit calculus for governments that tolerate North Korea’s odious practices in their own backyards.
This brings us to a rather larger diplomatic task: confronting China and Russia about their continuing financial malfeasance on North Korea. The scope and scale of China’s furtive support for North Korea dwarfs Russia’s, of course—but that is no reason to give the Kremlin a pass. These two states have long been playing a double game—one that must come to an end starting now.
Seven: The Russians and the Chinese
Contrary to some hand-wringing in Washington and elsewhere, the U.S. is by no means devoid of options in facing down China and Russia for their economic enablement of the Kim family regime. As already noted, Washington possesses an extraordinarily powerful tool for inducing their compliance: the U.S. dollar—the most important reserve currency in the world economic order. America gets to decide who can, and who cannot, engage in dollar-denominated financial transactions with the myriad correspondent banks serving the globe, for which the Federal Reserve Bank is the clearing house. Existing legislation and executive orders already provide the U.S. government with a panoply of instruments for inflicting nuanced and escalating economic penalties and losses on financial institutions, corporations, and private individuals who rely upon U.S. correspondent banks but engage in illegal or forbidden commerce with North Korea.
So far, the United States government has used only minor pinpoint-pinprick secondary sanctions against Chinese and Russian parties that violate restrictions on dealings with North Korea. Should we choose to wield those sanctions in earnest, both nations face potentially major economic costs if they do not address and control such violations.
It is no secret, for example, that the Chinese banking system is highly leveraged and that some of China’s largest banks are in what we might call a financially delicate situation. Does Beijing really want to find out whether one of these major concerns can survive a Treasury Department-Justice Department inquiry for North Korea infringements, much less the weight of actual secondary sanctions—or to find out what happens at home and in international financial markets if it looks as if a major Chinese bank might fail on that account?
If the Kremlin and Beijing believe we mean business, they will have reason to suppress illicit deals with North Korea—but convincing them we mean business is our responsibility. Washington has been curiously hesitant here, possibly for fear that Beijing or the Kremlin, or both, would respond by sabotaging any further UN sanctions. But we now have pretty much what we need from UN resolutions for a campaign of “maximum economic pressure” on North Korea—so the time for horse-trading and slow-walking is over. And while we are at it, these governments’ official economic support for North Korea shouldn’t be off the table. Isn’t it time to spotlight and track those flows, too?
As we work to rein in China and Russia, we should not lose sight of the money that North Korea receives through arrangements with other governments—including states in Africa and the Middle East that receive U.S. foreign aid. Yet much of what Washington needs to know to wage this economic campaign, alas, is currently unknown. This is a failure of our intelligence community that must be immediately addressed if “maximum economic pressure” is to stand a chance of ending up as more than just a slogan.
By the very nature of intelligence activity, spy agencies cannot take credit for many of their successes. But the U.S. intelligence community doesn’t deserve a slap on the back for its performance in this particular area. It should be something of an embarrassment, for example, that some of the best work mapping out the connections between Chinese front companies and the North Korean military these days should apparently come from a small think tank, C4ADS, that relies entirely on open sources. And that is just one small example of intelligence insufficiency. Our government also appears to know much less than it should about the financial relations between Pyongyang and its backers in Tehran, North Korea’s money ties with terrorist groups, and its adventures in crypto-currencies and other harder-to-trace instruments of finance.
Much of what is currently unknown—by our government—about North Korea’s covert international financial networks and overseas holdings is in fact knowable, given better legwork and intelligence. The story of the U.S. government’s interagency Illicit Activities Initiative (2001–6), which methodically mapped out North Korea’s money trails before being derailed by bureaucratic infighting under the George W. Bush administration, provides an “existence proof” that such research can be done. North Korea’s overseas financial networks have had more than a decade since the demise of IAI to evolve and hide their tracks—so a new IAI-style effort would have to play catch-up.
With the information we could gather from a well-funded and coordinated intelligence initiative, we can help shut down North Korea’s worldwide criminal enterprises, arrest their international accomplices, freeze and seize violators’ overseas assets (not just Kim Jong Un’s assets: think Iran, Syria, Hezbollah, and the rest), and levy potentially devastating fines against commercial and financial concerns that willfully aid North Korea in violating the law. We can also improve the efficacy of existing proliferation-security efforts.
With better intelligence, better international coordination, and the will to get the job done, an enhanced “maximum economic pressure” policy could swiftly and severely cut both North Korea’s international revenues and the vital flows of foreign supplies that sustain the economy. An enhanced Proliferation Security Initiative (PSI), indeed, could use interdiction not only to monitor the flow of goods entering North Korea but also to regulate and, as necessary, suppress it. (UN sanctions, by the way, make provisions for humanitarian imports into North Korea—a matter the U.S. and others must attend to faithfully.) Yes, this is economic warfare, and it can be conducted with much more sophisticated tools than were available in the 1940s. In fact, it should be possible through such a campaign to send the North Korean economy—and the North Korean military economy—into shock, possibly even in fairly short order.
Eight: Success and Its Failures
If comprehensive sanctions and counter-proliferation against North Korea fail, we enter into a new world with darker and much less pleasant options. But what if, against expectations, they turn out to succeed? What then?
In addition to their intended consequences, successful policies always have unintended ones. Three potential consequences of an effective economic-pressure campaign against the North Korean regime deserve special consideration in advance.
The first concerns the role of North Korea’s donju elite in a future where North Korea is increasingly squeezed economically. These “money masters,” who until now have enjoyed waxing wealth and have lived with rising expectations under Kim Jong Un, would stand to suffer very sharp financial loss. What would a serious reversal in the fortunes of this privileged element in North Korean society mean for elite cohesion and for regime dynamics? Even North Korea has domestic politics. Poorly as we may be able to appraise North Korean politics, it would behoove us to try to understand in advance how such a change would alter the realm of the possible within the country—and what new opportunities such internal developments might present.
Second is the all-too-likely possibility that North Korea would careen back into famine under an effective sanctions campaign—and not because Pyongyang would be incapable of purchasing or procuring sufficient food to feed its populace. The reason North Koreans starved last time was the government’s dreadful songbun system, still very much in force today. Songbun is a unique North Korean instrument of social control that carefully subdivides the North Korean populace into “core,” “wavering,” and “hostile” classes, lavishing benefits and meting out penalties according to one’s station. Life chances in North Korea—and no less important, death chances—turn on one’s assigned class. Just as it is a safe bet that virtually no one outside the “core classes” has amassed great donju riches, so too death from starvation is almost entirely consigned to the state’s designated enemies from the “hostile classes.” Only “intrusive aid” (provided on site by impartial outsiders) and public diplomacy, including calling out Dear Respected on this vile practice, stand to mitigate the toll of the impending humanitarian-cum-hostage crisis should “maximum economic pressure” work.
Finally, there are the countermeasures Pyongyang will surely adopt if the economic-pressure campaign is attaining a measure of success. These will be intended to terrify and to break the will of the sanctioners. North Korean leaders are practiced masters of white-knuckle, bared-fang diplomacy—and they would naturally regard the stakes in this contest as particularly high. No national directorate is so expert in brinkmanship or so consummate at carefully gaming through seeming “outbursts” well in advance.
North Korea will test the stomach and the will of the pressure alliance, threatening what it sees as the campaign’s weakest and most exposed elements and ranks. These probes and tests may be military in nature, with a range of options that could well include threats of nuclear war. Pyongyang will try to make Washington and the international community fear that they are facing a “Japan 1941 moment,” with a cornered Kim family regime: a déjà vu of the drumroll that led to World War II in the Pacific, only this time against a nuclear-armed adversary.
This would be a point of incalculable danger. There are good reasons to think North Korea would not resort to first use of nuclear weapons, most compelling among them, its own state-enshrined doctrine known as “Ten Principles for the Establishment of a Monolithic Ideology.” (The essence of this doctrine: The Hive must keep the Queen safe, and at all cost.) But there is no sugarcoating the terrible risks, including risks of miscalculation, inherent in North Korea’s most likely countertactics.
Any way you look at it, North Korea’s adversaries are in for a long and bumpy ride. The alternative to thwarting North Korea’s war drive now is permitting Pyongyang to prepare to fight and win a limited nuclear war in the future, at a time and place of its own choosing, when the situation for America and her allies may be even more perilous.
Like it or not, Pyongyang plays for keeps, and we are in this with them for the long game. The next move is ours.
1 Full disclosure: I am one of those who seriously underestimated North Korea’s resilience in the 1990s. Twenty years ago, I would have thought it almost unimaginable for the North Korean state to survive to this day. Needless to say, subsequent events have proved otherwise, and studying my own mistakes has led to the analysis under way here.
2 Joan Robinson, “Korean Miracle,” Monthly Review, Vol. 16, No. 8 (January 1965), pp. 541–549.
3 Central Intelligence Agency, Korea: The Economic Race Between the North and the South: A Research Paper, ER 78-10008, January 1978.
4 Kim Il Sung, Works, Vol. 31 (Pyongyang: Foreign Languages Publishing House, 1987), p. 76.
5 Nicholas Eberstadt and Judith Banister, The Population of North Korea (Berkeley, CA: University of California, 1992).
6 Kim Il Sung, Selected Works, Vol. 5 (Pyongyang: Foreign Languages Publishing House, 1972), p. 431.
7 On this man-made, and completely unnecessary, tragedy, see Stephan Haggard and Marcus Noland, Famine in North Korea: Markets, Aid and Reform (New York: Columbia University Press, 2007).
8 Justin V. Hastings, A Most Enterprising Country: North Korea in the Global Economy (Ithaca, NY: Cornell University Press, 2016).
9 Perhaps the best analysis of this transformation is Kim Byung-Yeon, The North Korean Economy: Collapse and Transition (New York: Cambridge University Press, 2017).
As I write, Michael Wolff’s Fire and Fury has become a mere husk of a book, emptied of everything consumable and tasty. And it’s only been out a week! In the hinterlands, the book is selling briskly, but here in Washington, we already find ourselves in the final phase of a mass hysteria, a hangover that we would call the Woodward Detumescence.
Woodward is Bob Woodward, of course. Every few years, for more than 30 years, Woodward has sent Washington reeling with a book-length, insider account of one administration after another, presenting government as high drama, with a glittering cast of villains and heroes.
The sequence of the symptoms seldom varies. First comes the Buildup. We hear premonitory rumblings: Freshly minted Woodward revelations are on the way! His publisher declares an embargo on the book, mostly as a tease. Another reporter writes an unauthorized report guessing at what the revelations might be. Washington can scarcely breathe. At last the first excerpts appear in a three-part serial in Woodward’s home paper, the Washington Post.
We enter the Swoon.
The excerpts tell of betrayals and estrangements, shouting matches and tearful reconciliations, tough decisions and disappointing failures of nerve, all at the highest levels of government. Woodward goes on TV shows to explain his findings. Sources attack him; he stands by his book. The frenzy intensifies, the breathing is labored, until, at last, comes the Spasm, as all the characters from the book refuse to comment on a “work of tabloid fiction.”
Then the newspaper excerpts end, there is a collapsing sigh, a dying fall, and the physical book, the thing itself, appears. The text seems an afterthought, limp as a wind-sock and, by now, even less interesting. If there were more revelations to be found in its pages, after all, we would have read them already. We skulk back to the routines of what passes for normal life in Washington, slightly abashed at our momentary loss of self-control. This is the Woodward Detumescence. Shakespeare foresaw it in a sonnet: “the expense of spirit in a waste of shame.”
The Fire and Fury frenzy omitted some of these steps, prolonged others. It was touched off by an excerpt in New York, appearing a week before the book’s original publication date. Running to roughly 7,000 words, the excerpt was densely packed and so juicy it should have come with napkins. The article’s revelations about White House backbiting and self-loathing are by now universally known, and have been from the moment the excerpt hit the Web. One thing they make plain is that Michael Wolff bears little resemblance to Bob Woodward. Over a long career, our Bob has shown himself to be a tireless and meticulous reporter. He is a creature of Washington, besotted by government; Woodward never found a briefing paper he wouldn’t happily read, as long as it was none of his business.
Wolff, on the other hand, is an incarnation of Manhattan media. He’s a 21st-century J.J. Hunsecker, the gossip columnist in the great New York movie Sweet Smell of Success, although, unlike J.J., he has a pleasing prose style and a sense of irony. His curiosity about the workings of government and the shadings of public policy is nonexistent. “Trump,” Wolff writes with typical condescension, “had little or no interest in the central Republican goal of repealing Obamacare.” Neither does Wolff. Woodward would have given us blow-by-blow accounts of committee markups. Wolff mentions Obamacare only glancingly, even though it was by far the most consequential failure of Trump’s first year.
If you want to learn how Trump constructs that Dreamsicle swirl that rests on the top of his head, or the skinny on Steve Bannon’s sartorial habits, then Wolff is your man. He tries to tell his story chronologically, but he occasionally runs out of things to say and has to vamp until the timeline lets him pop in a new bit of shocking gossip. Early in the book, for example, after he has established that Trump is reviled and mocked by nearly everyone who works for him, Wolff leads us into a tutorial on The Best and the Brightest, David Halberstam’s doorstop on the 1960s White House wise men and whiz kids who thought it would be a great idea to get in a land war in Southeast Asia. He calls Halberstam’s book a “cautionary tale about the 1960s establishment.” Wolff’s chin-pulling goes on for several hundred words. Apparently, Steve Bannon had had the book on his desk.
This is interesting, I guess, and so are the excessive digressions about New York real estate, Manhattan’s media culture, the evolution of grande dames into postfeminist socialites, and many other subjects that are orthogonal to the book’s purpose. If you’ve bought Fire and Fury, chances are you wanted to learn things you didn’t know about the first year of the Trump administration. The New York excerpt was chockablock with such stuff, told in sharply drawn scenes and vivid, verbatim quotes. But the book dwells much more on general impressions, flecked here and there with scandalous asides. In these longueurs—most of the book—Wolff writes at an odd remove, from the middle distance. The prose loses its immediacy and becomes diffuse.
He’s not so much padding his book as filibustering his readers, perhaps hoping to deflect a reader’s attention from another revelation: He really hasn’t delivered the goods. All of Wolff’s most scandalous material was filleted and packed into the New York excerpt. Listening to discussions among friends and colleagues, I keep hearing the same items, all from the magazine: Staffers think Trump might be (literally) illiterate, Steve Bannon thinks the Mueller investigation puts Trump’s family in legal jeopardy, the president uses vulgar language when talking about women. He is a child, Wolff wants us to know, and the disorder of his government is directly traceable to that alarming fact.
And it is indeed alarming, but nobody who has followed Trump’s Twitter feed or watched his news conferences will think it’s news. Wolff wrote a scintillating 7,000-word magazine article; the problem is that he spread it over a 328-page book. The rumor has gone around (hey, if he can do it, so can I) that before submitting his manuscript, Wolff warned his publisher that it didn’t contain much that was new.
This explains a lot. Wolff clearly was unprepared for the explosion set off by the magazine article. You could see it in his halting explanations of his journalism techniques. When his quotes were questioned, he let it be known that he had “dozens of hours” of tapes. (Other news reports inflated the number to hundreds.) When quotes continued to be questioned, he was asked, by colleagues and interviewers, to release the tapes. He refused. Wolff said his book threatens to bring down the president—on evidence that he alone has and won’t produce.
Spoken like a true journalist! Much has been made of this modern Hunsecker’s techniques. One explanation for the candor of his sources is that Wolff gained their confidence by misleading them about his intentions; they had concluded he was writing a book that would show the administration in a kinder light. “I said what I had to to get the story,” he proudly told one interviewer. Many of his colleagues in the press have shrugged at his willful misdirection—his deception, in fact—as a standard trick of the trade.
They’re probably right. But they demonstrated again the utter detachment of journalists from normal life. Whole professions are generally and rightly maligned—trial lawyers, car salesmen, lobbyists—because ordinary people see that prevarication is built into their work. When it comes to the people who write the books they read, they have a right to ask how far the deception goes. If a writer will mislead his sources, how can we be sure he won’t do the same to his readers?
“My evidence is the book,” Wolff responds. I’m not sure what he means. In any case, as the Detumescence recedes, it becomes clearer that his evidence is thin. The book isn’t particularly good journalism, but it’s a triumph of marketing. Our Trump hatred has been targeted with such precision that we’ll lower any standard to embrace Fire and Fury, even if the tale as told signifies nothing, or nothing much.
An uncontroversial museum still manages to offend the ignorant
At one point during his 2000 campaign, George W. Bush gave his listeners a folksy admonition: “Don’t be takin’ a speck out of your neighbor’s eye when you got a log in your own.” This amused Frank Bruni of the New York Times, who called it “an interesting variation on the saying about the pot and the kettle.” Bruni’s words in turn amused the substantial portion of Americans who knew that Bush was actually quoting Matthew 7:3. To them it was simply unimaginable that someone could graduate Phi Beta Kappa with a degree in English and subsequently study at the Columbia School of Journalism, as Bruni did, without having once encountered the Sermon on the Mount. The anecdote revealed the extent to which, in the space of a few generations, America went from habitual Bible reading to biblical illiteracy, and of the most abject and utter kind. This is the justification for the Museum of the Bible.
The Museum of the Bible, which opened in Washington, D.C., in November, is an enterprise of appropriately pharaonic ambition. At a capacious 430,000 square feet, it cost half a billion dollars to build, all of it contributed privately. It is the brainchild of Steve Green, the president of Hobby Lobby, the chain of arts-and-crafts supply stores that successfully challenged the contraception mandate of Obamacare. Indeed, to those who felt the Burwell v. Hobby Lobby decision was a catastrophic setback to the separation of church and state, the coming of the Museum of the Bible seemed nothing less than the physical manifestation of that threat—an unwelcome expression of evangelical political power standing in plain sight of the Capitol. Burwell v. Hobby Lobby has loomed large in the coverage of the museum, as has the Green family—as well as the $3 million fine levied on Hobby Lobby for illegally importing cuneiform tablets from Iraq.
But those who looked forward to exposing the museum as a bigoted and ignorant enterprise, with a laughably literal view of biblical truth, have been bitterly disappointed. Its exhibitions are conspicuously even-handed and scholarly, and not at all sectarian. The Museum of the Bible is no vehicle of theological indoctrination. If anything, it errs in the other direction. When it was first incorporated as a nonprofit organization in 2010, it pledged itself “to inspire confidence in the absolute authority and reliability of the Bible.” It has quietly lowered its sights since, and now seeks only “to invite all people to engage with the history, narrative, and impact of the Bible.” This makes the museum less objectionable (who can object to an invitation?), but a less incendiary Bible is also a less interesting one. The danger of the Museum of the Bible is that by sidestepping the question of biblical truth it might downgrade the Good Book, as it were, into one of the Great Books.
With all their resources, the Green family might easily have commissioned a celebrity architect to build a prodigy of a museum. But they did not want a building that would compete with its contents. Instead, they bought a 90-year-old cold-storage warehouse two blocks south of the Mall, and into its windowless brick shell they inserted six stories of exhibition and administrative space. The interior is intelligently planned but hardly remarkable, and nothing about its materials, finishes, or details speaks of the Bible or antiquity. If anything, it has the glossy impersonal cheeriness of contemporary hotel architecture.1
The heart of the museum is in the exhibitions of the third floor (The Stories of the Bible) and the fourth (The History of the Bible). These are utterly different in texture and tone, but they work in tandem—one delivering sensation and the other information. This is hardly a new distinction; it is the difference between the stained-glass window and the sermon.
The Stories of the Bible are told through crowd-pleasing “immersive” galleries—the fashionable term for displays in which a coordinated battery of sound effects, musical cues, dramatic lighting, and moving forms is combined to induce an overwhelming sensory experience in the viewer. These were devised by BRC Imagination Arts, a design firm that specializes in corporate branding—as they put it, in “creating emotionally engaging experiences that generate lasting brand love.” When it comes to emotionally engaging material, the exhibits Genesis and Exodus offer at least as much as the Heineken Experience (another recent BRC creation), and here the designers have outdone themselves. Noah’s Ark presents “a unique, stylized representation of the great flood,” they tell us.
“Stacks of boxes tower over them. Inside each box are artistic representations of animals—two by two—lit by flickering candlelight. Guests hear the raging of the storm outside and the creaking of the wooden ship.”
Somewhat later, although not until they have seen “a hyssop bush bursting into flames from the story of Moses,” visitors themselves can part the Red Sea, or an abstraction thereof, created by a web of taut metal cables shimmering under blue light. (It is curious how the highly cinematic events of the Hebrew Bible lend themselves to abstract expression.)
By contrast, the World of Jesus is rendered in literal terms, by means of a realistic re-creation of a first-century village complete with actors in period costume. In the Galilee Theater, visitors can watch a short film and see John the Baptist confronting King Herod (as played by John Rhys-Davies). Even those of us who are allergic to historic reenactments will see that it is carried through with extreme competence and attention to detail. What is there is done well; it is what is not there that has caused a good deal of quiet grumbling. To the bafflement of many, the central events of the Christian Bible—the Crucifixion and Resurrection—are not represented. Were there fears that a scene of unspeakable horror would disturb the museum’s upbeat, family-friendly ambiance? Or is it that its academic advisors come from the mainstream of contemporary Biblical studies, for whom the Resurrection is not a truth but a trope? Perhaps both factors are at play.
Another curious aspect of the display, though unhappy, is understandable: The Hebrew and Christian Bibles are rendered as two segregated and self-contained experiences, and like oil and vinegar, the exhibition paths are not allowed to mix. Unfortunately, the visitor who has waited for the one is unlikely to stand in line again for the other. One can appreciate that the organizers wanted to avoid a linear sequence in which the Hebrew Bible serves as mere prelude to the New, but in the process, the relationship between the two is lost. Surely a compromise might have been found, perhaps with the occasional physical passage between the two, so that the viewer might move back and forth and make his own connections—alas, a proposition that is heretical in today’s world of manipulative museology.
If the third floor gives us the stories in the Bible, the fourth gives us the book itself—the text along with its translations, copies, orthography, printing, binding, illustrations, and all else associated with a literary artifact. The oldest objects here (although of disputed authenticity) are tiny fragments of the Dead Sea Scrolls, and from them to the most recent translations, one is struck by the fastidious probity with which the text was transmitted. Here we learn the high stakes of tampering with the Bible in the story of how the 14th-century theologian John Wycliffe was posthumously excommunicated for daring to make the first English translation. We also learn how the Bible acted to codify and order regional dialects into a national language; Martin Luther’s translation did this for German just as the King James translation did a century later for English. A remarkable display shows the innumerable phrases from the Bible that have entered vernacular speech in the world’s languages, some of which I did not know (e.g., “den of thieves,” “suffer fools gladly,” “at their wit’s end”).
Here one senses a certain reservation—a curatorial suspicion, perhaps, that vellum manuscripts and printed books are intrinsically boring. There is nothing an exhibition designer fears more than a bored visitor. This would account for the rather plaintive effort to provide visual relief in the form of arresting objects: a facsimile of the Liberty Bell with its inscription from Leviticus, a tableau of books burned by the Nazis, and statues of Galileo and Isaac Newton. These diversions suggest that the designers did not trust the words themselves and their hotly disputed variants and interpretations to generate interest on their own.
This is a lost opportunity. For instance, the history of the English translations would have been far more effective with a comparison of representative examples. One might illustrate various renderings of the 23rd Psalm, juxtaposing the lapidary King James version (“The Lord is my shepherd; I shall not want”) with the explanatory translation of the International Standard Version (“The Lord is the one who is shepherding me; I lack nothing”) or the willful flatness of the Good News Bible (“The Lord is my shepherd; I have everything I need”). A few examples from the recent push to purge the Bible of any and all sexist language would also have been eye-opening. To refer to this trend blithely in passing, as the wall labels do, without confronting the viewers with the sobering reality of a gender-neutral Bible is a sign of either haste or indifference.
And those who are not fascinated by the fact that the neuter possessive its appears just once in the entire King James translation still have the chance to take a peek at Elvis Presley’s personal copy of the Bible.
The truth is, the Museum of the Bible is as innocuous, gregarious, multifaceted, and congenial an institution as one might have hoped. It certainly does not preach biblical inerrancy; the attentive reader will see that Noah’s flood is anticipated by the much older flood story in the epic of Gilgamesh, complete with divine instructions on building the ark.
Nonetheless, the museum has been greeted with extraordinary hostility, although of a strangely unfocused sort. It has hardly been “dogged by scandal,” as Business Insider charged, apart from the importation of antique materials with a false provenance (something of which the Metropolitan Museum of Art and the Getty Museum have both been guilty). The real objection is not its business practices or its theology (which it wears so lightly as to be invisible), but rather that it comes from the wrong side of the cultural tracks. One has the sense that the museum is a social faux pas, that the wrong guests have crashed the party, blundering uninvited into Washington and violating rules of which they are ignorant. CityLab, the digital magazine of the Atlantic, expressed this attitude most pithily when it called the museum “pure, 100 percent, uncut megaplex evangelical white Protestantism…megachurch concentrate.”
The charge that the museum presents a narrow and exclusively white version of Protestantism is undercut by a single visit; the audience is comprehensively ecumenical and international. But it has been repeated endlessly nonetheless, in part because of the recent publication of Bible Nation: The United States of Hobby Lobby, by Candida R. Moss and Joel S. Baden—a furiously ambitious attempt to discredit the museum, its theology, its founders, and Hobby Lobby itself. (This may be the first time a book has been published condemning a museum before it was built.) Moss first came to public attention in 2013 with The Myth of Persecution: How Early Christians Invented a Story of Martyrdom, which charges early Christians with forging accounts of their suppression. Bible Nation is written in a similarly debunking spirit. For her, the “thousands of fragments of contradictory material” in the Bible make it pointless to try to make of it a coherent or meaningful document. The insights of contemporary biblical scholarship, she says with conspicuous exasperation, ought to be “a faith killer.”
Clearly they have been for her. But if anything, the museum’s fourth floor testifies to the opposite: This is a building built by believers for whom the analysis of the materials contained within is a noble task. The curators have taken painstaking efforts to get it right, as did those scribes who through the millennia worked to reconcile the discrepancies, to choose among the contradictory variants the ones that are most rigorously supported. And where the conflicting documents are irreconcilable—as between the two opening chapters of Genesis, or between the four Gospels—the procedure has always been to preserve multiple sources rather than impose an arbitrary uniformity. In the end, the Museum of the Bible pitches it about right.
Its greatest surprise is that it makes no truth claim. The central propositions of the Hebrew Bible (God’s covenant with his chosen people) and the Christian Bible (Christ’s Resurrection) are subordinated to the existence of the Books that carry those propositions. One might imagine that a museum devoted to other monumental culture-shaping books, say The Iliad and The Odyssey, would look similar in approach.
And of course they are right to have done so. The place to make claims to the truth in these cases is a church or synagogue, not a museum. But even the lesser claim that the Museum of the Bible makes, that the Bible is a foundational document of our civilization, is to many an unwelcome one. And as biblical ignorance grows, the claim grows progressively more unwelcome. The Bible seems to be one of those books that the less people know about it, the less they like it. And for those who know it only as a “Bronze Age document” (one of Christopher Hitchens’s favorite epithets) and from some of the livelier passages in Leviticus, it is an offensive absurdity.
Writing in the Washington Post, the novelist and art historian Noah Charney asserted that “in Washington, separation of church and state isn’t just a principle of governance, it’s an architectural and geographic rule as well.” It’s unclear who established such a rule, and in any case, the “principle” of the “separation of church and state” does not originate in the Constitution. Rather, its source is to be found in Matthew 22:21: “Render therefore unto Caesar the things which are Caesar’s; and unto God the things that are God’s.” We all carry a stock of mental habits and moral values, and a language with which to express them, that ultimately derives from the Bible, whether we have read it or not. The Museum of the Bible merely proposes that we read it. And for all its shortcomings and missed opportunities, and all its fits of cuteness (there’s a Manna Café), it does so with refreshing sincerity and surprising effectiveness.
1 The building has one passage of real brilliance. The entrance portal on Fourth Street is flanked by a pair of immense bronze panels, nearly 40 feet high, that call to mind Boaz and Jachin, the mighty bronze pillars that guarded Solomon’s Temple. In fact, they are panels of text inscribed with the opening lines of Genesis, as printed in the Gutenberg Bible of 1454, the first mass-produced book to use moveable metal type. The letters are reversed, confusingly, until one realizes that this aids in making souvenir rubbings that themselves embody the printing process. The genesis evoked here is that of universal literacy and the cultural transformation wrought by the printed book.
Review of '(((Semitism)))' By Jonathan Weisman
Now, two years later, Weisman has published a book about anti-Semitism—and, more specifically, about the supposedly grave threat to Jews springing from the alt-right and the Trump administration. (((Semitism))), for such is the book’s title, suffers from two grave ills. First, Weisman believes that political leftism and Judaism are identical. Second, he knows little or nothing about the political right, in whose camp he places the alt-right movement. Combine these two shortcomings with a heavy dose of self-regard, and you get (((Semitism))): a toxic brew of anti-Israel sentiment, bagels-and-lox cultural Jewishness, and unbridled hostility toward mainstream conservatism, which he lumps together with despicable alt-right anti-Semitism.
According to Weisman, Judaism derives its present-day importance from the way it provides a religious echo to secular leftism. This is his actual opening sentence: “The Jew flourishes when borders come down, when boundaries blur, when walls are destroyed, not erected.” Thus does he describe a people whose binding glue over the millennia is a faith tradition literally designed to separate its adherents from those who are not their co-religionists.
This ethnic-Jew-centric perspective leads Weisman to reject not merely Jewish observance, which he finds parochial and divisive, but the tie between Judaism and Israel, which he addresses in a chapter subtly titled “The Israel Deception.” He laments: “The American Jewish obsession with Israel has taken our eyes off not only the politics of our own country, the growing gulf between rich and poor, and the rising tide of nationalism but also our own grounding in faith.” He sneers at Jews who promote the “tried and true theme of the little Israeli David squaring off against the giant Arab Goliath.” Weisman believes, like John Mearsheimer and Stephen Walt, that members of both parties are guilty of “kissing the ring” at AIPAC, of “turn[ing] to mush when the subject was Israel.” In fact, Weisman says, the anti-Semitic BDS movement on college campuses “is worrisome as much for what it says about the American Jew’s inextricable links to Israel as for what it says about anti-Semitism.” In his view, “Barack Obama was the apotheosis of liberal internationalism.…The Jew thrived.”
Thus Weisman has this to say about his infamous Iran-deal chart: “I had my own brush with fratricidal Jew-on-Jew violence during that heated debate.” Was Weisman attacked? Assaulted? No, he received some nasty notes in response to running a chart. Weisman says he found the uproar “absurd” and laments that he is “still hearing about it.” Poor lamb.
Weisman gets it right when he writes about the mainstreaming of the alt-right—the winking and nodding from Breitbart News and Donald Trump himself, the willingness of many in the mainstream to reward alt-right popularizers like Milo Yiannopoulos. (I left Breitbart in March 2016 due to differences regarding our coverage of the presidential campaign.) Weisman is at his best when describing the origins of the alt-right and its infiltration of more well-read outlets.
But he can’t stop there. Instead, he seeks to attribute the alt-right to the entire conservative movement and builds, Hillary Clinton–style, a fictitious basket of deplorables amounting to half the conservative movement. He cites “Christian fundamentalist” Israel supporters, to whom he wrongly attributes universally apocalyptic End Times motivations. He condemns anti-immigration advocates, whose opposition to the importation of unvetted Muslim refugees he likens to anti-Semitic anti-immigrant movements of years past. He reviles “anti-feminists,” those who oppose political correctness in video games, Republican Jewish Coalition members who laughed at Trump making a Jewish joke, and free-speech advocates supposedly engaged in “forcible seizure of the free-speech movement” (a weird charge to level, considering that it cost Berkeley $600,000 to prevent Antifa from burning down the campus when I visited). In other words, pretty much anyone who didn’t vote for Hillary Clinton gets smeared with the alt-right brush, outside of those specifically targeted by the alt-right.
The problem of alt-right anti-Semitism, Weisman thinks, is just a problem of anti-leftism. If we could all just give money to the notoriously left-wing propaganda-pushing Southern Poverty Law Center, watch Trump-referential productions of Eugene Ionesco’s Rhinoceros at the Edinburgh National Festival (yes, this is in the book, and no, it is not parody), ignore anti-Semitic attacks at the Chicago Dyke March (I am not making this up), slap some vinyl signs on synagogues (no, I am still not making this up), and “not get too self-congratulatory” (seriously, guys, this is all real), all will be well. In the end, Weisman’s goal is to build a coalition of ethnic and political groups, cobbled together in common cause against conservatives—conservatives, he says, who represent the alt-right support base.
As the alt-right’s chief journalistic target in 2016, I’m always happy to see them clubbed like a baby seal. And there is a good book to be written about the alt-right. At times, Weisman borders on it, particularly when he seeks to investigate the bizarre relationship between Trump and the trolls who worship him.
But Weisman’s ardent allegiance to leftism leads him to misdiagnose the problem, to ignore the rising anti-Semitism of his own side (the DNC nearly elected anti-Semite Keith Ellison its leader last year), to prescribe the wrong solutions, and, most of all, to react in knee-jerk fashion to the alt-right by flattering himself as the epitome of everything the alt-right hates. Thin as the paper it was printed on, (((Semitism))) is a failure of imagination.
Review of 'The People vs. Democracy' By Yascha Mounk
The save-democracy writers have generally taken two tacks in answering it. Some see a simple replay of the previous century: The West’s authoritarian spirit has resurfaced, they say, and seduced the multitudes once more. It is up to heroic liberals to fight back, as their forebears in the 1940s did. But others have tried to trace today’s crack-up to liberal missteps or even to flaws in the liberal-democratic idea. This is a more useful avenue for those of us concerned with the preservation of self-government.
Yascha Mounk’s The People vs. Democracy wants to be the latter kind of (subtle, thoughtful) book but too often ends up making the cruder arguments of the former. The author, a lecturer on government at Harvard, argues that while liberals took liberalism’s permanence for granted, voters became “fed up with liberal democracy itself.” Elections across the developed world, in which fringe characters and populists routed mainstream establishments, provide the main evidence. Mounk has also collected mountains of public-opinion data, mainly from the World Values Survey, which shows a deeper transformation: People in the U.S. and Europe increasingly reject democratic principles and even hanker for strongman authority.
Fewer than a third of U.S. millennials “consider it essential to live in a democracy.” One in four believes that democracy is a bad form of government. One-third of Americans of all ages now favor some sort of strongman rule, without checks and balances, and one in six would prefer the strongman to don a military uniform. Similarly, a third of German respondents and an astonishing half of those from Britain and France support strongman rule. Parties of the far right and far left are rapidly expanding their appeal, particularly among young people. There are many more depressing statistics of the kind, presented in numerous charts and graphs throughout.
Mounk thinks there are two factors at play in these attitudes. The first is the emergence of illiberal democracy, or “democracy without rights,” as a serious rival to the current order. Vladimir Putin in Russia, Recep Tayyip Erdogan in Turkey, Narendra Modi in India, and Viktor Orbán in Hungary, among others, exemplify this model. Once elected, these leaders chip away at individual rights and independent institutions until democracy is all but hollowed out and it becomes nigh impossible to remove the ruling party from office. Mounk strongly suspects that the Trump administration plans to pull something like this on the American public, though thus far the president’s illiberal bluster has proved to be just that.
The second factor is undemocratic liberalism, or “rights without democracy.” Here Mounk has in mind technocratic liberalism’s drive to remove an ever-growing share of policy decisions from the purview of voters and their elected representatives. This has been necessitated by the complexity of contemporary problems such as climate change and international trade, Mounk contends. Yet rights without democracy has generated mistrust and cynicism. Liberals, he says, should aim to “strike a better balance between expertise and responsiveness to the popular will.”
Mounk’s sections on the damage wrought by undemocratic liberalism should be instructive to his fellow liberals. But conservatives have for years stamped their feet and pulled their hair over the same phenomenon, only to be ignored by elite liberals on both sides of the Atlantic. Right-of-center readers might be forgiven for sarcastically muttering “no kidding” as Mounk takes them on a guided tour of liberal folly.
Conservatives have been warning about administrative bloat, for example, since at least the first half of the 20th century. It turns out that they had a point. Writes Mounk: “The job of legislating has been supplanted by so-called ‘independent agencies’ that can formulate policy on their own and are remarkably free from oversight.” Ditto activist judges: “The best studies of the Supreme Court do suggest that its role is far larger than it was when the Constitution was written.” And ditto the European Union’s democratic deficit: “To create a truly ‘single market,’ the EU has introduced far-reaching limitations” on state sovereignty.
He also strikes upon the idea that nations really are different from one another, and in politically significant ways. “After a few months living in England,” the German-born author confesses, “I began to recognize that the differences between British and German culture were much deeper than I imagined.” No kidding. What about the anti-Western monoculture that lords over most college campuses? Here, too, the right was on to something. “Far from seeking to preserve the most valuable aspects of our political system,” Mounk writes, liberal academe’s “overriding objective is, all too often, to help students recognize its manifold injustices and hypocrisies.”
Mounk’s discovery of these core conservative insights, however, doesn’t spur a rethink of his reflexive disdain for conservatives. This is most apparent in his coverage of American politics. The book is supposed to be a battle cry for democracy to rally left and right alike. Yet, with few exceptions, conservatives and Republicans are cast as cynical operators who rely on underhanded tactics and coded racism to undermine democracy and ultimately abet the populists. (Hillary Clinton and Barack Obama receive adulatory treatment.)
He describes Senate Majority Leader Mitch McConnell’s refusal to hold hearings for Merrick Garland, Obama’s final Supreme Court nominee, and GOP filibustering of Democratic legislation as “abuse[s] of constitutional norms” (they weren’t). But he pooh-poohs popular outrage at Clinton’s unlawful use of a private email server and elides the Obama Internal Revenue Service’s selective targeting of conservative nonprofits ahead of the 2012 election.
He also underestimates a third development of recent years—liberal illiberalism (my term, not his)—a liberalism that not only lacks democratic legitimacy but seeks to destroy, in the name of tolerance, the fundamental rights of those who stand in the way of full-spectrum progressivism. This is the kind of liberalism that compels nuns to pay for contraceptives and evangelical bakers to bake gay-wedding cakes, silences conservative speakers on campus, and denounces sushi restaurants as “cultural appropriation.”
Mounk isn’t ignorant of these tendencies, and he wants liberals to ease up (a bit). Yet, because he maintains that the censorious left’s heart is in the right place, he can’t seem to reach the necessary conclusion: that much illiberalism today comes, not from the right, but from ostensibly liberal quarters, and that this says something about the nature of contemporary liberal ideology. The true illiberal villains, for Mounk, are only ever the Modis, Trumps, and Orbáns—plus the troglodytes down South. Well-intentioned liberals who back censorship, he writes at one point, “ignore what would happen if the dean of Southern Baptist University…were to gain the right to censor utterances” he dislikes.
In fact, there is no such institution as “Southern Baptist University.” According to the most recent rankings from the Foundation for Individual Rights in Education, however, four of the 10 worst U.S. colleges for free speech last year were public schools located in blue states, while five were blue-state private or religious schools with longstanding reputations for progressivism (Mounk’s own Harvard among them).
His quickness to frame Southern Baptists as illiberal bogeys is telling and suggests that, for all its exhortations against liberal highhandedness, Mounk’s book comes from the same high-handed place. It colors the author’s approach to questions of nationalism and immigration that are at the heart of the current ferment. He concedes that liberal democracy is compatible with voter demand for limits on mass migration. But he can’t help but attribute those demands to irrational “resentment,” eschewing completely the—perfectly rational—fear of Islamist terrorism.
He sees the nation-state as an “imagined community” to which too many of our fellow citizens remain attached. Ideally for Mounk, the empire of rights and procedural norms would thrive independently of nationhood, civilizational barriers, and sacred communities. For now, he allows, liberals unfortunately have to contend with these anachronisms. His view is an improvement over the liberal transnationalism that is still committed to doing away with borders altogether, even after the popular counterpunch of 2016. Still, why should Poles or Hungarians or Britons remain politically attached to Polish, Hungarian, or British democracy? What is it about Polishness as such that matters to Poland’s democratic character? Mounk has no answers.
No wonder, finally, that the author never satisfactorily links liberalism’s turn against democracy and the rise of illiberal democrats. He can never bring himself to say outright that the one (rights without democracy) is begetting the other (democracy without rights). Liberals, of the classical and the contemporary varieties, badly need a book that offers such uncomfortable reckonings. Yascha Mounk’s The People vs. Democracy is not it.