Few of the major problems facing American society today are entirely new, but in recent years most of them have either taken new forms or reached new levels of urgency. To make matters more difficult, in many cases the solutions formerly relied upon have proved to be ineffective, leaving us so frustrated that we seize desperately on proposals which promise much but deliver little.
In the hope of bringing greater clarity to the understanding of these problems, and of framing workable solutions and policies, we are inaugurating this new series of articles. Like James Q. Wilson’s below, each subsequent piece in the series will begin with a reexamination of a particular issue by a writer who has lived with and studied it for a long time and who will then proceed to suggest “What To Do About” it. Among those already scheduled for publication in the coming months are Charles Murray and Richard J. Herrnstein on welfare; Gertrude Himmelfarb on the universities; William J. Bennett on our children; Robert H. Bork on the First Amendment; and Richard Pipes on Russia.
James Q. Wilson, professor of management and public policy at UCLA, is the author of many books and articles on crime, including Thinking about Crime; Varieties of Police Behavior; and Crime and Human Nature (written with Richard J. Herrnstein). He is also the editor of Crime and Public Policy and co-editor, with Joan Petersilia, of Crime (forthcoming from ICS Press).
When the United States experienced the great increase in crime that began in the early 1960’s and continued through the 1970’s, most Americans were inclined to attribute it to conditions unique to this country. Many conservatives blamed it on judicial restraints on the police, the abandonment of capital punishment, and the mollycoddling of offenders; many liberals blamed it on poverty, racism, and the rise of violent television programs. Europeans, to the extent they noticed at all, referred to it, sadly or patronizingly, as the “American” problem, a product of our disorderly society, weak state, corrupt police, or imperfect welfare system.
Now, 30 years later, any serious discussion of crime must begin with the fact that, except for homicide, most industrialized nations have crime rates that resemble those in the United States. All the world is coming to look like America. In 1981, the burglary rate in Great Britain was much less than that in the United States; within six years the two rates were the same; today, British homes are more likely to be burgled than American ones. In 1980, the rate at which automobiles were stolen was lower in France than in the United States; today, the reverse is true. By 1984, the burglary rate in the Netherlands was nearly twice that in the United States. In Australia and Sweden certain forms of theft are more common than they are here. While property-crime rates were declining during most of the 1980’s in the United States, they were rising elsewhere.1
America, it is true, continues to lead the industrialized world in murders. There can be little doubt that part of this lead is to be explained by the greater availability of handguns here. Arguments that once might have been settled with insults or punches are today more likely to be settled by shootings. But guns are not the whole story. Big American cities have had more homicides than comparable European ones for almost as long as anyone can find records. New York and Philadelphia have been more murderous than London since the early part of the 19th century. This country has had a violent history; with respect to murder, that seems likely to remain the case.
But except for homicide, things have been getting better in the United States for over a decade. Since 1980, robbery rates (as reported in victim surveys) have declined by 15 percent. And even with regard to homicide, there is relatively good news: in 1990, the rate at which adults killed one another was no higher than it was in 1980, and in many cities it was considerably lower.
This is as it was supposed to be. Starting around 1980, two things happened that ought to have reduced most forms of crime. The first was the passing into middle age of the postwar baby boom. By 1990, there were 1.5 million fewer boys between the ages of fifteen and nineteen than there had been in 1980, a drop that meant that this youthful fraction of the population fell from 9.3 percent to 7.2 percent of the total.
In addition, the great increase in the size of the prison population, caused in part by the growing willingness of judges to send offenders to jail, meant that the dramatic reductions in the costs of crime to the criminal that occurred in the 1960’s and 1970’s were slowly (and very partially) being reversed. Until around 1985, this reversal involved almost exclusively real criminals and parole violators; it was not until after 1985 that more than a small part of the growth in prison populations was made up of drug offenders.
Because of the combined effect of fewer young people on the street and more offenders in prison, many scholars, myself included, predicted a continuing drop in crime rates throughout the 1980’s and into the early 1990’s. We were almost right: crime rates did decline. But suddenly, starting around 1985, even as adult homicide rates were remaining stable or dropping, youthful homicide rates shot up.
Alfred Blumstein of Carnegie-Mellon University has estimated that the rate at which young males, ages fourteen to seventeen, kill people has gone up significantly for whites and incredibly for blacks. Between 1985 and 1992, the homicide rate for young white males went up by about 50 percent but for young black males it tripled.
The public perception that today’s crime problem is different from and more serious than that of earlier decades is thus quite correct. Youngsters are shooting at people at a far higher rate than at any time in recent history. Since young people are more likely than adults to kill strangers (as opposed to lovers or spouses), the risk to innocent bystanders has gone up. There may be some comfort to be had in the fact that youthful homicides are only a small fraction of all killings, but given their randomness, it is not much solace.
The United States, then, does not have a crime problem; it has at least two. Our high (though now slightly declining) rates of property crime reflect a profound, worldwide cultural change: prosperity, freedom, and mobility have emancipated people almost everywhere from those ancient bonds of custom, family, and village that once held in check both some of our better and many of our worst impulses. The power of the state has been weakened, the status of children elevated, and the opportunity for adventure expanded; as a consequence, we have experienced an explosion of artistic creativity, entrepreneurial zeal, political experimentation—and criminal activity. A global economy has integrated the markets for clothes, music, automobiles—and drugs.
There are only two restraints on behavior—morality, enforced by individual conscience or social rebuke, and law, enforced by the police and the courts. If society is to maintain a behavioral equilibrium, any decline in the former must be matched by a rise in the latter (or vice versa). If familial and traditional restraints on wrongful behavior are eroded, it becomes necessary to increase the legal restraints. But the enlarged spirit of freedom and the heightened suspicion of the state have made it difficult or impossible to use the criminal-justice system to achieve what custom and morality once produced.
This is the modern dilemma, and it may be an insoluble one, at least for the West. The Islamic cultures of the Middle East and the Confucian cultures of the Far East believe that they have a solution. It involves allowing enough liberty for economic progress (albeit under general state direction) while reserving to the state, and its allied religion, nearly unfettered power over personal conduct. It is too soon to tell whether this formula—best exemplified by the prosperous but puritanical city-state of Singapore—will, in the long run, be able to achieve both reproducible affluence and intense social control.
Our other crime problem has to do with the kind of felonies we have: high levels of violence, especially youthful violence, often occurring as part of urban gang life, produced disproportionately by a large, alienated, and self-destructive underclass. This part of the crime problem, though not uniquely American, is more important here than in any other industrialized nation. Britons, Germans, and Swedes are upset about the insecurity of their property and uncertain about what response to make to its theft, but if Americans only had to worry about their homes being burgled and their autos stolen, I doubt that crime would be the national obsession it has now become.
Crime, we should recall, was not a major issue in the 1984 presidential election and had only begun to be one in the 1988 contest; by 1992, it was challenging the economy as a popular concern and today it dominates all other matters. The reason, I think, is that Americans believe something fundamental has changed in our patterns of crime. They are right. Though we were unhappy about having our property put at risk, we adapted with the aid of locks, alarms, and security guards. But we are terrified by the prospect of innocent people being gunned down at random, without warning and almost without motive, by youngsters who afterward show us the blank, unremorseful faces of seemingly feral, presocial beings.
Criminology has learned a great deal about who these people are. In studies both here and abroad it has been established that about 6 percent of the boys of a given age will commit half or more of all the serious crime produced by all boys of that age. Allowing for measurement errors, it is remarkable how consistent this formula is—6 percent causes 50 percent. It is roughly true in places as different as Philadelphia, London, Copenhagen, and Orange County, California.
We also have learned a lot about the characteristics of the 6 percent. They tend to have criminal parents, to live in cold or discordant families (or pseudo-families), to have a low verbal-intelligence quotient and to do poorly in school, to be emotionally cold and temperamentally impulsive, to abuse alcohol and drugs at the earliest opportunity, and to reside in poor, disorderly communities. They begin their misconduct at an early age, often by the time they are in the third grade.
These characteristics tend to be found not only among the criminals who get caught (and who might, owing to bad luck, be an unrepresentative sample of all high-rate offenders), but among those who do not get caught but reveal their behavior on questionnaires. And the same traits can be identified in advance among groups of randomly selected youngsters, long before they commit any serious crimes—not with enough precision to predict which individuals will commit crimes, but with enough accuracy to be a fair depiction of the group as a whole.2
Here a puzzle arises: if 6 percent of the males cause so large a fraction of our collective misery, and if young males are less numerous than once was the case, why are crime rates high and rising? The answer, I conjecture, is that the traits of the 6 percent put them at high risk for whatever criminogenic forces operate in society. As the costs of crime decline or the benefits increase; as drugs and guns become more available; as the glorification of violence becomes more commonplace; as families and neighborhoods lose some of their restraining power—as all these things happen, almost all of us will change our ways to some degree. For the most law-abiding among us, the change will be quite modest: a few more tools stolen from our employer, a few more traffic lights run when no police officer is watching, a few more experiments with fashionable drugs, and a few more business deals on which we cheat. But for the least law-abiding among us, the change will be dramatic: they will get drunk daily instead of just on Saturday night, try PCP or crack instead of marijuana, join gangs instead of marauding in pairs, and buy automatic weapons instead of making zip guns.
A metaphor: when children play the schoolyard game of crack-the-whip, the child at the head of the line scarcely moves but the child at the far end, racing to keep his footing, often stumbles and falls, hurled to the ground by the cumulative force of many smaller movements back along the line. When a changing culture escalates criminality, the at-risk boys are at the end of the line, and the conditions of American urban life—guns, drugs, automobiles, disorganized neighborhoods—make the line very long and the ground underfoot rough and treacherous.
Much is said these days about preventing or deterring crime, but it is important to understand exactly what we are up against when we try. Prevention, if it can be made to work at all, must start very early in life, perhaps as early as the first two or three years, and given the odds it faces—childhood impulsivity, low verbal facility, incompetent parenting, disorderly neighborhoods—it must also be massive in scope. Deterrence, if it can be made to work better (for surely it already works to some degree), must be applied close to the moment of the wrongful act or else the present-orientedness of the youthful would-be offender will discount the threat so much that the promise of even a small gain will outweigh its large but deferred costs.
In this country, however, and in most Western nations, we have profound misgivings about doing anything that would give prevention or deterrence a chance to make a large difference. The family is sacrosanct; the family-preservation movement is strong; the state is a clumsy alternative. “Crime-prevention” programs, therefore, usually take the form of creating summer jobs for adolescents, worrying about the unemployment rate, or (as in the proposed 1994 crime bill) funding midnight basketball leagues. There may be something to be said for all these efforts, but crime prevention is not one of them. The typical high-rate offender is well launched on his career before he becomes a teenager or has ever encountered the labor market; he may like basketball, but who pays for the lights and the ball is a matter of supreme indifference to him.
Prompt deterrence has much to recommend it: the folk wisdom that swift and certain punishment is more effective than severe penalties is almost surely correct. But the greater the swiftness and certainty, the less attention paid to the procedural safeguards essential to establishing guilt. As a result, despite their good instincts for the right answers, most Americans, frustrated by the restraints (many wise, some foolish) on swiftness and certainty, vote for proposals to increase severity: if the penalty is 10 years, let us make it 20 or 30; if the penalty is life imprisonment, let us make it death; if the penalty is jail, let us make it caning.
Yet the more draconian the sentence, the less (on the average) the chance of its being imposed; plea bargains see to that. And the most draconian sentences will, of necessity, tend to fall on adult offenders nearing the end of their criminal careers and not on the young ones who are in their criminally most productive years. (The peak ages of criminality are between sixteen and eighteen; the average age of prison inmates is ten years older.) I say “of necessity” because almost every judge will give first-, second-, or even third-time offenders a break, reserving the heaviest sentences for those men who have finally exhausted judicial patience or optimism.
Laws that say “three strikes and you’re out” are an effort to change this, but they suffer from an inherent contradiction. If they are carefully drawn so as to target only the most serious offenders, they will probably have a minimal impact on the crime rate; but if they are broadly drawn so as to make a big impact on the crime rate, they will catch many petty repeat offenders whom few of us think really deserve life imprisonment.
Prevention and deterrence, albeit hard to augment, at least are plausible strategies. Not so with many of the other favorite nostrums, like reducing the amount of violence on television. Televised violence may have some impact on criminality, but I know of few scholars who think the effect is very large. And to achieve even a small difference we might have to turn the clock back to the kind of programming we had around 1945, because the few studies that correlate programming with the rise in violent crime find the biggest changes occurred between that year and 1974. Another favorite, boot camp, makes good copy, but so far no one has shown that it reduces the rate at which the former inmates commit crimes.
Then, of course, there is gun control. Guns are almost certainly contributors to the lethality of American violence, but there is no politically or legally feasible way to reduce the stock of guns now in private possession to the point where their availability to criminals would be much affected. And even if there were, law-abiding people would lose a means of protecting themselves long before criminals lost a means of attacking them.
As for rehabilitating juvenile offenders, it has some merit, but there are rather few success stories. Individually, the best (and best-evaluated) programs have minimal, if any, effects; collectively, the best estimate of the crime-reduction value of these programs is quite modest, something on the order of 5 or 10 percent.3
What, then, is to be done? Let us begin with policing, since law-enforcement officers are that part of the criminal-justice system which is closest to the situations where criminal activity is likely to occur.
It is now widely accepted that, however important it is for officers to drive around waiting for 911 calls summoning their help, doing that is not enough. As a supplement to such a reactive strategy—composed of random preventive patrol and the investigation of crimes that have already occurred—many leaders and students of law enforcement now urge the police to be “proactive”: to identify, with the aid of citizen groups, problems that can be solved so as to prevent criminality, and not only to respond to it. This is often called community-based policing; it seems to entail something more than feel-good meetings with honest citizens, but something less than allowing neighborhoods to assume control of the police function.
The new strategy might better be called problem-oriented policing. It requires the police to engage in directed, not random, patrol. The goal of that direction should be to reduce, in a manner consistent with fundamental liberties, the opportunity for high-risk persons to do those things that increase the likelihood of their victimizing others.
For example, the police might stop and pat down persons whom they reasonably suspect may be carrying illegal guns.4 The Supreme Court has upheld such frisks when an officer observes “unusual conduct” leading him to conclude that “criminal activity may be afoot” on the part of a person who may be “armed and dangerous.” This is all rather vague, but it can be clarified in two ways.
First, statutes can be enacted that make certain persons, on the basis of their past conduct and present legal status, subject to pat-downs for weapons. The statutes can, as is now the case in several states, make all probationers and parolees subject to nonconsensual searches for weapons as a condition of their remaining on probation or parole. Since three-fourths of all convicted offenders (and a large fraction of all felons) are in the community rather than in prison, there are on any given day over three million criminals on the streets under correctional supervision. Many are likely to become recidivists. Keeping them from carrying weapons will materially reduce the chances that they will rob or kill. The courts might also declare certain dangerous street gangs to be continuing criminal enterprises, membership in which constitutes grounds for police frisks.
Second, since I first proposed such a strategy, I have learned that there are efforts under way in public and private research laboratories to develop technologies that will permit the police to detect from a distance persons who are carrying concealed weapons on the streets. Should these efforts bear fruit, they will provide the police with the grounds for stopping, questioning, and patting down even persons not on probation or parole or obviously in gangs.
Whether or not the technology works, the police can also offer immediate cash rewards to people who provide information about individuals illegally carrying weapons. Spending $100 on each good tip will have a bigger impact on dangerous gun use than will the same amount spent on another popular nostrum—buying back guns from law-abiding people.5
Getting illegal firearms off the streets will require that the police be motivated to do all of these things. But if the legal, technological, and motivational issues can be resolved, our streets can be made safer even without sending many more people to prison.
The same directed-patrol strategy might help keep known offenders drug-free. Most persons jailed in big cities are found to have been using illegal drugs within the day or two preceding their arrest. When convicted, some are given probation on condition that they enter drug-treatment programs; others are sent to prisons where (if they are lucky) drug-treatment programs operate. But in many cities the enforcement of such probation conditions is casual or nonexistent; in many states, parolees are released back into drug-infested communities with little effort to ensure that they participate in whatever treatment programs are to be found there.
Almost everyone agrees that more treatment programs should exist. But what many advocates overlook is that the key to success is steadfast participation and many, probably most, offenders have no incentive to be steadfast. To cope with this, patrol officers could enforce random drug tests on probationers and parolees on their beats; failing to take a test when ordered, or failing the test when taken, should be grounds for immediate revocation of probation or parole, at least for a brief period of confinement.
The goal of this tactic is not simply to keep offenders drug-free (and thereby lessen their incentive to steal the money needed to buy drugs and reduce their likelihood of committing crimes because they are on a drug high); it is also to diminish the demand for drugs generally and thus the size of the drug market.
Lest the reader embrace this idea too quickly, let me add that as yet we have no good reason to think that it will reduce the crime rate by very much. Something akin to this strategy, albeit one using probation instead of police officers, has been tried under the name of “intensive-supervision programs” (ISP), involving a panoply of drug tests, house arrests, frequent surveillance, and careful records. By means of a set of randomized experiments carried out in fourteen cities, Joan Petersilia and Susan Turner, both then at RAND, compared the rearrest rates of offenders assigned to ISP with those of offenders in ordinary probation. There was no difference.
Still, this study does not settle the matter. For one thing, since the ISP participants were under much closer surveillance than the regular probationers, the former were bound to be caught breaking the law more frequently than the latter. It is thus possible that a higher fraction of the crimes committed by the ISP group than by the control group was detected and resulted in a return to prison, which would mean, if true, a net gain in public safety. For another thing, “intensive” supervision was in many cases not all that intensive—in five cities, contacts with the probationers took place only about once a week, and for all cities drug tests occurred, on average, about once a month. Finally, there is some indication that participation in treatment programs was associated with lower recidivism rates.
Both anti-gun and anti-drug police patrols will, if performed systematically, require big changes in police and court procedures and a significant increase in the resources devoted to both, at least in the short run. (ISP is not cheap, and it will become even more expensive if it is done in a truly intensive fashion.) Most officers have at present no incentive to search for guns or enforce drug tests; many jurisdictions, owing to crowded dockets or overcrowded jails, are lax about enforcing the conditions of probation or parole. The result is that the one group of high-risk people over which society already has the legal right to exercise substantial control is often out of control, “supervised,” if at all, by means of brief monthly interviews with overworked probation or parole officers.
Another promising tactic is to enforce truancy and curfew laws. This arises from the fact that much crime is opportunistic: idle boys, usually in small groups, sometimes find irresistible the opportunity to steal or the challenge to fight. Deterring present-oriented youngsters who want to appear fearless in the eyes of their comrades while indulging their thrill-seeking natures is a tall order. While it is possible to deter the crimes they commit by a credible threat of prompt sanctions, it is easier to reduce the chances for risky group idleness in the first place.
In Charleston, South Carolina, for example, Chief Reuben Greenberg instructed his officers to return all school-age children to the schools from which they were truant and to return all youngsters violating an evening-curfew agreement to their parents. As a result, groups of school-age children were no longer to be found hanging out in the shopping malls or wandering the streets late at night.
There has been no careful evaluation of these efforts in Charleston (or, so far as I am aware, in any other big city), but the rough figures are impressive—the Charleston crime rate in 1991 was about 25 percent lower than the rate in South Carolina’s other principal cities and, for most offenses (including burglaries and larcenies), lower than what that city reported twenty years earlier.
All these tactics have in common putting the police, as the criminologist Lawrence Sherman of the University of Maryland phrases it, where the “hot spots” are. Most people need no police attention except for a response to their calls for help. A small fraction of people (and places) need constant attention. Thus, in Minneapolis, all of the robberies during one year occurred at just 2 percent of the city’s addresses. To capitalize on this fact, the Minneapolis police began devoting extra patrol attention, in brief but frequent bursts of activity, to those locations known to be trouble spots. Robbery rates evidently fell by as much as 20 percent and public disturbances by even more.
Some of the worst hot spots are outdoor drug markets. Because of either limited resources, a fear of potential corruption, or a desire to catch only the drug kingpins, the police in some cities (including, from time to time, New York) neglect street-corner dealing. By doing so, they get the worst of all worlds.
The public, seeing the police ignore drug dealing that is in plain view, assumes that they are corrupt whether or not they are. The drug kingpins, who are hard to catch and are easily replaced by rival smugglers, find that their essential retail distribution system remains intact. Casual or first-time drug users, who might not use at all if access to supplies were difficult, find access to be effortless and so increase their consumption. People who might remain in treatment programs if drugs were hard to get drop out upon learning that they are easy to get. Interdicting without merely displacing drug markets is difficult but not impossible, though it requires motivation which some departments lack and resources which many do not have.
The sheer number of police on the streets of a city probably has only a weak, if any, relationship with the crime rate; what the police do is more important than how many there are, at least above some minimum level. Nevertheless, patrols directed at hot spots, loitering truants, late-night wanderers, probationers, parolees, and possible gun carriers, all in addition to routine investigative activities, will require more officers in many cities. Between 1977 and 1987, the number of police officers declined in a third of the 50 largest cities and fell relative to population in many more. Just how far behind police resources have lagged can be gauged from this fact: in 1950 there was one violent crime reported for every police officer; in 1980 there were three violent crimes reported for every officer.
I have said little so far about penal policy, in part because I wish to focus attention on those things that are likely to have the largest and most immediate impact on the quality of urban life. But given the vast gulf between what the public believes and what many experts argue should be our penal policy, a few comments are essential.
The public wants more people sent away for longer sentences; many (probably most) criminologists think we use prison too much and at too great a cost and that this excessive use has had little beneficial effect on the crime rate. My views are much closer to those of the public, though I think the average person exaggerates the faults of the present system and the gains of some alternative (such as “three strikes and you’re out”).
The expert view, as it is expressed in countless op-ed essays, often goes like this: “We have been arresting more and more people and giving them longer and longer sentences, producing no decrease in crime but huge increases in prison populations. As a result, we have become the most punitive nation on earth.”
Scarcely a phrase in those sentences is accurate. The probability of being arrested for a given crime is lower today than it was in 1974. The amount of time served in state prison has been declining more or less steadily since the 1940’s. Taking all crimes together, time served fell from 25 months in 1945 to 13 months in 1984. Only for rape are prisoners serving as much time today as they did in the 40’s.
The net effect of lower arrest rates and shorter effective sentences is that the cost to the adult perpetrator of the average burglary fell from 50 days in 1960 to 15 days in 1980. That is to say, the chances of being caught and convicted, multiplied by the median time served if imprisoned, came in 1980 to less than a third of what they had been in 1960.6
Beginning around 1980, the costs of crime to the criminal began to inch up again—the result, chiefly, of an increase in the proportion of convicted persons who were given prison terms. By 1986, the “price” of a given burglary had risen to 21 days. Also beginning around 1980, as I noted at the outset, the crime rate began to decline.
It would be foolhardy to explain this drop in crime by the rise in imprisonment rates; many other factors, such as the aging of the population and the self-protective measures of potential victims, were also at work. Only a controlled experiment (for example, randomly allocating prison terms for a given crime among the states) could hope to untangle the causal patterns, and happily the Constitution makes such experiments unlikely.
Yet it is worth noting that nations with different penal policies have experienced different crime rates. According to David Farrington of Cambridge University, property-crime rates rose in England and Sweden at a time when both the imprisonment rate and time served fell substantially, while property-crime rates declined in the United States at a time when the imprisonment rate (but not time served) was increasing.
Though one cannot measure the effect of prison on crime with any accuracy, it certainly has some effects. By 1986, there were 55,000 more robbers in prison than there had been in 1974. Assume that each imprisoned robber would commit five such offenses per year if free on the street. This means that in 1986 there were 275,000 fewer robberies in America than there would have been had these 55,000 men been left on the street.
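The incapacitation estimate in this paragraph is a single multiplication; spelled out below, with the five-offenses-per-year rate being the assumption stated in the text rather than a measured figure:

```python
extra_robbers_in_prison = 55_000   # more robbers imprisoned in 1986 than in 1974
offenses_per_year_if_free = 5      # assumed annual robbery rate per offender on the street

robberies_averted = extra_robbers_in_prison * offenses_per_year_if_free
print(robberies_averted)  # 275000
```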
Nor, finally, does America use prison to a degree that vastly exceeds what is found in any other civilized nation. Compare the chance of going to prison in England and the United States if one is convicted of a given crime. According to Farrington, your chances were higher in England if you were found guilty of a rape, higher in America if you were convicted of an assault or a burglary, and about the same if you were convicted of a homicide or a robbery. Once in prison, you would serve a longer time in this country than in England for almost all offenses save murder.
James Lynch of American University has reached similar conclusions from his comparative study of criminal-justice policies. His data show that the chances of going to prison and the time served for homicide and robbery are roughly the same in the United States, Canada, and England.
Of late, drugs have changed American penal practice. In 1982, only about 8 percent of state-prison inmates were serving time on drug convictions. In 1987, that started to increase sharply; by 1994, over 60 percent of all federal and about 25 percent of all state prisoners were there on drug charges. In some states, such as New York, the percentage was even higher.
This change can be attributed largely to the advent of crack cocaine. Whereas snorted cocaine powder was expensive, crack was cheap; whereas the former was distributed through networks catering to elite tastes, the latter was mass-marketed on street corners. People were rightly fearful of what crack was doing to their children and demanded action; as a result, crack dealers started going to prison in record numbers.
Unfortunately, these penalties do not have the same incapacitative effect as sentences for robbery. A robber taken off the street is not replaced by a new robber who has suddenly found a market niche, but a drug dealer sent away is replaced by a new one because an opportunity has opened up.
We are left, then, with the problem of reducing the demand for drugs, and that in turn requires either prevention programs on a scale heretofore unimagined or treatment programs with a level of effectiveness heretofore unachieved. Any big gains in prevention and treatment will probably have to await further basic research into the biochemistry of addiction and the development of effective and attractive drug antagonists that reduce the appeal of cocaine and similar substances.7
In the meantime, it is necessary to build much more prison space, to find some other way of disciplining drug offenders, or to do both. There is very little to be gained, I think, from shortening the terms of existing non-drug inmates in order to free up more prison space. Except for a few elderly, nonviolent offenders serving very long terms, there are real risks associated with shortening the terms of the typical inmate.
Scholars disagree about the magnitude of those risks, but the best studies, such as the one of Wisconsin inmates done by John DiIulio of Princeton, suggest that the annual costs to society in crime committed by an offender on the street are probably twice the costs of putting him in a cell. That ratio will vary from state to state because states differ in the proportion of convicted persons they imprison—some states dip deeper down into the pool of convictees, thereby imprisoning some with minor criminal habits.
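The two-to-one ratio implies a simple cost comparison. The sketch below makes it concrete; the $25,000 annual cell cost is a hypothetical placeholder, and only the 2x ratio comes from the studies cited:

```python
annual_cell_cost = 25_000     # hypothetical annual cost of keeping one offender in a cell
street_to_cell_ratio = 2      # the studies' estimate: street crime costs roughly 2x a cell

annual_street_cost = street_to_cell_ratio * annual_cell_cost
net_social_saving = annual_street_cost - annual_cell_cost
print(net_social_saving)  # 25000
```

On these placeholder numbers, each inmate-year saves society roughly the cost of the cell itself; a state that dips deeper into the pool of minor offenders will see a smaller, or even negative, margin.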
But I caution the reader to understand that there are no easy prison solutions to crime, even if we build the additional space. The state-prison population more than doubled between 1980 and 1990, yet the victimization rate for robbery fell by only 23 percent. Even if we assign all of that gain to the increased deterrent and incapacitative effect of prison, which is implausible, the improvement is not vast. Of course, it is possible that the victimization rate would have risen, perhaps by a large amount, instead of falling if we had not increased the number of inmates. But we shall never know.
Recall my discussion of the decline in the costs of crime to the criminal, measured by the number of days in prison that result, on average, from the commission of a given crime. That cost is vastly lower today than in the 1950’s. But much of the decline (and since 1974, nearly all of it) is the result of a drop in the probability of being arrested for a crime, not in the probability of being imprisoned once arrested.
Anyone who has followed my writings on crime knows that I have defended the use of prison both to deter crime and incapacitate criminals. I continue to defend it. But we must recognize two facts. First, even modest additional reductions in crime, comparable to the ones achieved in the early 1980’s, will require vast increases in correctional costs and encounter bitter judicial resistance to mandatory sentencing laws. Second, America’s most troubling crime problem—the increasingly violent behavior of disaffected and impulsive youth—may be especially hard to control by means of marginal and delayed increases in the probability of punishment.
Possibly we can make larger gains by turning our attention to the relatively unexplored area of juvenile justice. Juvenile (or family) courts deal with young people just starting their criminal careers and with chronic offenders when they are often at their peak years of offending. We know rather little about how these courts work or with what effect. There are few, if any, careful studies of what happens, a result in part of scholarly neglect and in part of the practice in some states of shrouding juvenile records and proceedings in secrecy. Some studies, such as one by the Los Angeles Times of juvenile justice in California, suggest that young people found guilty of a serious crime are given sentences tougher than those meted out to adults.8 This finding is so counter to popular beliefs and the testimony of many big-city juvenile-court judges that some caution is required in interpreting it.
There are two problems. The first lies in defining the universe of people to whom sanctions are applied. In some states, such as California, it may well be the case that a juvenile found guilty of a serious offense is punished with greater rigor than an adult, but many juveniles whose behavior ought to be taken seriously (because they show signs of being part of the 6 percent) are released by the police or probation officers before ever seeing a judge. And in some states, such as New York, juveniles charged with having committed certain crimes, including serious ones like illegally carrying a loaded gun or committing an assault, may not be fingerprinted. Since persons with a prior record are usually given longer sentences than those without one, the failure to fingerprint can mean that the court has no way of knowing whether the John Smith standing before it is the same John Smith who was arrested four times for assault and so ought to be sent away, or a different John Smith whose clean record entitles him to probation.
The second problem arises from the definition of a “severe” penalty. In California, a juvenile found guilty of murder does indeed serve a longer sentence than an adult convicted of the same offense—60 months for the former, 41 months for the latter. Many people will be puzzled by a newspaper account that defines five years in prison for murder as a “severe” sentence, and angered to learn that an adult serves less than four years for such a crime.
The key, unanswered question is whether prompt and more effective early intervention would stop high-rate delinquents from becoming high-rate criminals at a time when their offenses were not yet too serious. Perhaps early and swift, though not necessarily severe, sanctions could deter some budding hoodlums, but we have no evidence of that as yet.
For as long as I can remember, the debate over crime has been between those who wished to rely on the criminal-justice system and those who wished to attack the root causes of crime. I have always been in the former group because what its opponents depicted as “root causes”—unemployment, racism, poor housing, too little schooling, a lack of self-esteem—turned out, on close examination, not to be major causes of crime at all.
Of late, however, there has been a shift in the debate. Increasingly those who want to attack root causes have begun to point to real ones—temperament, early family experiences, and neighborhood effects. The sketch I gave earlier of the typical high-rate young offender suggests that these factors are indeed at the root of crime. The problem now is to decide whether any can be changed by plan and at an acceptable price in money and personal freedom.
If we are to do this, we must confront the fact that the critical years of a child’s life are ages one to ten, with perhaps the most important being the earliest years. During those years, some children are put gravely at risk by some combination of heritable traits, prenatal insults (maternal drug and alcohol abuse or poor diet), weak parent-child attachment, poor supervision, and disorderly family environment.
If we knew with reasonable confidence which children were most seriously at risk, we might intervene with some precision to supply either medical therapy or parent training or (in extreme cases) to remove the child to a better home. But given our present knowledge, precision is impossible, and so we must proceed carefully, relying, except in the most extreme cases, on persuasion and incentives.
We do, however, know enough about the early causes of conduct disorder and later delinquency to know that the more risk factors exist (such as parental criminality and poor supervision), the greater the peril to the child. It follows that programs aimed at just one or a few factors are not likely to be successful; the children most at risk are those who require the most wide-ranging and fundamental changes in their life circumstances. The goal of these changes is, as Travis Hirschi of the University of Arizona has put it, to teach self-control.
Hirokazu Yoshikawa of New York University has recently summarized what we have learned about programs that attempt to make large and lasting changes in a child’s prospects for improved conduct, better school behavior, and lessened delinquency. Four such programs in particular seemed valuable—the Perry Preschool Project in Ypsilanti, Michigan; the Parent-Child Development Center in Houston, Texas; the Family Development Research Project in Syracuse, New York; and the Yale Child Welfare Project in New Haven, Connecticut.
All these programs had certain features in common. They dealt with low-income, often minority, families; they intervened during the first five years of a child’s life and continued for between two and five years; they combined parent training with preschool education for the child; and they involved extensive home visits. All were evaluated fairly carefully, with the follow-ups lasting for at least five years, in two cases for at least ten, and in one case for fourteen. The programs produced (depending on the project) less fighting, impulsivity, disobedience, restlessness, cheating, and delinquency. In short, they improved self-control.
They were experimental programs, which means that it is hard to be confident that trying the same thing on a bigger scale in many places will produce the same effects. A large number of well-trained and highly motivated caseworkers dealt with a relatively small number of families, with the workers knowing that their efforts were being evaluated. Moreover, the programs operated in the late 1970’s or early 1980’s before the advent of crack cocaine or the rise of the more lethal neighborhood gangs. A national program mounted under current conditions might or might not have the same result as the experimental efforts.
Try telling that to lawmakers. What happens when politicians encounter experimental successes is amply revealed by the history of Head Start: they expanded the program quickly without assuring quality, and stripped it down to the part that was the most popular, least expensive, and easiest to run, namely, preschool education. Absent from much of Head Start are the high teacher-to-child case loads, the extensive home visits, and the elaborate parent training—the very things that probably account for much of the success of the four experimental programs.
In this country we tend to separate programs designed to help children from those that benefit their parents. The former are called “child development,” the latter “welfare reform.” This is a great mistake. Everything we know about long-term welfare recipients indicates that their children are at risk for the very problems that child-helping programs later try to correct.
The evidence from a variety of studies is quite clear: even if we hold income and ethnicity constant, children (and especially boys) raised by a single mother are more likely than those raised by two parents to have difficulty in school, get in trouble with the law, and experience emotional and physical problems.9 Producing illegitimate children is not an “alternative life-style” or simply an imprudent action; it is a curse. Making mothers work will not end the curse; under current proposals, it will not even save money.
The absurdity of divorcing the welfare problem from the child-development problem becomes evident as soon as we think seriously about what we want to achieve. Smaller welfare expenditures? Well, yes, but not if it hurts children. More young mothers working? Probably not; young mothers ought to raise their young children, and work interferes with that unless two parents can solve some difficult and expensive problems.
What we really want is fewer illegitimate children, because such children, being born out of wedlock, are, except in unusual cases, given early admission to the underclass. And failing that, we want the children born to single (and typically young and poor) mothers to have a chance at a decent life.
Letting teenage girls set up their own households at public expense neither discourages illegitimacy nor serves the child’s best interests. If they do set up their own homes, then to reach those with the fewest parenting skills and the most difficult children will require the kind of expensive and intensive home visits and family-support programs characteristic of the four successful experiments mentioned earlier.
One alternative is to tell a girl who applies for welfare that she can only receive it on condition that she live either in the home of two competent parents (her own if she comes from an intact family) or in a group home where competent supervision and parent training will be provided by adults unrelated to her. Such homes would be privately managed but publicly funded by pooling welfare checks, food stamps, and housing allowances.
A model for such a group home (albeit one run without public funds) is the St. Martin de Porres House of Hope on the south side of Chicago, founded by two nuns for homeless young women, especially those with drug-abuse problems. The goals of the home are clear: accept personal responsibility for your lives and learn to care for your children. And these goals, in turn, require the girls to follow rules, stay in school, obey a curfew, and avoid alcohol and drugs. Those are the rules that ought to govern a group home for young welfare mothers.
Group homes funded by pooled welfare benefits would make the task of parent training much easier and provide the kind of structured, consistent, and nurturant environment that children need. A few cases might be too difficult for these homes, and for such children, boarding schools—once common in American cities for disadvantaged children, but now almost extinct—might be revived.
Group homes also make it easier to supply quality medical care to young mothers and their children. Such care has taken on added importance in recent years with the discovery of the lasting damage that can be done to a child's prospects by being born prematurely and at a very low birth weight, having a mother who has abused drugs or alcohol, or being exposed to certain dangerous metals. Lead poisoning is now widely acknowledged to be a source of cognitive and behavioral impairment; of late, elevated levels of manganese have been linked to high levels of violence.10 These are all treatable conditions; in the case of a manganese imbalance, easily treatable.
My focus on changing behavior will annoy some readers. For them the problem is poverty and the worst feature of single-parent families is that they are inordinately poor. Even to refer to a behavioral or cultural problem is to “stigmatize” people.
Indeed it is. Wrong behavior—neglectful, immature, or incompetent parenting; the production of out-of-wedlock babies—ought to be stigmatized. There are many poor men of all races who do not abandon the women they have impregnated, and many poor women of all races who avoid drugs and do a good job of raising their children. If we fail to stigmatize those who give way to temptation, we withdraw the rewards from those who resist them. This becomes all the more important when entire communities, and not just isolated households, are dominated by a culture of fatherless boys preying on innocent persons and exploiting immature girls.
We need not merely stigmatize, however. We can try harder to move children out of those communities, either by drawing them into safe group homes or by facilitating (through rent supplements and housing vouchers) the relocation of them and their parents to neighborhoods with intact social structures and an ethos of family values.
Much of our uniquely American crime problem (as opposed to the worldwide problem of general thievery) arises, not from the failings of individuals but from the concentration in disorderly neighborhoods of people at risk of failing. That concentration is partly the result of prosperity and freedom (functioning families long ago seized the opportunity to move out to the periphery), partly the result of racism (it is harder for some groups to move than for others), and partly the result of politics (elected officials do not wish to see settled constituencies broken up).
I seriously doubt that this country has the will to address either of its two crime problems, save by acts of individual self-protection. We could in theory make justice swifter and more certain, but we will not accept the restrictions on liberty and the weakening of procedural safeguards that this would entail. We could vastly improve the way in which our streets are policed, but some of us will not pay for it and the rest of us will not tolerate it. We could alter the way in which at-risk children experience the first few years of life, but the opponents of this—welfare-rights activists, family preservationists, budget cutters, and assorted ideologues—are numerous and the bureaucratic problems enormous.
Unable or unwilling to do such things, we take refuge in substitutes: we debate the death penalty, we wring our hands over television, we lobby to keep prisons from being built in our neighborhoods, and we fall briefly in love with trendy nostrums that seem to cost little and promise much.
Much of our ambivalence is on display in the 1994 federal crime bill. To satisfy the tough-minded, the list of federal offenses for which the death penalty can be imposed has been greatly enlarged, but there is little reason to think that executions, as they work in this country (which is to say, after much delay and only on a few offenders), have any effect on the crime rate and no reason to think that executing more federal prisoners (who account, at best, for a tiny fraction of all homicides) will reduce the murder rate. To satisfy the tender-minded, several billion dollars are earmarked for prevention programs, but there is as yet very little hard evidence that any of these will actually prevent crime.
In adding more police officers, the bill may make some difference—but only if the additional personnel are imaginatively deployed. And Washington will pay only part of the cost initially and none of it after six years, which means that any city getting new officers will either have to raise its own taxes to keep them on the force or accept the political heat that will arise from turning down “free” cops. Many states also desperately need additional prison space; the federal funds allocated by the bill for their construction will be welcomed, provided that states are willing to meet the conditions set for access to such funds.
Meanwhile, just beyond the horizon, there lurks a cloud that the winds will soon bring over us. The population will start getting younger again. By the end of this decade there will be a million more people between the ages of fourteen and seventeen than there are now. Half of this extra million will be male. Six percent of them will become high-rate, repeat offenders—30,000 more muggers, killers, and thieves than we have now.
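The projection in this closing paragraph is straightforward arithmetic on the figures given:

```python
extra_teens = 1_000_000          # added 14-to-17-year-olds by decade's end
extra_males = extra_teens // 2   # half will be male

# The "6 percent" share of high-rate, repeat offenders applied to the new males:
new_chronic_offenders = extra_males * 6 // 100
print(new_chronic_offenders)  # 30000
```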
1 These comparisons depend on official police statistics. There are of course errors in such data. But essentially the same pattern emerges from comparing nations on the basis of victimization surveys.
3 Many individual programs involve so few subjects that a good evaluation will reveal no positive effect even if one occurs. By a technique called meta-analysis, scores of individual studies can be pooled into one mega-evaluation; because there are now hundreds or thousands of subjects, even small gains can be identified. The best of these meta-analyses, such as the one by Mark Lipsey, suggest modest positive effects.
5 In Charleston, South Carolina, the police pay a reward to anyone identifying a student carrying a weapon to school or to some school event. Because many boys carry guns to school in order to display or brag about them, the motive to carry disappears once any display alerts a potential informer.
6 I take these cost calculations from Mark Kleiman, et al., “Imprisonment-to-Offense Ratios,” Working Paper 89-06-02 of the Program in Criminal Justice Policy and Management at the Kennedy School of Government, Harvard University (August 5, 1988).
7 I anticipate that at this point some readers will call for legalizing or decriminalizing drugs as the “solution” to the problem. Before telling me this, I hope they will read what I wrote on that subject in the February 1990 issue of COMMENTARY. I have not changed my mind.
10 It is not clear why manganese has this effect, but we know that it diminishes the availability of a precursor of serotonin, a neurotransmitter, and low levels of serotonin are now strongly linked to violent and impulsive behavior.
What To Do About Crime
Must-Reads from Magazine
Good intentions, tragic consequences.
Chicago, Illinois — Andy has little time to chitchat. There are hundreds of hot towels to sort and fold, and when that’s done, there are yet more to wash and dry. The 41-year-old is one of half a dozen laundry-room workers at Misericordia, a community for people with disabilities in the Windy City. He and his colleagues, all of whom are intellectually disabled and reside on the Misericordia “campus,” know that their work has purpose, and they delight in each task and every busy hour.
In addition to his job at the laundry room, Andy holds two others. “For two days I work at Sacred Heart”—a nearby Catholic school—“and at Target. Target is a store, a big super-store. At Sacred Heart, I sweep floors and tables.”
“Ah, so you’re the janitor there?” I follow up.
“No, no! I just clean. I love working there.”
Andy’s packed schedule is typical for the higher-functioning residents at Misericordia, many of whom juggle multiple jobs. Their work at Misericordia helps meet real community needs—laundry, recycling, gardening, cooking, baking, and so on—while preparing residents for the private labor market. Andy has already found competitive employment (at Target), but many others rely on Misericordia’s own programs to stay active and employed.
Yet if progressive lawmakers and minimum-wage crusaders have their way, many of these opportunities would disappear, along with the Depression-era law which makes them possible.
The law, Section 14(c) of the Fair Labor Standards Act, permits employers to pay people with disabilities a specialized wage based on their ability to perform various jobs. It thus encourages the hiring of the disabled while ensuring that they are paid a wage commensurate with their productivity. The law safeguards against abuse by, among other things, requiring employers to regularly review and adjust wages as disabled employees make productivity gains. Many of these employers are nonprofit entities that exist solely to provide meaningful work for the disabled.
Only 20 percent of Americans with disabilities participate in the labor force. The share is even smaller among those with intellectual and developmental disabilities. For this group, work isn’t mainly about money—most of the Misericordia residents are oblivious to how much they get paid—so much as it is about purpose and community. What the disabled seek from work is “the feeling of safety, the opportunity to work alongside friends, and an atmosphere of kindness and understanding,” says Scott Mendel, chairman of Together for Choice, which campaigns for freedom of choice for the disabled and their families. (Mendel’s daughter, who has cerebral palsy, lives and works at Misericordia.)
Abstract principles of economic justice, divorced from economic realities and the lived experience of people with disabilities, are a recipe for disaster in this area. Yet that’s the approach taken by too many progressives these days.
Last month, for example, seven Senate progressives led by Elizabeth Warren of Massachusetts wrote a letter to Labor Secretary Alexander Acosta denouncing Section 14(c) for setting “low expectations for workers with disabilities” and relegating them to “second-class” status. The senators also took issue with so-called sheltered workshops, like those at Misericordia, which are specifically designed to help the disabled find pathways to market employment. Activists at the state level, meanwhile, continue to press for the abolition of such programs, and they have already succeeded in restricting or limiting them in a number of jurisdictions, most notably in Pennsylvania, where such settings have been all but eliminated.
While there have been a few, notorious cases of 14(c) and sheltered-workshop abuse over the years, existing law provides mechanisms for punishing firms for misconduct. Getting rid of 14(c) and sheltered workshops, however, could potentially leave hundreds of thousands of disabled people unemployed. Activists have yet to explain what it is they expect these newly jobless to do with their time.
Competitive employment simply isn’t an option for many of the most disabled. And even those like Andy, who are employed in the private economy, tend to work at most 20 hours a week at their competitive jobs. What would they do with the rest of their time, if sheltered workshops didn’t exist? Most likely, they would “veg out” in front of a television. Squeezing 14(c) program and forcing private employers to pay minimum wage to workers whose productivity falls far short of the norm wouldn’t improve the lot of the disabled; it would leave them jobless.
Economic reality is reality no less for the disabled.
Nor have progressives accounted for the effects on the lives of the disabled in jurisdictions that have restricted sheltered workshops. “None of these states have done an adequate job of ascertaining whether these actions actually enhanced the quality of life for the individuals affected,” a study in the Social Improvement Journal concluded last year. Less time in sheltered workshops, the study found, “was not replaced with a corollary increase in the use of more integrated forms of employment.” Rather, “these individuals were essentially unemployed, engaging in made-up day activities.”
Make-work is not what Andy and his colleagues are up to today at Misericordia. They complete real tasks, which benefit their fellow residents in concrete ways. “This work is training, but it also gives them meaning,” one Misericordia director told me. “It’s not just doing meaningless work, but it’s going toward something. We’re not setting them up to do something that someone else takes apart. This is something that’s needed.” Yet, in the name of economic justice, progressives are on the verge of depriving men and women like Andy of the dignity of work and the freedom of choice that non-disabled Americans take for granted.
Choose your plan and pay nothing for six Weeks!
For a very limited time, we are extending a six-week free trial on both our subscription plans. Put your intellectual life in order while you can. This offer is also valid for existing subscribers wishing to purchase a gift subscription. Click here for more details.
Reminding voters what Democratic governance means.
To paraphrase New York Times columnist Ross Douthat (with apologies), the less Republicans do in office, the more popular they generally become. That is, when the GOP exists solely in voters’ minds as a bulwark against cultural and political liberalism, it can cobble together a winning coalition. Likewise, Democrats regain the national trust when they serve only as an obstacle to Republican objectives. It’s when both parties begin to talk about what they want to do with their power that they get into trouble.
That is an over-simplification, but the core thesis is an astute one. In an age of negative partisanship and without an acute foreign or domestic crisis to focus the national mind, it’s not unreasonable to presume that both parties’ chief value is defined in negative terms by the public. Considering how little of the national dialogue has to do with policy these days, general principles and heuristics are probably how most marginal voters navigate the political environment.
Somewhere along the way, though, Democrats managed to convince themselves that they cannot just be the anti-Donald Trump party. Their most influential members have become convinced that the party needs to articulate a positive agenda beyond a set of vague principles. For the moment, Democrats who merely want to present themselves as unobjectionable alternatives to Trumpism without going into much broader detail appear to be losing the argument.
According to a study of campaign-season advertisements released on Friday by the USA Today Network and conducted by Kantar Media’s Campaign Marketing Analysis Group, Democrats are not leaning into their opposition to Trump. While over 44,000 pro-Trump advertisements from Republican candidates have aired on local broadcast networks, only about 20,000 Democratic ads have highlighted a candidate’s anti-Trump bona fides. “Trump has been mentioned in 27 [percent] of Democratic ads for Congress, overwhelmingly in a negative light,” the study revealed. In the same period during the 2014 midterm election cycle, by contrast, 60 percent of Republican advertisements featured President Barack Obama in a negative light.
There are plenty of caveats that should prevent observers from drawing too many broad conclusions about what this means. First, comparing the political environment in 2018 to 2014 is apples and oranges. Recall that 2014 was Barack Obama’s second midterm election, so naturally enthusiasm among the incumbent party’s base to rally to the president’s defense wanes while the “out-party’s” anxiety over the incumbent president grows. If Donald Trump’s job-approval rating is still anemic in September, it is reasonable to expect that Republican candidates will soft-peddle their support for the president just as Democrats did in 2010. Second, Democrats running against Democrats in a Democratic primary race may not feel the need to emphasize their opposition to the president, since that doesn’t create a stark enough contrast with their opponent.
And yet, the net effect of the primary season is the same. Democrats aren’t just informing voters of their opposition to how Trump and the Republican Party have managed the nation’s affairs; they’re describing what they would do differently. By and large, the Democratic Party’s agenda consists of “doubling” spending on social-welfare programs, education, and infrastructure, and promising a series of five-year-plan prestige projects. But Democratic candidates are also leaning heavily into divisive social issues.
The themes that Democratic ads have embraced so far range from support for new gun-control measures (“f*** the NRA,” was one New Mexico candidate’s message), to protecting public funding for Planned Parenthood, to promoting support for same-sex marriage rights, to attacking Sinclair Broadcasting (which happened to own the network on which that particular ad ran). A number of Democratic candidates are running on their support for a single-payer health-care system, including the progressive candidate in Nebraska’s GOP-leaning 2nd Congressional District who narrowly defeated an establishment-backed former House member this week, putting that seat farther out of the reach of Democrats in November.
In the end, messages like these animate the Democratic Party’s progressive base, but they have the potential to alienate swing voters. That may not be enough to overcome the electorate’s tendency to reward the “out-party” in a president’s first midterm election. And yet, the risk Democrats run by being specific about what they actually want to do with renewed political power cannot be dismissed. Democrats in the activist base are convinced that embracing conflict-ridden identity politics is a moral imperative, and the party’s establishmentarian leaders appear to believe that being anti-Trump is not enough to ensure the party’s success in November. All the while, the Democratic Party’s position in the polls continues to deteriorate.
Meritocracy is in the eye of the beholder.
A running theme in Jonah Goldberg’s fantastic new book, Suicide of the West, is the extent to which those who were bequeathed the blessings associated with classically liberal capitalist models of governance are cursed with crippling insecurity. Western economic and political advancement has followed a consistently upward trajectory, albeit in fits and starts. Yet, the chief beneficiaries of this unprecedented prosperity seem unaware of that fact. In boom or bust, the verdict of many in the prosperous West remains the same: the capitalist model is flawed and failing.
Capitalism’s detractors are as likely to denounce the exploitative nature of free markets during a downturn as they are to lament the displacement and disorientation that follows when the economy roars. The bottom line is static; only the emphasis changes. Though this tendency is a bipartisan one, capitalism’s skeptics are still more at home on the left. With the lingering effects of the Great Recession all but behind us, the liberal argument against capitalism’s excesses has shifted from mitigating the effects on low-skilled workers to warnings about the pernicious effects of prosperity.
Matthew Stewart’s expansive piece in The Atlantic this month is a valuable addition to the genre. In it, Stewart attacks the rise of a permanent aristocracy resulting from the plague of “income inequality,” but his argument is not a recitation of the Democratic Party’s 2012 election themes. It isn’t just the mythic “1 percent” (or, in the author’s estimation, the “top 0.1 percent”) but the top 9.9 percent that has not only accrued unearned benefits from capitalist society but has fixed the system to ensure that those benefits are hereditary.
Stewart laments the rise of a new Gilded Age in America, which is anecdotally exemplified by his own comfort and prosperity—a spoil he appears to view as plunder stolen from the blue-collar service providers he regularly patronizes. You see, he is a member of a new aristocracy, which leverages its economic and social capital to wall itself off from the rest of the world and preserve its influence. He and those like him have “mastered the old trick of consolidating wealth and passing privilege along at the expense of other people’s children.” This corruption and Stewart’s insecurity are, he contends, products of consumerism. “The traditional story of economic growth in America has been one of arriving, building, inviting friends, and building some more,” Stewart wrote. “The story we’re writing looks more like one of slamming doors shut behind us and slowly suffocating under a mass of commercial-grade kitchen appliances.”
Though he diverges from the kind of scientistic Marxism reanimated by Thomas Piketty, Stewart nevertheless appeals to some familiar Soviet-style dialectical materialism. “Inequality necessarily entrenches itself through other, nonfinancial, intrinsically invidious forms of wealth and power,” he wrote. “We use these other forms of capital to project our advantages into life itself.” In this way, Stewart can have it all. The privilege enjoyed by the aristocracy is a symptom of Western capitalism’s sickness, but so, too, are the advantages bestowed on the underprivileged. Affirmative action programs in schools, for example, function in part to “indulge rich people in the belief that their college is open to all on the basis of merit.”
It goes on like this for another 13,000 words and thus has the strategic advantage of being impervious to comprehensive rebuttal outside of a book. Stewart does make some valuable observations about entrenched interests, noxious rent-seekers, and the perils of empowering the state to pick economic winners and losers. Where his argument runs aground is in the claim that meritocracy in America is an illusion. The idea that capitalism is a brutal zero-sum game, in which true advancement is rendered unattainable by unseen forces, has long been a foundational plank of the liberal American ethos. This is not new. Not new at all.
Much of Stewart’s thesis can be found in a 2004 report in The Economist, which alleged that the American upper middle class had created a set of “sticky” conditions that preserve its status and result in what Teddy Roosevelt warned could become an American version of a “hereditary aristocracy.” In 2013, the American economist Joseph Stiglitz warned that the American dream is dead and that the notion that the United States is a place of opportunity is a myth. “Since capitalism required losers, the myth of the melting pot was necessary to promote the belief in individual mobility through hard work and competition,” read a line from a 1973 edition of a handbook for teachers issued by the National Council for the Social Studies. The Southern Poverty Law Center, which for some reason produces a curriculum for teachers, has long recommended that educators advise students that poverty is a result of systemic factors and not individual choices. Even today, a cottage industry has arisen around the notion that Western largess is decadence, that meritocracy is a myth, and that arguments to the contrary are acts of subversion.
The belief that American meritocracy is a myth persists despite wildly dynamic conditions on the ground. As the Brookings Institution noted, 60 percent of employed black women in 1940 worked as household servants, compared with just 2.2 percent today. Between 1940 and 1970, “black men cut the income gap by about a third,” wrote Abigail and Stephan Thernstrom in 1998. The black professional class, ranging from doctors to university lecturers, exploded in the latter half of the 20th century, as did African-American home ownership and life expectancy rates. The African-American story is not unique. The average American income in 1990 was just $23,730 annually. Today, it’s $58,700—a figure that well outpaces inflation and that outstrips most of the developed world. The American middle class is doing just fine, but that experience has not come at the expense of Americans at or near the poverty line. As the economic recovery began to take hold in 2014, poverty rates declined precipitously across the board, though that effect was more keenly felt by minority groups, which recovered at faster rates than their white counterparts.
As National Review’s Max Bloom pointed out last year, 13 of the world’s top 25 universities and 21 of the world’s 50 largest universities are located in America. The United States attracts substantial foreign investment, inflating America’s much-misunderstood trade deficit. The influx of foreign immigrants and legal permanent residents streaming into America looking to take advantage of its meritocratic system rivals or exceeds immigration rates at the turn of the 20th Century. You could be forgiven for concluding that American meritocracy is self-evident to all who have not been informed of the general liberal consensus. Indeed, according to an October 2016 essay in The Atlantic by Victor Tan Chen, the United States so “fetishizes” meritocracy that it has become “exhausting” and ultimately “harmful” to its “egalitarian ideals.”
Stewart is not wrong that there has been a notable decline in economic mobility in this decade. That condition is attributable to many factors, ranging from the collapse of the mortgage market to the erosion of the nuclear family among lower- to middle-class Americans (a charge supported by none-too-conservative venues like the New York Times and the Brookings Institution). But Stewart will surely rejoice in the discovery that downward economic mobility is alive and well among the upper class. National Review’s Kevin Williamson observed in March of this year that the Forbes billionaires list includes remarkably few heirs to old money. “According to the Bureau of Labor Statistics, inherited wealth accounts for about 15 percent of the assets of the wealthiest Americans,” he wrote. Moreover, that list is not static; it churns, and that churn is reflective of America’s economic dynamism. In 2017, for example, “hedge fund managers have been displaced over the last two years not only by technology billionaires but by a fish stick king, meat processor, vodka distiller, ice tea brewer and hair care products peddler.”
There is plenty to be said in favor of America’s efforts to achieve meritocracy, imperfect as those efforts may be. But so few seem to be touting them, preferring instead to peddle the idea that the ideal of success in America is a hollow simulacrum designed to fool its citizens into toiling toward no discernible end. Stewart’s piece is a fine addition to a saturated marketplace in which consumers are desperate to reward purveyors of bad news. Here’s to his success.
Podcast: Donald Trump Jr. moves the ball forward.
We try, we really do try, to sort through the increasingly problematic “Russian collusion” narrative and establish a timeline of sorts—and figure out what’s real and what’s nonsense. Do we succeed? Give a listen.
An immigrant from Italy, Morais had taught himself English using the King James Bible. Few Americans spoke in this manner, Abraham Lincoln included. Three days later, the president himself reflected before an audience: “How long ago is it?—eighty-odd years—since on the Fourth of July for the first time in the history of the world a nation by its representatives assembled and declared as a self-evident truth that ‘all men are created equal.’” Only several months later, at the dedication of the Gettysburg cemetery, would Lincoln refer to the birth of our nation in Morais’s manner, making “four score and seven years ago” one of the most famous phrases in the English language and thereby endowing his address with a prophetic tenor and scriptural quality.
This has led historians, including Jonathan Sarna and Marc Saperstein, to suggest that Lincoln may have read Morais’s sermon, which had been widely circulated. Whether or not this was so, the Gettysburg Address parallels Morais’s remarks in that it, too, joins mourning for the fallen with a recognition of American independence, allowing those who had died to define our appreciation for the day that our “forefathers brought forth a new nation conceived in liberty.” Lincoln’s words stressed that a nation must always link civic celebration of its independence with the lives given on its behalf. Visiting the cemetery at Gettysburg, he argued, requires us to dedicate ourselves to the unfinished work that “they who fought here have thus far so nobly advanced.” He went on: “From these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion,” thereby ensuring that “these dead shall not have died in vain.”
The literary link between Morais’s recalling of Jerusalem and Lincoln’s Gettysburg Address makes it all the more striking that it is the Jews of today’s Judea who make manifest the lessons of Lincoln’s words. Just as the battle of Gettysburg concluded on July 3, Israelis hold their Memorial Day commemorations on the day before their Independence Day celebrations. On the morning of the Fourth of Iyar, a siren sounds throughout the land, with all pausing their everyday activities in reverent memory of those who had died. There are few more stunning images of Israel today than those of highways on which thousands of cars grind to a halt, all travelers standing at the roadside, and all heads bowing in commemoration. Throughout the day, cemeteries are visited by the family members of those lost. Only in the evening does the somber Yom Hazikaron give way to the joy of the Fifth of Iyar’s Yom Ha’atzmaut, Independence Day. For anyone who has experienced it, the two days define each other. Those assembled in Israel’s cemeteries facing the unbearable loss of loved ones do so in the knowledge that it is the sacrifice of their beloved family members that makes the next day’s celebration of independence possible. And the celebration of independence is begun with the acknowledgment by millions of citizens that those who lie in those cemeteries, who gave “their last full measure of devotion,” obligate the living to ensure that the dead did not die in vain.
The American version of Memorial Day, like the Gettysburg Address itself, began as a means of decorating and honoring the graves of Civil War dead. It is unconnected to the Fourth of July, which takes place five weeks later. Both holidays are observed by many (though not all) Americans as escapes from work, and too few ponder the link between the sacrifice of American dead and the freedom that we the living enjoy. There is thus no denying that the Israelis’ insistence on linking their Independence Day celebration with their Memorial Day is not only more appropriate; it is more American, a truer fulfillment of Lincoln’s message at Gettysburg.
In studying the Hebrew calendar of 1776, I was struck by the fact that the original Fourth of July, like that of 1863, fell on the 17th of Tammuz. It is, perhaps, another reminder that Gettysburg and America’s birth must always be joined in our minds, and linked in our civic observance. It is, of course, beyond unlikely that Memorial Day will be moved to adjoin the Fourth of July. Yet that should not prevent us from learning from the Israeli example. Imagine if the third of July were dedicated to remembering the battle that concluded on that date. Imagine if “Gettysburg Day” involved a brief moment of commemoration by “us, the living” for those who gave the last full measure of devotion. Imagine if tens—perhaps hundreds—of millions of Americans paused in unison from their leisure activities for a minute or two to reflect on the sacrifice of generations past. Surely our observance of the Independence Day that followed could not fail to be affected; surely the Fourth of July would be marked in a manner more worthy of a great nation.