Growing Up Crowded
Before the late 1940's, the annual number of live births in the United States was one of our most stable statistics. For 21 of the 37 years from 1909 through 1945, that number was between 2.75 and 3 million. During the depression decade of 1931-40, which produced the entrants to maturity of the years soon after World War II, the number of live births ranged from 2.3 million to 2.56 million. Then, in 1946, there were 3.4 million births; the next year the number jumped further, to 3.8 million, and in 1953 it crossed the 4-million mark. School enrollment, which in the period 1920-50 had risen only from 23 million to 28 million, jumped to 36 million in 1956 and 44 million in 1962.
Eventually, inexorably, these huge “cohorts” moved into the job market. At the peak period of arms production in World War II, employment in America had topped out at 54 million; at war's end Henry Wallace, the McGovern of his day, was calling for an impossible target of 60 million jobs. Yet by early 1975, an American economy that was supplying 84 million jobs (one million more than two years before) saw unemployment rise to the frightening level of 9 per cent and faced the need to generate 2 million new jobs every year to keep the unemployment rate from rising even higher.
These are developments of earthquake dimensions in social and economic history, and if the upward trend of birth rates had proved a permanent phenomenon in American society, as the Census Bureau was predicting only yesterday, we would soon be facing exquisitely difficult decisions about the role of government in restricting production, resettling surplus populations, and allocating access to health services, recreational facilities, and ultimately water and perhaps food. But in the mid-1970's the ratio between the number of births and the number of women of child-bearing age dropped so low that in the absence of immigration the nation will eventually begin to lose population: the birth cohorts of 1973 and 1974 were down to a point below 3.2 million, the pre-1946 mark. What we have, then, is a single twenty-year bulge of very large age cohorts, moving along the snake of time as an indigestible mass that will in each period of its maturity distort the functioning of the relevant institutions of American society.
The extent of the crisis represented by this sudden bulge—indeed, despite the talk of a “population explosion,” the very fact that there was a crisis—was masked by technology and affluence. In earlier times, so rapid a growth of population (especially non-productive population) would have put intolerable pressure on the food supply and on sanitary facilities. “Unless an emigration takes place,” Thomas Malthus said in articulating the “laws” of population increase, “the deaths will shortly exceed the births. . . . Where there are no other depopulating causes, every country would, without doubt, be subject to periodical pestilences or famines.” Instead, food production expanded so rapidly in America that diet actually improved, and medical technology reduced the incidence of disease so effectively that these huge age cohorts grew up with a sense of personal invulnerability never known before. As Dr. Lewis Thomas has put it: “Until a few decades ago . . . we moved, with our families, in and out of death. We had lobar pneumonia, meningococcal meningitis, streptococcal infections, diphtheria, endocarditis, enteric fevers, various septicemias, syphilis, and always, everywhere, tuberculosis. Most of these have now left us, thanks to antibiotics, plumbing, civilization, and money. But we remember.” The young, now young adults, no longer even have such memories. Polio had been banished before they could read the newspapers.
In some areas other than food and medicine, however, the young of the twenty-year bulge, and their families, did not fare so well; moreover, those born in the first decade of the bulge (and their families) fared considerably less well than those of the second decade. Americans born in the years 1946-56 started life in hospital corridors, were mothered in too-small apartments or suburban boxes, and schooled on double sessions in crowded classrooms. Those who went to college were likely to be processed through facilities originally planned for perhaps half as many students; those who did not go to college often found themselves in that great waste-barrel of the latter 1960's, the U.S. Army in Vietnam. Reaching the age when they could open their mouths as a group, they said, not surprisingly, that the “system” did not work right any more.
Both business and government were initially sluggish in responding to the needs of the expanding American family of 1946-56.¹ It is, I think, significant that the first major penetration by Japanese manufactured goods in the American economy took the form of products for the youth market: motorcycles, transistor radios, portable phonographs, cheap guitars. But once we moved, we moved far. In the fifteen years from 1931 through 1945, we had built only 4.6 million new dwelling units. In the five years 1946 through 1950 we added another 4.6 million, and in the five years 1951 through 1956 we built another 5.9 million. In the two years 1971 and 1972, when the first-decade children of the baby boom began to need homes of their own, more new housing was added to the American stock than had been built in the entire 15-year period of 1931-45.
Similarly with education: expenditures on the public schools rose from less than $3 billion in 1946 to almost $6 billion in 1950, to $11 billion in 1956, and crossed the $20 billion mark in 1963. By 1972, educators were claiming that the schools were starved though their budgets were over $50 billion a year. Inflation accounts for only a relatively small part of these increases—in 1946 we were spending 1.5 per cent of the Gross National Product on public schools; by 1972 we were spending 4.7 per cent.
As a consequence of all this activity, the young born in the second half of the twenty-year bulge were much better served by society than their older brothers and sisters; they are, not surprisingly (only the newspapers are surprised), much less difficult to manage, much less disenchanted with the “system.” But there are still too many of them, and there will be trouble for them too down the road.
The most significant influence exerted on society by the existence of these extraordinary age cohorts was the crime wave, for among the characteristics of the young is that they get into trouble with the law. The Children's Bureau has estimated that between the ages of ten and seventeen as much as a fifth of all boys will make some contact with a juvenile court. Three-fifths of all serious crime in America is committed by adolescents and young men between the ages of fifteen and twenty-four—a group that grew by 50 per cent between 1960 and 1970.
It seems fair to say that we handled this situation badly. At precisely the time when it was most important to convince the young that criminal activity would be a losing proposition, our criminologists abandoned this goal entirely and went chasing after the butterfly of rehabilitation; the leading textbook in criminology haughtily dismissed the idea of deterrence as “simply a derived rationalization of revenge.” Meanwhile, our appellate courts wholeheartedly adopted the philosophy Roscoe Pound described in 1906 as “the sporting theory of justice.” (“The inquiry is not,” he said, “What do substantive law and justice require? Instead, the inquiry is: Have the rules of the game been carried out strictly?”) Nothing could have been more damaging to the socialization of the young than this transfer of attention from the question of whether or not a wrong has been committed (a question even the most poorly brought up have to confront occasionally at home and among friends) to the question of whether those making the accusations have rigorously followed every by-way of approved procedure.
The social effect of reforms in the juvenile-court system and in criminal procedure was that an entire generation, huge in number, lost the sense that, in the words of Francis Allen, “the criminal law has a general preventive function . . . that the influence of criminal sanctions on the millions who never engage in serious criminality is of greater social importance than their impact on the hundreds of thousands who do.” Deprived of support from the criminal-justice system—or from the great organs of opinion, which through the 1960's looked rather indulgently on young, especially black, crime—the home, the school, and the church all lost ground to the street. It is no more than guesswork to say that a concerted attack on the dominance of the street would have prevented the present situation in the urban slums, where terror survives the ruin it has gorged—the break-up of families in a transplanted rural community stunned by the anonymity of the cities might well have blunted the most calculated intervention by the most tough-minded government. I cannot believe, though, that we would be worse off than we are.
In any case, the street has won, and not until the mid-1980's will diminishing age cohorts give us a chance to do more than establish a foundation for the rebuilding of urban institutions. Long after today's urban young have grown up and moved to the suburbs, blighted areas of our central cities will testify to the harm done by the expansion simultaneously of the numbers of young men and of the extent of their “rights.”
Less important than the growth of crime but more visible as a function of the baby boom was the exaggeration of that cult of youth which has never been entirely absent from the American scene. For the adult members of an immigrant family, it has always been a kind of death to leave home and come to the New World; the justification has been opportunity for one's children. Writing at the high point of immigration to America, John Dewey looked ahead to a “century of the child.” The moment arrived fifty years later.
In the accelerating affluence of the 1960's, the young acquired increasing purchasing power. They used it to buy psychic space that would be all their own: what those who pandered to them, for commercial or personal reasons, called a counterculture. Through the 1950's and 1960's, huge numbers of the young grew up in a state of virtual isolation from the productive economic life of the society. Very few had more than the remotest idea of what their fathers (and mothers) did for a living; it had long been the exception rather than the norm for a son to plan to follow his father's occupation, and the idea of rising through apprenticeship had long been discarded in favor of extended schooling outside the home.
The reasons for this are complicated and varied. Americans have always believed in education as a thing in itself and in longer years of schooling as an independent value; Dewey proposed that the extension of childhood and youth was itself a measure of the degree of civilization of a society. In addition, as the baby boom neared the age of employment, labor unions became insistent on the need to keep young people out of the job market as long as possible. Seniority provisions were strengthened in all union contracts to guarantee that the young of the 1970's could not, like the young of all previous periods, make a place for themselves by competing against the older generation. Pressure for increases in the minimum wage was brought on legislatures partly for their indirect benefit to union members (all of whom were working for scales above the minimum wage, yet stood to benefit from a general upward push), but mostly to strengthen the protection of the existing work force against what would inevitably be cut-rate competition. A high minimum wage held down youth employment because the average adolescent—lacking work habits and experience, apt to change jobs frequently—was not worth what the government said he had to be paid. In the absence of a pull from the job market, there was little reason not to stay in school (when the economy overheated in 1972, first-time college enrollment dipped substantially; in the mounting recession of fall 1974, first-time enrollment rose). And until the end of the military draft, of course, college was much favored as the most desirable way to postpone, and perhaps avoid, military service and the threat of Vietnam.
But the prime reason so many young people continued into college was the incessantly reiterated and widely advertised promise that a college education was the best, even the only, route to a white-collar future and a good income. For some years the American Council on Education has conducted a large-scale survey of the attitudes of incoming freshmen at the nation's colleges and universities. In the version of this opinion poll employed in 1971, ACE asked the new arrivals why they were going to college, and by an overwhelming margin, especially among the males, the first answer was “to get a better job.” (The question was not asked again in later years, presumably because the response was considered unsatisfactory.)
The effort to keep youngsters in school was remarkably successful. James Coleman reports that “while the population of sixteen- to nineteen-year-olds increased between 1957 and 1970 by 6 million, the ‘not enrolled in school’ labor force component of this age group increased by only 0.6 million. Similarly, in the twenty- to twenty-four-year-old age group, which increased by 6.5 million between 1960 and 1970, the ‘not enrolled’ labor force increased by only 2 million in the same period.” Every year since 1970, however, the problem has become less manageable, partly because the young do become older and dribble out of even the most extended educational program, partly because the numbers of young are still increasing. The first 4-million age cohort turned twenty in 1973, and from then till 1984 we shall have to absorb, every year, 80 per cent more twenty-year-olds than we had to absorb in any year of the 1950's.
Compounding this problem is the fact that the one large section of the educational enterprise that did give specific career direction was wrong in its choice. The first purpose of American higher education had always been teacher training, and to the extent that secondary education was a preparation for college, teacher training was part of its purpose, too. In the 1950's and 1960's, education was the most rapidly expanding growth industry in the country, and those involved in it quite naturally prepared for continuing increases in employment. The momentum of this drive was such that the number of teachers continued to rise after the falling birth rate had started to reduce the number of pupils in the schools. Over the period 1962-63 to 1972-73, the average daily register at public elementary and secondary schools rose only 17.2 per cent, from 38.6 million to 45.2 million, while the total instructional staff in the schools rose 42.3 per cent, from 1.65 million to 2.35 million. Well into the 1970's, colleges of education simply ignored the declining birth rate, and continued to train larger and larger numbers of students for careers as teachers. The bubble burst with the class of 1973: in spring 1974, no fewer than 128,000 young teachers who had been licensed the previous June had still not been able to find jobs in schools. Another 110,000 joined their ranks in June 1974.
On the college level, the discrepancy between expectable demand and planned supply was even more dramatic—and even less excusable, for Alan Cartter of the American Council on Education had publicly demonstrated as early as 1965 that by the 1980's the need for new Ph.D's to teach in colleges would drop to zero. (In fact, it will drop below zero: on Cartter's 1972 projections, which by 1974 looked optimistic, the colleges would have too many professors in 1985-88 even if they hired nobody at all during those years.) Nevertheless, the output of new Ph.D's rose from fewer than 10,000 a year in 1960 to more than 34,000 in 1972. A reasonable guess at mid-decade is that half of the graduate students who hope to use their Ph.D. as an entry to teaching on the college level will find themselves disappointed—and the proportion of the unlucky will rise in the 1980's.
Elephantiasis was everywhere in American higher education in the early 1970's. By the middle of the decade, the law schools were turning out new lawyers at a rate of 25,000 a year, up from fewer than 10,000 a year in the 1950's—even though the spread of no-fault insurance promised to take away from the legal profession that quarter of its income traditionally derived from automobile-accident cases. But the real tragedy lies on the undergraduate college level, where millions of young Americans have been drawn by hopes that cannot be realized—and other millions will suffer for it.
Since the latter 1960's, roughly half of each age cohort has gone to college, up from roughly one quarter in the mid-1950's. By the most charitable estimates of the Labor Department, however, only 13 to 14 per cent of the new jobs that will be available in the fourth quarter of the century will be jobs of the kind that have traditionally required a college education, which means that half the college graduates (not to mention all the dropouts) will find less reward for their efforts than they had been led to anticipate.
Concern has been voiced in recent decades over the possibility that American colleges would produce an overeducated, unemployed intellectual proletariat. This has not happened, and it will not happen. What has been happening is something worse: jobs that were once available without educational credentials have been redefined to require them, in order to assure college graduates (by definition, the children of the middle class) a protected market of sufficient size. The price is paid by those who do not go to college (predominantly the children of the working class), whose opportunities are artificially and quite unnecessarily restricted. The headlong rush to a credentialed society was checked to a degree by the civil-rights movement, representing mostly a black community of lesser educational attainments; and in the Duke Power case in 1971, the Supreme Court held that educational criteria could not be used in determining eligibility for employment in the absence of some reasonable demonstration that the education was required by the job. In a pinch, however, such demonstrations can be supplied for enough jobs to insure the continued growth of credentialism—and the colleges, faced with the prospect of diminished enrollment from the smaller age cohorts of the future, must intensify both their overt and their subtle efforts to convince government licensing bodies, large employers, and the young that a college education is a necessity.
Arriving at maturity, then, the members of the huge age cohorts of the young confront an economy in which there are not enough jobs to go around, and nowhere near enough of the kinds of jobs considered proper for a liberally educated person to absorb the supply of newcomers certified as liberally educated. The solution for the educated will be the preemption of a higher proportion of the available jobs for their exclusive occupancy; for large numbers of those who started off unlucky in nature or nurture, there will be no solution.
A surplus of labor from a high birth rate and the eviction of the peasantry from the farms created in Britain the long, gradual deflation of the second half of the 19th century. This peaceful, profitless prosperity produced the modern world in more ways than one: observing it stimulated in Karl Marx his peculiar vision of the future of capitalism. Competition for jobs, Marx noted, drove down money wages (neither depressing nor raising the low standard of living, because the gradually reduced pay envelope continued to buy about as much as before). Meanwhile, the price of interest-bearing assets tended to rise, reducing the apparent “profit” on capital, because the value of a given quantity of earnings was ever-increasing. Marx saw an inevitable squeeze on the working classes, coupled with an “iron law” of diminishing profits, until capitalism collapsed in a crisis of underconsumption. Later, when none of this happened, Lenin amended the theory to show how imperialist policies, by creating markets in foreign countries, could postpone the crisis of underconsumption, though only by further impoverishing the workers of the home country. This was wrong too; but its corollary—that imperialism benefited the people of the colonized countries more than the people of the home country—did prove out; it is not much talked about these days.
The legacy of Malthus and Marx and other 19th-century economists has made it difficult for today's analysts of social phenomena to understand the changes that affluence and technology have wrought in the pressures that unusually large numbers of young people now apply to their community. It is taken for granted today that each new arrival must be provided with food, clothing, and shelter to a minimum standard that would have seemed notable luxury in the 19th (or early 20th) century. A high birth rate thus tends, within a few years, to increase substantially the quantity of national product that must be consumed here and now rather than saved for the future. Meanwhile, technology requires that each new job be backed by capital investments of a size that would have been unimaginable as recently as a generation ago. Assuming the persistence of women's propensities to work for a living, the United States over the next decade will have to add 2 million new jobs a year simply to keep an already high unemployment rate from rising further. In 1975 dollars, the average capital cost per job is something over $40,000. Thus an annual investment of $80 billion—6 per cent of the net national income—will be required merely to absorb the new workers and maintain the average real income of members of the work force. (Jobs created without the minimum capital investment presumably yield less output per man-hour worked, thus reducing the workers' average product and, unavoidably, average income.) In the absence of effective conscious or automatic procedures for long-range decision-making, the simultaneous demand for increased consumption and increased capital accumulation generates an accelerating inflation.
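The arithmetic behind this paragraph can be checked directly. A minimal sketch in Python: the 2-million-jobs and $40,000-per-job figures are the essay's own, and the net national income is my derived inference from its 6-per-cent claim, not a figure the essay states.

```python
# Back-of-the-envelope check of the job-creation arithmetic (1975 dollars).
new_jobs_per_year = 2_000_000    # jobs needed annually to hold unemployment steady
capital_cost_per_job = 40_000    # average capital investment per job, per the essay

# Annual investment required merely to equip the new workers:
annual_investment = new_jobs_per_year * capital_cost_per_job
print(f"Required annual investment: ${annual_investment / 1e9:.0f} billion")

# If that sum is 6 per cent of net national income, the implied income base is:
implied_net_national_income = annual_investment / 0.06
print(f"Implied net national income: ${implied_net_national_income / 1e12:.2f} trillion")
```

The two print statements recover the essay's $80 billion figure and an income base of roughly $1.33 trillion, consistent with the mid-1970's American economy.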
In the conflict of interest among generations, it seems clear enough that inflation benefits the young at the expense of their elders. Much of the wealth of the community is held in the form of titles to money—savings accounts, government bonds, insurance policies. Inflation depreciates the real value of these titles to money, few of which are owned by the young. Though alert entrepreneurs and speculators can make a good thing of inflation, most already established citizens are losers. Greater shares of the national product will go to current earnings rather than to stored wealth—and the young can fight with some success for their just proportion of the former. Easily available consumer credit allows the young to accumulate what the economist Harry Johnson has called “consumer capital”—an automobile, appliances in the kitchen and laundry room, even a house—at bargain prices, because the loan taken when the purchase is made can be paid off from ever-rising salaries in money of ever-depreciating value. Interest rates rise, but not that much (especially on consumer loans, which are politically sensitive). The fact that inflation benefits the debtor class at the cost of the creditor class means that it benefits the young at the expense of their parents. Thus very large cohorts produce continuing political pressure for inflationary policies.
Unfortunately, an inflation-prone society is an uncomfortable environment in which to live. As they achieve full maturity, today's young will find themselves deprived of that satisfying sense of accomplishment and security which has historically made early middle age “the prime of life.” Indeed, about the only time one can honestly predict happiness for most of the children we brought into the world in such numbers in the 1950's is their later middle age, when their own children will no longer be a burden, the housing and capital accumulations of the past will be entirely adequate, high personal consumption will become socially desirable, and it will be truly practicable to clean up the environment because the capital investment funds required for this purpose will no longer be needed for job creation. The time for “no growth” will come when there is no growth.
The story of the bulge in the demographic snake has an appropriately gray ending. “One of the most radical changes” in modern life, Marion Levy has written, is that “we take it for granted that practically everyone will survive into senility.” By 1975, more than a tenth of the American population was over sixty-five years old, as against only 4 per cent in 1900. In what may turn out to be the most dramatically unfair action against the outsize cohorts of the twenty-year bulge, their parents in 1971 rewrote the social-security laws to assure themselves a highly comfortable retirement at four to five times the monthly income that social security yields today—an income based not on the social-security taxes they were paying themselves but on the taxes that will be paid by the larger work force now coming on. But when the current demographic bulge reaches retirement age, its successors will be too few to support so many old people in so fine a style: the unfunded liabilities of the social-security system will then total well over $2 trillion of today's dollars. Assuming only minor growth in longevity from all the medical research now in progress, a social-security tax rate approaching 30 per cent of the national payroll will then be necessary in order to meet the payouts prescribed by law, and nobody believes that social-security taxes of those dimensions will actually be assessed.
The work of reneging on the promises has already begun. Early in 1975, the Social Security Advisory Council recommended changes in the benefit provisions of the law that would not take effect until after the year 2000. Their impact would be to reduce by about 30 per cent the benefits which those retiring after that year are due to receive, and to postpone the permissible age of retirement with full entitlements. By 2011, when the children born immediately after World War II reach sixty-five, the age of retirement would be set at sixty-six, according to the new recommendations; by 2018, when the first 4-million cohort reaches sixty-five, the age of retirement would rise to sixty-seven; by 2024, when the peak cohort of 1959 will be sixty-five, the age of retirement would be sixty-eight. Informed that these changes were being recommended, the Congressmen most closely involved with social-security legislation reacted with outrage—and no doubt the law will not be changed for some years. But change will almost certainly come, because the only alternative is an immediate, substantial increase in today's social-security taxes, and nobody will vote for that—not today's middle-aged, who would get nothing from it, and not today's young, whose time horizons are nowhere near long enough to encompass the prospect of their own retirement.
I say “almost certainly,” because this generation does have one advantage that will persist: in a democratic society, big age cohorts have big voting strength. More so than any ethnic group, the aged can be welded into a united political force; today's elderly community, much smaller than the prospective retirement cadre of the early 21st century, is already beginning to demonstrate remarkable clout in defining political issues for its own benefit. Of course, we may not get so severe a demographic imbalance between old and young as a straight-line projection of the birth cohorts of the last years would predict. Birth rates are still cyclical; and more than half the girls born in our baby boom have not yet reached the age of reproduction. For the number of live births in the United States to remain at the low 1974 figure of 3.2 million a year would require in the 1980's a birth rate of about 80 a year for each 1,000 women between the ages of eighteen and thirty-eight, which would be only half the rate in their parents' generation and 30 per cent below even today's historic lows. Still, it does seem unlikely that America will again in the foreseeable future produce age cohorts the size of those born in the period 1946-66. Reflecting both selfish and altruistic motives, the conservation ethic will grow stronger in the years ahead; contraception will be increasingly easy and foolproof; and the battle to maintain a horror of abortion as part of our morality has already been definitively lost.
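The birth-rate projection in the preceding paragraph rests on similar arithmetic. A small sketch, using only the figures given in the text (3.2 million births, 80 births per 1,000 women); the count of women aged eighteen to thirty-eight is the derived quantity, not a number the essay supplies:

```python
# How many women aged 18-38 does a rate of 80 births per 1,000 women
# imply, if total births are to stay at the 1974 level of 3.2 million?
births_per_year = 3_200_000
births_per_thousand_women = 80

implied_women = births_per_year / births_per_thousand_women * 1_000
print(f"Implied women aged 18-38: {implied_women / 1e6:.0f} million")
```

The projection thus assumes on the order of 40 million women in the child-bearing ages of the 1980's, which is why the required rate falls so far below that of their parents' generation.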
We are now at midpoint in an age that for demographic reasons alone will be unique in American history. Although it is fashionable to be gloomy about the country's future, there will be many reasons to be glad when this present becomes the past.
¹ So were sociologists. David Riesman had rested the argument of The Lonely Crowd on an end to the era of population increase; republishing a substantially edited version in paperback in 1953, he noted that “the birth rate has shown an uncertain tendency to rise again, which most demographers think is temporary.”