It was tax relief, not Keynesianism, that propelled American prosperity after World War II.
If you want to understand how the United States became the most prosperous society in the world, start with the ballpoint pen. László Bíró, a Hungarian who had fled the Nazis and gone to Argentina, invented the first working ballpoint pen. When the owner of Goldblatt’s department store in Chicago showed the contraption to a passing salesman named Milton Reynolds, Reynolds decided he could do it better.
The year was 1944. American factories were producing a warplane every five minutes, 150 tons of steel every minute, eight aircraft carriers a month. Milton Reynolds knew nothing about this kind of heavy industrial production. Born Milton Reinsberg in Minnesota (he changed his last name because he feared Midwesterners wouldn’t buy from a Jew), the 52-year-old had spent the war years selling customer signage to department stores, including a “talking sign” of his own invention that sat on store counters.
Bíró had licensed his design to two American pen makers, Eversharp and Eberhard Faber. The pen had proved popular with British Royal Air Force crews because, unlike a traditional fountain pen, it wouldn’t burst at high altitudes. But Goldblatt’s owner was wondering whether it would sell in the United States. Reynolds wasn’t sure. The Bíró pen depended on a complicated rolling-ball mechanism to spread its ink. Someone might be able to make a simpler and cheaper pen, Reynolds decided, by letting gravity feed the ink to the point.
Milton Reynolds had never made a pen in his life. Still, he rented an indoor tennis court that had fallen into disrepair and opened Reynolds Pen Company with two employees. They made the pen’s roller ball from surplus Air Force bombsights and the barrels from surplus aluminum.
On October 29, 1945, just 10 weeks after VJ Day, Reynolds not only beat Eversharp and Eberhard Faber to market, but had 10,000 pens ready for sale in the front window of Gimbel’s in New York City. More than 5,000 shoppers stormed the store. Fifty cops had to be called in to control the crowds. At $12.50, the Reynolds pen cost almost as much as an overnight stay at the Waldorf Astoria. But New Yorkers in the aftermath of war were looking for the perfect present to give returning GIs—and thanks to their wartime jobs, they all had money to spend. Before the day was out, Milton Reynolds had sold every pen.
The story of the ballpoint pen showcases what turned the United States into the world’s most affluent society after 1945: entrepreneurship, innovation, cheap and readily available raw materials, a built-in bias toward competitive production, and a voracious consumer demand—made more voracious by the deprivation of the Depression years and restrictions of World War II.
Periods of economic growth had been known in the past—the 1920s—and they would be known again in the future, as in the Reagan years. But never had there been anything quite like what happened in the 15 years after 1945. It was nothing less than a material transformation of American life that gathered momentum in the coming decades.
In 1945, less than half of American households had a telephone. Making a long-distance call was a cumbersome business, requiring the help of an operator. By 1960, it was rare to find a household that didn’t have a phone. And with the introduction of direct-dial long distance in 1965, the number of such calls rose from 3 million in 1940 to 26 million in 1970.
On the eve of World War II, an electric washing machine was a rare luxury. In 1960, almost 75 percent of households had one. Twenty years later, 70 percent also had an electric dryer.
In 1940, more than a fifth of Americans lived on farms; less than a third of those farms had electricity. Most of those farms had no indoor running water; only a tenth had flush toilets. Barely half of all American households had a refrigerator, and 58 percent had no central heating. The typical workweek was 50 hours, with more than half the workforce earning a living through physically demanding labor such as farming, factory work, construction, and mining. With average life expectancy pegged at 63 years, it was common for a person to work until he lost his health.
In just 20 years the existence of an American home without electricity or indoor plumbing had gone from being a fact of life to a national scandal demanding federal action. National income rose from $78 billion in 1940 to $409 billion in 1960, and then quintupled to $2.3 trillion in 1980. Life expectancy for American males rose to almost 70 by the time of John F. Kennedy’s inauguration. A comfortable retirement with a private pension virtually became a human right; by 1980, 70 percent of all workers had one.
Even more remarkably, despite a steady expansion of government programs such as Social Security, of Pentagon spending, and of projects such as the Interstate Highway system, the national debt between 1946 and 1960 rose by only 6 percent, even as GDP grew by 237 percent. Indeed, achieving an average annual GDP growth rate of 4.75 percent while running a budget surplus in 7 of those 14 years must have seemed a miracle to those who remembered the Great Depression. Today, in the shadow of the slowest economic recovery in modern history—with growth running at 1.5 percent—it looks like one, too. Understanding what triggered the Great American Transformation, and what didn’t, might be a valuable guide to the future for the current resident of the White House and those who follow.
Certain long-standing clichés surround the story of America’s “economic takeoff” in the two decades before 1960. One centers on the role of World War II and the federal government’s massive spending to win (some $284 billion, if we include the Lend-Lease program that took place before the U.S. entered the fray). The fact that GDP almost doubled in those same years, with unemployment shrinking to little more than 1 percent, has led many historians and economists, including nearly all Keynesians, to conclude that the two phenomena must be connected. Here is a passage from a typical college textbook:
World War Two ended the depression in the United States. Before the war was over, net farm income almost doubled, and corporate profits after taxes climbed 70 percent. From a total of more than 8 million unemployed in 1940, the curve dropped below a million in 1944….There had been no comparable economic boom in American history.
This is wrong.
World War II did not end the Great Depression; in crucial respects, it may have prolonged it.
Even the war’s amazing rise in industrial production—26 percent in just five years—was only half of what the 1920s had achieved without the same massive government spending. Rather, what the war-production effort did was restore American industrial production to something approaching normal levels and raise savings rates that would fuel an economic recovery afterward.
Even the amazing numbers usually given to illustrate the “wartime boom” are softer than they appear at first glance. For example, a drastic drop in unemployment isn’t difficult to engineer when almost 12 million members of the eligible workforce are conscripted into the armed forces. Likewise, to quote the economist Robert J. Barro, “The data show that output expanded during World War II by less than the increase in military purchases.” Real non-government GNP growth, which was moving ahead in 1940, actually slowed down in 1942 and then slowed still further in 1943.
In fact, far from having a multiplier effect as Keynesians might suggest, spending on the war may have interrupted an economic recovery already under way. In 1940–41, just before the bombing of Pearl Harbor and when government spending was still at relatively low levels, GNP jumped from $90.5 billion in 1939 to $124.5 billion. Then, with mobilization, private consumption and investment headed south while government deficit spending headed sharply north, rising from $6 billion in 1940 to $89 billion in 1944.
And while millions of Americans found work in wartime factories and shipyards, and farm and non-farm incomes steadily rose (for industrial workers, by an average of 70 percent), it was impossible for them to buy new durable goods, from cars to refrigerators, and they found many basic consumer goods, such as coffee, meat, sugar, gasoline, shoes, and newsprint, strictly rationed. In addition, nearly everyone was paying out a growing share of their income in federal taxes to pay for the war—a tax on incomes as low as $645 a year (less than $7,740 in today’s dollars).
Yet despite the wartime restrictions, Americans still ate better, consumed more meat, bought more shoes and clothing, and used more energy than they had before the war. And even though the United States wound up producing the most munitions of any country in World War II, it was also the least mobilized of all the major combatants. At no time did more than 40 percent of its economy switch over to war material production. That allowed 60 percent of American women to stay home and men such as Milton Reynolds to make sales calls at department stores as if nothing were happening, and dream of better days.
All the evidence suggests, then, that the war didn’t create a strong U.S. economy. It was the strong economy that made mobilization for war possible without impoverishing the country (which is what happened to Great Britain and the Soviet Union). What World War II actually did was sacrifice real present growth in order to defeat the Axis. Yet it also set the table so that “as the war ended,” writes economist Robert Higgs, “real prosperity returned almost overnight.” It was the crucial period of 1945 to 1947 that really marked the start of economic takeoff.
And the key reason was tax cuts.
That brings us to our second myth: that the immediate postwar period saw a second economic surge driven by consumer demand, as Americans flush with victory rushed out to spend their wartime savings on cars, houses, washing machines, radios, televisions, and Milton Reynolds’s ballpoint pens. In this version of the boom, aggregate consumer demand replaced government demand for bombers, submarines, and artillery shells. “During the war, people had accumulated large stores of financial assets,” a prominent economic history textbook published in 1990 put it. “Once the war was over, these savings were released and created a surge in demand.” In short, the Keynesian formula was proved right again.
Is that true? No. It is true that savings rates rose to record heights during the war. After all, there was almost nothing to buy, even as incomes were soaring. But long ago Milton Friedman and Anna Schwartz noted that the postwar period saw no drop in total savings. People’s liquid assets actually continued to grow after the war, from a record $151 billion at the close of 1945 to $168.5 billion by the start of 1948. In other words, few if any consumers really believed good days were here to stay. Most believed what leading economists believed, including many Keynesians: that the war would be followed by a prolonged period of empty factories, unemployment, and renewed depression. People tucked away their savings and battened down the hatches—everyone, that is, except American business.
What really triggered the first takeoff burst wasn’t the unwinding of personal savings but business savings, in the form of a sharp and steady rise in private capital investment. And that was helped by the Big Tax Cut of 1945.
As the war came close to its end, the political consensus among everyone except leftist Democrats was that taxes were too high. Income-tax rates had soared to the point that revenues in 1944 were almost quadruple those of 1940. In addition, a punitive “excess profits” tax had been imposed on businesses on top of the usual corporate rates.
Even before the bomb was dropped on Hiroshima, Congress passed the Tax Adjustment Act, speeding refunds for businesses that were winding down their government contracts. A few months later, Congress passed the Revenue Act of 1945, which lowered tax liabilities for 1946 by roughly 13 percent of total federal revenues—one of the most massive tax cuts in American history. At the same time, Congress repealed the corporate excess-profits tax and cut the top marginal income-tax rate from 94 percent to 86.45 percent and the lowest rate from 23 percent to 19 percent.
This was a clear-eyed exercise in what would later come to be known as supply-side economics. Senator Walter George of Georgia, chair of the Senate Finance Committee, predicted the tax cut would “so stimulate the expansion of business as to bring in a greater total revenue.” He was right. Revenues soared even as government expenditure continued to fall, and America’s postwar boom was on track.
The Big Tax Cut was also helped by the repeal of wartime wage and price controls—something liberal Democrats and President Harry Truman fiercely resisted. Then, in the 1946 midterm elections, Republicans campaigned on a further tax cut of 20 percent, to be matched by a 50 percent reduction in federal spending. Americans rewarded the GOP with control of both houses of Congress that fall, which led to the phasing out of the last remaining wartime rationing regulations and the passage of a new round of tax cuts in 1947. Democrats won back the House in 1948, and Harry Truman won reelection. But it was too late to halt the tide of prosperity that the Big Tax Cut had let loose.
Business investment, which had gone flat during the war, jumped from $10.6 billion in 1945 to $40.6 billion in 1948, a nearly fourfold increase, as plants expanded and retooled for the production of civilian goods. Even as the overall personal-savings rate fell from its wartime highs, the private-investment rate soared from 5 percent to almost 18 percent, with the biggest leap coming in 1946—a leap that would not be reflected in GNP numbers until two years later. Corporate profits soared by nearly 20 percent, as unions scrambled to keep up by pushing for higher wages and more benefits—demands a new, flush corporate America was inclined to meet.
Meanwhile, business savings almost doubled in the same period, from $15.1 billion to $28 billion, providing a sure way to finance expansion and new hiring. For the first time since 1929, companies began to turn heavily to the capital and bond markets to raise funds. Stock prices surged, and by 1947 shares had appreciated 92 percent.
At the same time, the ingenious productivity that American corporations had shown in making planes, ships, and tanks—steadily lowering their cost year after year—translated into the same efficiency in making consumer goods, from radios and TVs to cars and suburban homes. The growth extended to raw materials, from oil and rubber to aluminum and foodstuffs, as the end of wartime rationing and price controls boosted production and lowered costs. The cost of feeding Americans, for example, fell to 10 percent of GDP from 25 percent during the war.
In 1947, the GDP numbers began to reflect the economic explosion under way. The gross domestic product of the United States stood at $231 billion—roughly what it had been in 1945. In 1948, it rose to $258 billion, paused there for 1949, and then bloomed from $285 billion in 1950 to $398 billion in 1955. In the decade and a half after 1945, GNP grew at an average annual rate of 4.75 percent. Unemployment, which had stood at 3.9 percent in 1948, then dropped to 3 percent in 1952 before settling in at 4.1 percent in 1956.
As the tide surged, other boats rose, as well. Median family income in the United States grew by almost 38 percent. And alongside the big corporations that emerged from the war stronger than ever, such as General Motors and U.S. Steel and Westinghouse—or Lockheed and Boeing, which would become pillars of the Cold War military-industrial complex—there were hard-charging entrepreneurs such as Milton Reynolds. His Reynolds Pen Company grew to 400 employees as competition drove the price of a pen from $12.50 to less than 50 cents.
Another was Ray Kroc, a traveling Dixie Cup salesman who in 1955 bought a hamburger stand belonging to two brothers named McDonald. Then there was William Levitt, who transferred the skills he learned in making defense-worker housing in Virginia and Hawaii to suburban homes around the country; and William Boyle, a manager at the Franklin National Bank on Long Island, who in 1951 came up with the idea of the credit card.
Before the 1950s were out, the entrepreneurial wave would reach northern California, with what would come to be known as Silicon Valley.
Here we come to the final myth about the great period of postwar prosperity: that it was fueled by a close cooperation between Big Business and Big Government, aided by deficit spending that “fine-tuned” the economy’s natural upswings and downswings. According to Robert Samuelson’s The Good Life and Its Discontents: “The wartime boom…created an economic and political model that seemed to work….Government and business could collaborate, as they had during the war, to engineer peace and progress.” They did so with gusto in the Eisenhower years. The federal budget did soar: The annual outlay in 1959 was more than double what it had been in 1950. Much of it was military spending for the Cold War, which led many to assume such action was not only good for the free world but good for business, as Boeing and Lockheed and Grumman became pillars of the economy.
Indeed, according to conventional wisdom, so successful was the Eisenhower-era formula of combining government with business investment, from the military-industrial complex to the Interstate Highway system and the race to the moon, that some economists wondered if the business cycle hadn’t finally been abolished—just as they had in the 1920s. America’s leading Keynesian, Paul Samuelson, would write in 1964, “Our mixed economy, wars aside, has a great future before it.”
The truth was very different. In fact, business’s “cooperation” with the federal government in the 1950s—some of it, such as arming for the Cold War, urgently necessary—would have the same effect on the economy as driving a sports car with the parking brake on. Indeed, it was a tribute to the underlying strength of the earlier Great Transformation that a series of missteps in the Eisenhower years didn’t grind the process to a halt.
The first cloud on the horizon was the sudden burst in military and government spending, triggered by the Korean War. It was World War II all over again, except that this time the diversion of economic resources to non-economic purposes would last through the 1950s and beyond. In 1950, the coming of war pushed federal outlays up from $38 billion a year (less than two-fifths of what they had been in 1945) to $42 billion, creating a budget deficit for the first time in four years. Deficits would continue to pockmark the Eisenhower years (1952, 1953, 1954, 1955, 1958), with the worst coming in 1959, when the federal government went almost $13 billion in the red. The rise in deficits in the Eisenhower years was matched by a rise in recessions, which hit in 1953–54, 1957–58, and 1960–61.
Fortunately, a still growing economy was able to supply plenty of guns as well as butter. But the steady rise of other kinds of domestic spending at the same time, including Social Security, signaled what was coming. President Eisenhower’s military-industrial complex, which he worried would become the principal driver of runaway federal spending, would soon be dwarfed by domestic entitlements.
The standard view is that one of the offshoots of the military-industrial complex of the 1950s was a range of technologies, from transistors and jet propulsion to satellites and computers, that spurred innovation in the private sector. Perhaps. But the fact remains that during the 1950s, when America’s electronics firms were firmly focused on supporting the defense sector, the pace of American technological advance lagged behind Germany’s and, above all, Japan’s, both of whose electronics industries came roaring back from the war’s aftermath in the same period. The launching of Sputnik in 1957 revealed that something was very wrong with America’s supposed technological edge, despite the arms and space race. Overall, as George Gilder has written, “The 1950s were a disastrous period for American technology and economic advance.”
Far from “fine-tuning” the economy, rising federal spending sucked away resources that might have gone into extending and deepening economic growth. Such growth might have helped groups such as African Americans and non-union workers, who continued to lag behind their white and unionized counterparts. Instead, taxes rose—the top marginal rate climbed back above 90 percent—to try to constrain the spreading rash of deficit spending.
Even so, it was impossible to take away what had been achieved in reviving American productivity and transforming material life. “In 1946, we did not have a car, a television set, or a refrigerator,” reported the son of one Johnstown, Pennsylvania, steelworker. “In 1952, we had all those things.” What would sustain this standard of living was not the big corporate firms, the Fords and General Motors and U.S. Steels and AT&Ts. Increasingly hampered by rising costs, internal micromanagement, and government regulation, these firms could see the end of their heyday approaching by 1960.
It was instead the undercapitalized start-ups, especially in areas such as consumer electronics where government intervention was almost nonexistent and the rewards for rapid innovation were large, that would point the way to the next stage in the Great American Transformation. It was in 1955, the same year Ray Kroc bought McDonald’s, that physicist William Shockley left Bell Labs to set up his own firm in a shed in Palo Alto, California, to make his first semiconductors. In 1958, engineer Jack Kilby turned up for work at a Dallas-based company called Texas Instruments to develop what would become the integrated circuit.
A new consumer revolution was coming: the computer revolution. It would come under the radar, overshadowed by world and macroeconomic events, just as the Great American Transformation was ending. The most prosperous society on earth was about to extend its powers in ways no one could ever have foreseen.
How did the Great American Transformation take place? Not by government intervention or investment or fine-tuning—or through other modalities Keynesian-minded economic advisers have tried to reproduce, most recently in the Obama $800 billion stimulus.
It was, instead, a threefold combination of renewing economic productivity, enacting tax cuts, and encouraging business savings that could be capitalized as investment—together with the bias toward entrepreneurship and innovation that has always characterized the American economy. It needed its Milton Reynoldses and Ray Krocs, as well as its GMs and U.S. Steels, and in the end the forces that fueled the former have proved more durable than the ones that undergirded the latter.
“You didn’t build that,” President Obama notoriously said in July to America’s entrepreneurs. No, actually, these men did. And men and women like them can do it again if Washington doesn’t get in the way of the next Great Transformation.