His wit all see-saw, between that and this,
Now high, now low, now master up, now miss.
And he himself one vile antithesis.
“Epistle to Dr. Arbuthnot”
There are certain figures in the arts who, although minor in accomplishment and equivocal in their aesthetic influence, are so completely representative of the spirit of their age that they come to occupy a historical position far greater than the intrinsic merits of their work could ever justify. Their function in cultural life is not so much to create as to incite and impersonate.
Such figures tend therefore to commence their careers as epigones of talents more radical than their own. Often they are drawn to extreme positions that promise deliverance from the conventionality of their origins, but they are not equipped by sensibility or conviction to distinguish between the appearance and the reality of the idols and ideals they choose to emulate. When they feel called upon at opportune moments to attempt originality or innovation—as they inevitably do—they lapse into parody and pastiche, sometimes deliberately, sometimes not.
The métier of such figures is not, in any case, either ideas or artistic creation. It is publicity, showmanship, and the exercise of power. Once they have captured the limelight, they become addicted to it, for the withdrawal of attention is tantamount to oblivion. Their careers thus become a succession of feints and assaults designed to command the public stage and add luster to their visibility—even at the price of obloquy and scandal. In our time this usually means pleasing important people while pretending to offend established taste.
The American architect Philip Johnson is a particularly vexing example of this type. For more than 60 years—he was born in 1906—he has been a formidable presence on the American cultural scene. No other figure of his time—no other architect, certainly—has so successfully combined so many roles in the arts and their institutions. In addition to his work as an architect and architectural publicist, Johnson has been an eminent museum curator (at the Museum of Modern Art), an influential university professor (at Yale), a leading collector and promoter of contemporary art, and a tastemaker and reputation-broker on a huge scale. He has also been an immensely successful businessman, and equally successful, too, as an adviser and mentor to people who are even richer and more powerful than he.
Yet Johnson’s career, though undeniably brilliant, encompasses no single accomplishment that can be cited as a quintessential example of his ideas or his vision. In lieu of a personal aesthetic or a recognizable style or a definitive philosophy—or even, alas, a coherent development—what characterizes his work is a series of brilliantly performed charades in which other people’s ideas, other people’s tastes, and other people’s styles have been appropriated, exploited, deconstructed, and repackaged to advance the prosperity of his own reputation and influence.
One consequence of that influence and the enormous power of patronage and preferment that it commands has been a notable reluctance among architectural critics and others to undertake the kind of frank and comprehensive assessment that is normally devoted to the work of famous figures on the cultural scene. Compared, for example, with the attacks that have been mounted in recent years on the accomplishments of Alfred H. Barr, Jr., the founding director of the Museum of Modern Art who launched Johnson’s career as an architectural expert and art collector, or on the work of Mies van der Rohe, whose ideas exerted the single greatest influence on Johnson’s early architectural thought, Johnson himself has remained remarkably exempt from searching criticism. It is indeed a measure of Johnson’s power that despite the many really bad buildings he has created over the last 30 years or more, and the many ideological hostages to fortune he has given his enemies over the course of his career, he remains in nearly total control of his own reputation. Although he is widely recognized as the bad boy of contemporary architecture—he once proudly proclaimed himself to be an architectural “whore”—an aura of artistic distinction and superior taste has nonetheless insulated Johnson from serious attack.
How is one to explain this odd situation? It is certainly not a question of his being a beloved figure; he is not. No doubt, fear of reprisal accounts for a good deal of the reluctance to challenge the basis of Johnson’s artistic celebrity, but there are other reasons, too. One of them is that he has played an important role in launching the careers of many of those who would now be in a position to write the requisite critiques of his work, and who may well be hesitant to seem lacking in gratitude. Another is that Johnson has proved to be very adept at anticipating such critiques in his own frequent utterances about his work, making effective use of a combination of campy self-mockery and swaggering self-aggrandizement as a means of trivializing the case against him before others have had the opportunity to state it more cogently.
But a more fundamental reason than any of these has been the sweeping destruction of architectural standards which Johnson himself—both as an architect and as an architectural thinker—has made fashionable both within the profession and in the public perception of current architectural practice. This destruction of standards has advanced for a couple of decades now under the rubric of “postmodernism,” which is less a movement than a collective agreement to reduce architecture to the level of kitsch and elevate kitsch to the status of art.
Johnson did not himself create the postmodernist phenomenon—creation, even of this sort, has never been his forte—but he was quick to identify himself with its fortunes and soon became its most prestigious representative. What made the move especially piquant for a sensibility as perverse as Johnson’s was the contradiction it embraced. For in the first phase—the Miesian phase—of his architectural career, Johnson was an ardent champion of the very modernism which the postmodernists were busily maligning. But it is entirely characteristic of Johnson that the second half of his career should have been devoted to destroying the standards that served as a touchstone of quality in the first. Infidelity has long been a distinguishing feature of Johnson’s character, and in this respect the promiscuous shifts of attachment and betrayal in his artistic tastes are an accurate reflection of the way he has lived his life in both the private and public realms.
Owing to the checkered character of that life, it was not to be expected that Johnson would ever authorize a frank, full-scale biography. Indeed, the original agreement that obtained between him and Franz Schulze, the author of Philip Johnson: Life and Work,1 stipulated that the book would not be published in its subject’s lifetime. It was on this assumption that Johnson enthusiastically cooperated in the preparation of the book and encouraged intimate friends and professional associates to do likewise. Yet as the book approached completion and Johnson, despite the frailties of age, continued to command the limelight, he changed his mind about delaying publication. His appetite for publicity undiminished, Johnson gave Schulze the go-ahead, thereby assuring himself of what was likely to be his last sensational fling in the public eye.
Always a shrewd connoisseur of the Zeitgeist, Johnson understood very well that in the debased cultural climate of the 1990’s, even the most scandalous revelations in Schulze’s biography would not only have lost their power to cause him public embarrassment but would, on the contrary, contribute to the glamor of an invincibly chic reputation. And in this hard-headed assessment Johnson has been proved largely correct. For what emerges from Schulze’s biography is, among much else, a portrait of the architect as an immoralist—precisely the kind of immoralist in whose mind a deep-rooted aestheticism combines with a remorseless nihilism to render everything but personal gratification and public approbation utterly meaningless. And that, rather than the admired or despised architect, is the figure who is now being celebrated as he approaches his ninetieth birthday.
Philip Johnson was born to wealth, privilege, and propriety in Cleveland, Ohio, the son of a successful, Harvard-educated lawyer and a reclusive, Wellesley-educated mother whose primary interests, according to Schulze, were “good manners and lofty ideas appropriate to her concept of [the Johnson family’s] station and mission in life”—“a mother of majesty rather than intimacy.” After the death of his older brother when Philip was two years old, he was subjected to what Schulze describes as “an uncommonly protected upbringing.” Neither of his parents was young. When they married in 1901, his father was thirty-nine and already twice widowed; his mother, then thirty-two, had been, in keeping with the conventions of the day, well on her way to spinsterhood. Philip and his sister Theodate, the only member of the family to whom he was ever close, later believed their mother had had a lesbian attachment before her marriage—but this was pure speculation and may have been, on Philip’s part at least, a self-justifying belief.
Homer Johnson was remarkably generous to all of his children, and in the case of his single surviving son he lavished so much wealth upon him that by the time Philip graduated from Harvard he was richer than his father. Still, the father seems never to have known what to make of his brilliant, troubled son, and was fairly obtuse about what he could not avoid knowing. When, for example, he discovered that Philip, while at Harvard, had consulted a Boston neurologist about his homosexuality, he simply declared that “Boys don’t fall in love with boys” and advised his son to forget about it. Many years after Homer Johnson died, in his ninety-eighth year, the son’s judgment on him was characteristically cold and dismissive: “He wasn’t any use in the world.”
His mother, however, was more interested in things of the mind, and although no more adept than her husband at understanding their son, she exerted a far greater influence on his development. “Since she understood more of the mind than of the heart,” Schulze writes, “she looked upon Philip’s special place in the family as an excuse, indeed an inspiration, to design his intellect first and worry over his psyche later, if at all.” It was a recipe for disaster, and the disasters promptly manifested themselves in the form of violent tantrums, abysmal loneliness, and mental breakdown.
Yet however much he may have disliked his mother, she remained Johnson’s principal intellectual confidante until well after his student years. “Their mutuality endured,” Schulze writes, “and she became his favorite correspondent throughout his school years away from home and his early professional life.” At Harvard one of his major concentrations was in Greek, which his mother had taught him as a boy, and her notion of the Johnson family’s “station” in life, which placed them well beyond the mundane standards of ordinary folk, was one that the adult Johnson adopted as a personal credo.
This sense of a special station was greatly abetted by Johnson’s enthusiastic discovery of Nietzsche while he was studying philosophy—his other academic interest—at Harvard. According to Schulze, it was Nietzsche who turned Johnson in the direction of art and politics, and whose ideas can be seen in retrospect to have shaped the immoralist impulses that governed the rest of his life and work.
During most of his undergraduate career at Harvard (protracted to seven years because of episodes of mental breakdown), Johnson seems not to have been much interested in either art or architecture, and indeed he had no very specific notion of what he wanted to do. Except for study at Harvard and travel abroad, his life remained closely tethered to that of his family. (On one of his trips he took to wearing Arab dress and enjoyed what Schulze describes as “his first full-fledged, ‘consummated’ sexual experience” with a guard in the Cairo Museum.) The fortune that Johnson’s father had already settled upon him did not make the choice of a vocation a pressing one, and his father’s predictable suggestion that he study law was easily rejected. So was the offer to teach Greek at Oberlin, where his father was a trustee.
What chanced to give Johnson his first sense of a commanding purpose to which he could harness his considerable intellectual energies was a meeting in 1929 with the twenty-seven-year-old Alfred Barr, who later that year would be named director of the Museum of Modern Art in New York. Barr had given a series of five widely noticed lectures at Wellesley on the entire history of modernism, from Post-Impressionism to the Bauhaus and Russian Constructivism, and he was on the lookout for recruits to the cause. Johnson had only lately acquired a curiosity about modernist buildings in Europe, but at their first encounter at Wellesley, where Johnson’s sister was a student, Barr appears to have converted him on the spot. Schulze helpfully illuminates the psychological process at work. Johnson, he writes,
had a habit of seeking out authority figures . . . who, as if reenacting [his mother’s] role in his younger life, could take command of his emotional loyalties at the same time they nurtured his intellectual ambitions—could dominate him, that is, as they aggrandized him.
As discussions for the new museum in New York were already in progress, Barr apparently hinted that there might be a place for Johnson on its staff, and gave his eager recruit detailed instructions on what he should see on the European trip he was planning for the summer of 1929. Thus began Johnson’s initiation into the modernist movement that was to dominate the whole first phase of his professional career.
While Johnson was abroad that summer, the announcement of Barr’s appointment was duly made. Johnson could not immediately join the museum staff, however, because he had not yet completed the course requirements for his Harvard degree. This he did in the spring of 1930, meanwhile commuting to New York where he quickly became a member of Barr’s inner circle. By that time he had also teamed up with the architectural historian Henry-Russell Hitchcock, Jr., and together they planned a book that in 1932 was published under the title, The International Style: Architecture Since 1922. This, as Schulze correctly observes, “went on to exert a tremendous impact on architecture worldwide.”
Johnson was distinctly the junior partner in this collaboration, but it nonetheless established him as an authority on a subject that, three years earlier, he had barely been acquainted with. Even more important to his new career was the show, “Modern Architecture: International Exhibition,” which he organized at the Museum of Modern Art in 1932. This was a major event in the campaign to establish modernist architecture on the American scene, and it launched Johnson as a tastemaker in a field which was still highly controversial and in which he was himself still something of an amateur, neither a trained scholar nor a professional practitioner. He proved, however, to be a great showman, and this was an immense advantage in the new museology which Barr introduced to the public with the Museum of Modern Art—a museology in which installation, presentation, and propaganda were to be as important in shaping public response as the objects on display.
Johnson was clearly one of the rising stars in the constellation of talents that Barr was counting on to run the museum in the first decade of its activities. In Barr and his wife Marga, moreover, Johnson found a friendship unlike any he had ever known. They were more than mentors to him; they were “family”—the kind of cosmopolitan, uncensorious family Johnson preferred to his own. In Marga Barr, especially, he found a worldly confidante with whom he could frankly discuss anything, including the vagaries of his now very active sex life. Although more or less contemporaries—Barr was four years older than Johnson—Johnson became, in effect, the Barrs’ adopted “son.” They traveled together, they rented Manhattan apartments in the same building, and they were closely bound to the same mission—the museum and the modernism it espoused. Or so it seemed, anyway, to the Barrs and their circle.
Yet the museum, as it turned out, did not really satisfy Johnson’s inchoate and unappeased Nietzschean ambitions for himself and the world. To fulfill those ambitions, art was deemed to be insufficient. Only politics would do—and politics, as it happened, of a particularly loathsome character.
The turning point came in 1933 with Adolf Hitler’s assumption of power in Germany. Johnson adored Germany. He was fluent in the German language, and well-versed in the avant-garde cultural life that marked the last years of the Weimar Republic. He had taken full advantage, too, of the sexual opportunities it offered to a man of his tastes. That Hitler was the sworn enemy of everything Johnson seemed most to admire in Weimar Germany hardly seemed to matter. There is no other way to put it: Johnson fell in love with the Nazi regime.
The Barrs, who were in Germany when Hitler came to power, were horrified. When they met with Johnson in Europe shortly thereafter, their disagreement was total and intense.
Alfred deplored the takeover [Schulze writes]; Philip was exhilarated by it. Alfred foresaw a brutal repression of freedoms in all walks of German life, leading to an atrophy of national culture as a whole. Philip, remembering the Potsdam rally at which he found himself transfixed by the Nazi spectacle and transported by the charisma of Hitler, saw a “nationale Erhebung” (national resurgence), an amazing restoration of confidence among the German people, who only shortly before had seemed defeated by the Depression.
For Johnson, as Schulze shrewdly observes, “Hitler and the Nazis had color. . . . There was dash to these Nazis: the way they dressed and sang and marched and fought; their impudence, their bravado, the sexuality Philip could not help but project on them.” And the Nazis also met the requisite Nietzschean standard. “He concentrated his attention on the Nietzschean text in Der Wille zur Macht and its thesis that ‘the will to power’ constitutes man’s fundamental motivating force”—a doctrine, as Schulze also points out, that “must have appealed to the elitist view in which Philip had been nurtured since birth.”
In the face of these attractions, which were as much aesthetic and erotic as they were political, the arts were no longer a top priority for Johnson. “If in the arts [Germany] sets the clock back now, it will run all the faster in the future,” he wrote at the time. Even the Bauhaus, which Johnson had acclaimed four years earlier as a new architectural ideal, he now condemned for bearing “irretrievably the stamp of Communism and Marxism.” As for Hitler’s racial doctrines, Johnson never allowed them to interfere with his sex life. In that respect, at least, he remained a follower of the Weimar ethos rather than the Nazi. Thus, writes Schulze, “he took his first serious lover [in New York] in 1934”—at the very moment when he was committing himself to the fascist cause, to which racial theories were central. “Jimmy Daniels was a black cafe singer whom Philip later called the first Mrs. Johnson.”
Still, as a consequence of Johnson’s political conversion, when he returned to New York from Germany he resigned his post at the Museum of Modern Art to devote himself to politics. Then he did something really bizarre for a white homosexual with a black male lover. With a school friend who was working at the museum, Johnson attempted to join up with the Southern political machine led by the demagogic governor of Louisiana, Huey Long.
It was a move widely reported in the New York press at the time. The headline of a story written by Joseph Alsop in the Herald-Tribune read: “Two Quit Modern Art Museum for Sur-Realist Political Venture.” The New York Times reported of Johnson and his colleague that “Recently they became convinced that, after all, abstract art left some major political and economic problems unsolved. Consequently both have turned in their resignations [from the Museum of Modern Art] and will leave as soon as practicable for Louisiana to study the methods of Huey Long.” Once again Alfred Barr was horrified, tried to change Johnson’s mind, and failed.
Needless to say, Huey Long was not interested in Johnson’s services. But Johnson’s next attempt to attach himself to a homegrown version of a Nazi-type movement met with a more enthusiastic response. He discovered the Reverend Charles E. Coughlin, the Roman Catholic priest whose Sunday afternoon radio broadcasts from the Shrine of the Little Flower in Royal Oak, Michigan, commanded an audience of 30 million listeners. By February 1936, Johnson was in direct contact with Coughlin, and though he found the man himself a “crashing bore,” he nonetheless enlisted in his political cause, which at that moment aimed to challenge President Franklin D. Roosevelt in the 1936 election. Johnson went to work for Coughlin’s weekly paper, Social Justice, and was much involved in the single biggest rally of the priest’s political campaign. Schulze writes:
At Riverview Park [in Chicago], an enormous amusement complex on the city’s North Side, a throng estimated by the Chicago Tribune at between 80,000 and 100,000 heard Coughlin speak from a huge platform that Philip had taken special pains to design. It was modeled after the one he had seen used so effectively at the 1932 Nazi rally: “A special stand, bordering on the moderne,” the Tribune reported, “had been created at one end of the field. It provided a glaring white background 50-feet wide and 20-feet high for the solitary figure of the priest.”
Even though the campaign of Coughlin’s Union party flopped, Johnson labored on, attempting to stimulate some kind of political youth movement along the same lines. That, too, flopped.
It was in the aftermath of the Coughlin debacle that Johnson returned to Germany. The highlight of his 1938 journey was the Nazi Parteitag in Nuremberg, marking the fifth year of Hitler’s ascension to power. Johnson characteristically described the Nuremberg rally as “even more staggering” than Wagner’s Ring. He remained unbothered, moreover, by the fact that Hitler had officially declared war on Entartete Kunst—“degenerate art” of precisely the kind he had so recently championed as a member of Alfred Barr’s inner circle at the Museum of Modern Art. As Schulze observes, “the romance of the thing overpowered him.”
When he returned to New York in the winter of 1938-39, Johnson tried to buy the American Mercury, the monthly magazine founded by H.L. Mencken, complaining that “the Jews” had ruined it, and when that, too, fell through, he traveled once again to Europe—this time venturing as far as Poland. “The Polish tour,” writes Schulze, “only reinforced Philip’s preconceptions of the backwardness among the Poles and the Jews, while reminding him of the superiority of German society and the German military force.” When the war started with Hitler’s invasion of Poland, he therefore headed straight for Berlin in order “to prepare for the most exciting episode of his summer: the German Propaganda Ministry had formally invited him to follow the Wehrmacht to the front.”
He owed this opportunity to Father Coughlin’s Social Justice, for which he wrote five articles in the summer and fall of 1939, condemning England, praising the Germans, and dismissing the United States as “the best misinformed nation in the world.” From Johnson’s FBI file, which the Bureau began assembling during the war, Schulze has retrieved a letter, believed to have been written to a friend in December 1939 when Johnson was back in the U.S. This is the key passage:
I was lucky enough to get to be a correspondent so that I could go to the front when I wanted to and so it was that I came again to the country that we had motored through, the towns north of Warsaw. . . . The German green uniforms made the place look gay and happy. There were not many Jews to be seen. We saw Warsaw burn and Modlin being bombed. It was a stirring spectacle.
This is almost too much even for Schulze’s studied detachment. While he makes a feeble attempt at a psychological analysis of Johnson’s behavior—based on some ill-digested ideas out of William James—he is clearly both appalled and bewildered by his own account of this crucial period in Johnson’s life, which is the most complete we have been given so far.
In 1939, Philip Johnson’s romance with Hitler and the Nazis had begun to attract the interest of a journalist named Dale Kramer—no relation to the present writer—who in the October 1940 issue of Harper’s gave an account of Johnson’s political affiliations in an article on “The American Fascists.” The magazine was on the newsstands in September when, according to Johnson’s FBI file, he had arranged a meeting at the German embassy in Washington. It may have been pure coincidence, but, writes Schulze, “three days after the alleged appointment at the embassy in Washington Philip was back in Harvard, enrolled as a student in the school of architecture.” His affair with Nazism was over.
On his return to Harvard, Johnson, at age thirty-three, was no ordinary student. He was older, richer, and more knowledgeable than other students. Moreover, in the decade since he had received his undergraduate degree, the university—or its Graduate School of Design, anyway—had abandoned its traditional curriculum to embrace the modernism which Johnson had championed in the early 1930’s. The book which Johnson and Henry-Russell Hitchcock had published in 1932, The International Style, was now a required text.
This gave him an immense advantage, to say the least. And so did his wealth, which enabled him to design and build for himself a Miesian house in Cambridge while he was still enrolled as a student. To be sure, Johnson could not entirely escape the consequences of his political past, but in the end his fascist involvements proved to be more of a temporary inconvenience and occasional embarrassment than a permanent bar to advancement. This was all the more remarkable in that the new Harvard design faculty was dominated by three refugees from Nazi Germany—Walter Gropius, the founder of the Bauhaus; his Bauhaus colleague Marcel Breuer; and Martin Wagner, the former director of planning for the city of Berlin. But the bad blood that existed between Gropius and Johnson, for example, had much more to do with the latter’s loyalty to Mies van der Rohe than with the former’s love affair with the Nazis. (Years later, when he was in a position to do so with impunity, Johnson dismissively characterized Gropius as “the Warren G. Harding of architecture.”) Breuer, on the other hand, treated Johnson with cordiality and respect.
Meanwhile, Johnson was making some pro forma attempts at his own political rehabilitation. When he joined a Harvard Defense Group (a civil-defense organization) in 1941, it fell to the young Arthur Schlesinger, Jr., to dismiss him from the organization on political grounds. An attempt to take a wartime job at the Office of Facts and Figures in Washington likewise failed as soon as the agency got a look at his FBI file. Still, Johnson weathered these and other travails—including a brief stint in the U.S. Army, into which he was drafted in 1943—with remarkable ease, given the strong anti-Nazi sentiments at the time.
When he got out of the army in 1945, with the country still waging war against Japan, he promptly set about mapping the course of his new career. Toward this end, he opened his own architectural office in Manhattan (though he was unlicensed to practice in New York State); and, with the help of Alfred Barr, he insinuated himself into his old position in the architectural department of the Museum of Modern Art. The renewed MOMA connection proved indispensable to his new career as a practicing architect, restoring Johnson to a position of authority in the field even before he had built enough to earn it.
Then, as always, Johnson was not a man—or artist—to allow principle to interfere with opportunity. He straightaway designed a prefabricated house for a Ladies’ Home Journal competition, and arranged for the house to be exhibited at the Modern. “Remarkably,” writes Schulze, “it was a deep bow to the functionalism Philip had so long abhorred.” Even his renewed campaign on behalf of Mies concealed a growing private doubt about the value of the Miesian aesthetic—which did not prevent Johnson from mounting an exhibition devoted to Mies at the Modern or from designing for himself the famous Glass House in New Canaan, Connecticut, a house that is itself little more than a pastiche of the Miesian style.
That house promptly proved to be so egregiously uninhabitable even for an aesthete like Johnson that he at once undertook to build an alternative, anti-Miesian retreat on the same property that would afford him the privacy and amenity which the doctrinaire transparency of the Glass House precluded. In this regard, I recall a lecture Johnson delivered at Harvard in the spring of 1951 (when I was a student there). Asked by a member of the audience if the Glass House was not fundamentally incompatible with the needs of family life, Johnson declared with his customary hauteur that the family should be abolished.
For Johnson, the Glass House of 1949 served a purpose far more important than domestic amenity. More than anything else he had done or would ever do, it established Philip Johnson the architect as a reputed master of the same modernist style upon which Philip Johnson the museum curator and author had conferred a renewed legitimacy and glamor. Whatever his doubts about the failings of the Miesian ideal, and despite the inevitable jokes about “Mies van der Johnson,” the Glass House was a brilliant gambit that completely succeeded in its purpose. To this day, it continues to be regarded as a classic of American modernist architecture, despite the fact that architects as eminent and as different as Frank Lloyd Wright and Mies himself dismissed it with contempt.
The payoffs came right on schedule. In 1950, Johnson was commissioned to design the MOMA annex on West 53rd Street—another exercise in Miesian pastiche—and in 1953 the Abby Aldrich Rockefeller Sculpture Garden. (It was the elder Mrs. Rockefeller who, when told something of Johnson’s Nazi connections, reportedly said that “every young man should be allowed to make one large mistake.” But it is doubtful that Mrs. Rockefeller ever knew the full extent of Johnson’s involvement.) The sculpture garden may be the single most beautiful thing Johnson ever designed, but it is an essay in museum installation, not building design, and serves a strictly aesthetic function. In any case, these MOMA commissions set the stage from which Johnson’s subsequent architectural career was launched.
Not everyone could be expected to be as forgiving as Mrs. Rockefeller, however. The experience of World War II and the impact of the Holocaust remained powerful moral and emotional issues, especially in the intellectual, cultural, and business worlds of New York. This obliged Johnson to ponder, as Schulze writes, “the need . . . to be free of public perception of his kind of political taint.” Success, in other words, required some public gesture of atonement if the stain of anti-Semitism were to be effectively neutralized.
When, therefore, Johnson learned that a Jewish congregation in the suburb of Port Chester, New York, was looking for an architect to design a new synagogue, he seized the opportunity with his characteristic bravado. “His proposal to design [the building] without fee,” writes Schulze, “was hard for the potential clients to resist, coming as it did from an architect, and a New Canaan neighbor, whose professional reputation was growing at about the same rate his political past seemed to be receding in most people’s minds.” The building that Johnson designed was an architectural hodgepodge devoid of aesthetic distinction, but for the architect it served its purpose well enough. It was another brilliantly cynical move, and there was no way for his clients to know that the sculpture he selected for the interior—an abstract metal relief by Ibram Lassaw—was virtually identical with the one that already adorned his secluded New Canaan bedroom, the scene of rituals of a very different sort. Whether or not Johnson intended this as a private joke cannot be known; but it is not the kind of aesthetic irony that would have been lost on him.
It is worth reflecting, in this connection, upon the remarkably large number of prominent Jews who, in the course of Johnson’s architectural career, have made major contributions to his reputation and success. Conspicuous on this roster of art patrons, intellectuals, businessmen, museum trustees, critics, and architects are such luminaries as Lincoln Kirstein, Edward M.M. Warburg, Ronald Lauder, Samuel Bronfman, I.S. Brochstein, Rosamond Bernier, Ada Louise Huxtable, Paul Goldberger, Robert A.M. Stern, and Peter Eisenman. To them should also be added Shimon Peres, who in 1960 arranged for Johnson to design the nuclear reactor at Rehovot in Israel, and Meyer Lansky—“one of the panjandrums of organized crime in America,” as Schulze writes—for whom Johnson was planning to design a huge gambling-casino hotel in Havana when Fidel Castro’s revolution altered their respective schedules.
This is by no means an exhaustive list, but it is sufficient to underline the extraordinary success Johnson achieved in his programmatic effort to induce a kind of collective amnesia or suspension of curiosity about his Nazi past, even among people who had every reason to be alert to such matters. It was not until 1988, when Johnson was eighty-two and, as Schulze says, “probably the most famous figure in the American architectural world,” that the critic Michael Sorkin attempted to awaken a new generation with a scathing article in Spy magazine. But the venue of Sorkin’s attack was itself too disreputable for the article to have much of an impact. At the altitude of eminence which Johnson now occupied, with the press in his pocket, the beau monde at his feet, and the profession lavishing hosannas upon him, he was for all practical purposes beyond the reach of criticism or exposure.
What in retrospect is especially remarkable about these adroit historical maneuvers is that the man who effected them remained at best a very mediocre architect and at times a disgracefully bad one. The art museums he designed for Lincoln, Nebraska; Utica, New York; and both Corpus Christi and Fort Worth, Texas, are among the worst such buildings erected in this country in his lifetime—and all the less to be forgiven as coming from a connoisseur canonized by the Museum of Modern Art. None of these buildings is in the same league with a contemporary masterpiece like Louis Kahn’s Kimbell Art Museum in Fort Worth or the building Edward Larrabee Barnes designed for the Walker Art Center in Minneapolis. The New York State Theater, which Johnson designed for New York City Ballet at Lincoln Center, is so undistinguished that many people find it hard to believe it was done by a reputable architect at all; it looks like pure developer’s boilerplate.
These failed buildings, moreover, date from before the time when Johnson developed a conscious policy of socking it to his corporate clients with aggressive displays of imperious bad taste—before, that is, he perpetrated postmodern monstrosities like the AT & T Building and the so-called “Lipstick” tower in New York. For some two decades now, Johnson has been functioning—with what degree of deliberateness, one can only surmise—as the Andy Warhol of American architecture, “quoting” a bit of Gothic here, a bit of Chippendale there, and sporting a display of self-aggrandizing kitsch everywhere, in more or less the same manner in which Warhol famously “quoted” the Campbell Soup label.
Some of this has proved to be too much even for Johnson’s most loyal admirers. The critic Ada Louise Huxtable spoke for many when, in an address to the American Academy of Arts and Sciences in 1980, she condemned buildings like AT & T and the Pittsburgh Plate Glass headquarters in Pittsburgh as “shallow, cerebral design,” “bad pieces of architecture,” and “clever cannibalism.” But the most devastating judgment came in 1976 from an even more surprising quarter, the Museum of Modern Art, which rejected Johnson’s bid to design the most ambitious expansion in its history. Johnson had been, in effect, MOMA’s house architect for a quarter-century, and its principal architectural authority for longer than that; he was now passed over for the commission that would give the museum its architectural identity for many decades to come. (The job went to Cesar Pelli.)
It was tantamount to a divorce, and Johnson suffered what may have been the single biggest disappointment of his career. “Hence his reaction,” Schulze writes, “which was to reverse the intentions he had earlier had for the disposal of his estate and not to leave the museum his collection of painting and sculpture or even his country property in New Canaan.” Still, even this has not quite ended Johnson’s association with the museum, for MOMA is planning to mark his ninetieth birthday in 1996 with a publication and exhibition devoted to the paintings, sculptures, and design objects he has already given the museum since its founding.
What audacities Johnson may be preparing for that occasion is anyone’s guess, but it is a mercy that neither Alfred nor Marga Barr will be around to witness them. On the last pages of Philip Johnson: Life and Work, Schulze brings us up to date on his subject’s current views with an account of an address Johnson delivered last year (1994) at the Austrian Museum of Applied Arts in Vienna:
The subject was Stalinist architecture, and Philip used the occasion not only to demonstrate again that his basic view of the world was as consistent as the styles of his architecture were not, but to affirm an opinion he had seldom had the opportunity to express: his admiration for a boulevard in the former East Berlin that had been the object of almost unanimous scorn among Western critics. It was the notorious Stalinallee, later renamed the Karl-Marx-Allee, still later, after the fall of the Berlin Wall, the Frankfurter Allee. Philip had long found a grandeur in it that others had dismissed as totalitarian pomposity. Now he called it “the new Champs-Elysees,” a product of “romantic daring” and “the dream of the East for monumentality.”
Schulze may well be correct in his observation that “At eighty-eight, [Johnson] was where he was always most comfortable, at the head of the parade of contemporary taste.” But given his history, there are other ways of reading his remarks in Vienna. Coming upon them in his biography, I was reminded of a conversation I had with Marga Barr in the last year of her life. I was then working with her on the preparation of a “Chronicle” of Alfred Barr’s career for publication in the New Criterion. (It was published under the title, “Our Campaigns,” in a special issue of the magazine in the summer of 1987.)
On one of the mornings we had set for a meeting in her apartment, the New York Times published Johnson’s proposed designs for the rehabilitation of the Times Square-42nd Street area. I found them even more wretched than some of the awful things he had already built, and I was eager to know what Marga thought of them. In recounting to me the story of Alfred’s career, she had had frequent occasion to speak of Johnson, and she always did so with fond affection—for the record, so to speak. That morning I asked if she had seen the paper, and she rather glumly acknowledged that she had. I then asked what she thought of the kind of buildings Johnson had lately been designing—and hastened to add that she was under no obligation to discuss the subject if she preferred not to.
In responding to difficult questions, Marga had a way of turning away for a few moments while she composed her thoughts and then facing her interlocutor with a very determined look. This is what she did that morning as she said to me: “I feel about Philip today the way I would feel about a beloved son who had gone into a life of crime.”
It was a hard thing for her to say, but Marga Barr was not in the habit of shirking the truth. After reading Franz Schulze’s life of Philip Johnson, I would say that she had gotten the matter exactly right.
1 Knopf, 465 pp., $30.00.
Then again, you know what happens when you assume.
“Here is my prediction,” the New York Times columnist Nicholas Kristof wrote. “The new paramount leader, Xi Jinping, will spearhead a resurgence of economic reform, and probably some political easing as well. Mao’s body will be hauled out of Tiananmen Square on his watch, and Liu Xiaobo, the Nobel Peace Prize–winning writer, will be released from prison.”
True, Kristof conceded, “I may be wrong entirely.” But, he went on, “my hunch on this return to China, my old home, is that change is coming.”
Five years later, the Chinese economy, while large, is saddled with debt. Analysts and government officials are worried about its real-estate bubble. Despite harsh controls, capital continues to flee China. Nor has there been “some political easing.” On the contrary, repression has worsened. The Great Firewall blocks freedom of speech and inquiry, human-rights advocates are jailed, and the provinces resemble surveillance states out of a Philip K. Dick novel. Mao rests comfortably in his mausoleum. Not only did Liu Xiaobo remain a prisoner, he was also denied medical treatment when he contracted cancer, and he died in captivity in 2017.
As for Xi Jinping, he turned out not to be a reformer but a dictator. Steadily, under the guise of anti-corruption campaigns, Xi decimated alternative centers of power within the Communist Party. He built up a cult of personality around “Xi Jinping thought” and his “Chinese dream” of economic, cultural, and military strength. His preeminence was highlighted in October 2017 when the Politburo declined to name his successor. Then, in March of this year, the Chinese abolished the term limits that have guaranteed rotation in office since the death of Mao. Xi reigns supreme.
Bizarrely, this latest development seems to have come as a surprise to the American press. The headline of Emily Rauhala’s Washington Post article read: “China proposes removal of two-term limit, potentially paving way for President Xi Jinping to stay on.” Potentially? Xi’s accession to emperor-like status, wrote Julie Bogen of Vox, “could destabilize decades of progress toward democracy and instead move China even further toward authoritarianism.” Could? Bogen did not specify which “decades of progress toward democracy” she was talking about, but that is probably because, since 1989, there haven’t been any.
Xi’s assumption of dictatorial powers should not have shocked anyone who has paid the slightest bit of attention to recent Chinese history. The Chinese government, until last month a collective dictatorship, has exercised despotic control over its people since the very founding of the state in 1949. And yet the insatiable desire of the media to incorporate news events into a preestablished storyline led reporters to cover the party announcement as a sudden reversal. Why? Because only then would the latest decision of an increasingly embattled and belligerent Chinese leadership fit into the prefabricated narrative that says we are living in an authoritarian moment.
For example, one article in the February 26, 2018, New York Times was headlined, “With Xi’s Power Grab, China Joins New Era of Strongmen.” CNN’s James Griffiths wrote, “While Chinese politics is not remotely democratic in the traditional sense, there are certain checks and balances within the Party system itself, with reformers and conservatives seeing their power and influence waxing and waning over time.” Checks and balances, reformers and conservatives—why, they are just like us, only within the context of a one-party state that ruthlessly brooks no dissent.
Now, we do happen to live in an era when democracy and autocracy are at odds. But China is not joining the “authoritarian trend.” It helped create and promote the trend. Next year, China’s “era of strongmen” will complete its seventh decade. The fundamental nature of the Communist regime in Beijing has not changed during this time.
My suspicion is that journalists were taken aback by Xi’s revelation of his true nature because they, like most Western elites, have bought into the myth of China’s “peaceful rise.” For decades, Americans have been told that China’s economic development and participation in international organizations and markets would lead inevitably to its political liberalization. What James Mann calls “the China fantasy” manifested itself in the leadership of both major political parties and in the pronouncements of the chattering class across the ideological spectrum.
Indeed, not only was the soothing scenario of China as a “responsible stakeholder” on the glide path to democracy widespread, but media figures also admonished Americans for not living up to Chinese standards. “One-party autocracy certainly has its drawbacks,” Tom Friedman conceded in an infamous 2009 column. “But when it is led by a reasonably enlightened group of people, as China is today, it can also have great advantages.” For instance, Friedman went on, “it is not an accident that China is committed to overtaking us in electric cars, solar power, energy efficiency, batteries, nuclear power, and wind power.” The following year, during an episode of Meet the Press, Friedman admitted, “I have fantasized—don’t get me wrong—but what if we could just be China for a day?” Just think of all the electric cars the government could force us to buy.
This attitude toward Chinese Communism as a public-policy exemplar became still more pronounced after Donald Trump was elected president on an “America First” agenda. China’s theft of intellectual property, industrial espionage, harassment and exploitation of Western companies, currency manipulation, mercantilist subsidies and tariffs, chronic pollution, military buildup, and interference in democratic politics and university life did not prevent it from proclaiming itself the defender of globalization and environmentalism.
When Xi visited the Davos World Economic Forum last year, the Economist noted the “fawning reception” that greeted him. The speech he delivered, pledging to uphold the international order that had facilitated his nation’s rise as well as his own, received excellent reviews. On January 15, 2017, Fareed Zakaria said, “In an America-first world, China is filling the vacuum.” A few days later, Charlie Rose told his CBS audience, “It’s almost like China is saying, ‘we are the champions of globalization, not the United States.’” And on January 30, 2017, the New York Times quoted a “Berlin-based private equity fund manager,” who said, “We heard a Chinese president becoming leader of the free world.”
The chorus of praise for China grew louder last spring when Trump announced American withdrawal from an international climate accord. In April 2017, Rick Stengel said on cable television that China is becoming “the global leader on the environment.” On June 8, a CBS reporter said that Xi is “now viewed as the world’s leader on climate change.” On June 19, 2017, on Bloomberg news, Dana Hull said, “China is the leader on climate change, especially when it comes to autos.” Also that month, one NBC anchor asked Senator Mike Lee of Utah, “Are you concerned at all that China may be seen as sort of the global leader when it comes to bringing countries together, more so than the United States?”
Last I checked, Xi Jinping’s China has not excelled at “bringing countries together,” unless—like Australia, Japan, South Korea, and Vietnam—those countries are allying with the United States to balance against China. What instead should concern Senator Lee, and all of us, is an American media filled with people suckered by foreign propaganda that happens to coincide with their political preferences, and who are unable to make elementary distinctions between tyrannical governments and consensual ones.
Marx didn’t supplant old ideas about money and commerce; he intensified them
From the time of antiquity until the Enlightenment, trade and the pursuit of wealth were considered sinful. “In the city that is most finely governed,” Aristotle wrote, “the citizens should not live a vulgar or a merchant’s way of life, for this sort of way of life is ignoble and contrary to virtue.”1 In Plato’s vision of an ideal society (the Republic) the ruling “guardians” would own no property to avoid tearing “the city in pieces by differing about ‘mine’ and ‘not mine.’” He added that “all that relates to retail trade, and merchandise, and the keeping of taverns, is denounced and numbered among dishonourable things.” Only noncitizens would be allowed to indulge in commerce. A citizen who defies the natural order and becomes a merchant should be thrown in jail for “shaming his family.”
At his website humanprogress.org, Marian L. Tupy quotes D.C. Earl of the University of Leeds, who wrote that in Ancient Rome, “all trade was stigmatized as undignified … the word mercator [merchant] appears as almost a term of abuse.” Cicero noted in the first century b.c.e. that retail commerce is sordidus (vile) because merchants “would not make any profit unless they lied constantly.”
Early Christianity expanded this point of view. Jesus himself was clearly hostile to the pursuit of riches. “For where your treasure is,” he proclaimed in his Sermon on the Mount, “there will your heart be also.” And of course he insisted that “it is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God.”
The Catholic Church incorporated this view into its teachings for centuries, holding that economics was zero-sum. “The Fathers of the Church adhered to the classical assumption that since the material wealth of humanity was more or less fixed, the gain of some could only come at a loss to others,” the economic historian Jerry Muller explains in his book The Mind and the Market: Capitalism in Western Thought. As St. Augustine put it, “Si unus non perdit, alter non acquirit”—“If one does not lose, the other does not gain.”
The most evil form of wealth accumulation was the use of money to make money—usury. Lending money at interest was unnatural, in this view, and therefore invidious. “While expertise in exchange is justly blamed since it is not according to nature but involves taking from others,” Aristotle insisted, “usury is most reasonably hated because one’s possessions derive from money itself and not from that for which it was supplied.” In the Christian tradition, the only noble labor was physical labor, and so earning wealth from the manipulation of money was seen as inherently ignoble.
In the somewhat more prosperous and market-driven medieval period, Thomas Aquinas helped make private property and commerce more acceptable, but he did not fundamentally break with the Aristotelian view that trade was suspect and the pursuit of wealth was sinful. The merchant’s life was in conflict with the teachings of Christianity if it led to pride or avarice. “Echoing Aristotle,” Muller writes, “Aquinas reasserted that justice in the distribution of material goods was fulfilled when someone received in proportion to his status, office, and function within the institutions of an existing, structured community. Hence Aquinas decried as covetousness the accumulation of wealth to improve one’s place in the social order.”
In the medieval mind, Jews were seen as a kind of stand-in for mercantile and usurious sinfulness. Living outside the Christian community, but within the borders of Christendom, they were allowed to commit the sin of usury on the grounds that their souls were already forfeit. Pope Nicholas V insisted that it is much better that “this people should perpetrate usury than that Christians should engage in it with one another.”2 The Jews were used as a commercial caste the way the untouchables of India were used as a sanitation caste. As Montesquieu would later observe in the 18th century, “whenever one prohibits a thing that is naturally permitted or necessary, the people who engage in it are regarded as dishonest.” Thus, as Muller has argued, anti-Semitism has its roots in a kind of primitive anti-capitalism.
Early Protestantism did not reject these views. It amplified them.3 Martin Luther despised commerce. “There is on earth no greater enemy of man, after the Devil, than a gripe-money and usurer, for he wants to be God over all men…. Usury is a great, huge monster, like a werewolf …. And since we break on the wheel and behead highwaymen, murderers, and housebreakers, how much more ought we to break on the wheel and kill … hunt down, curse, and behead all usurers!”4
It should therefore come as no surprise that Luther’s views of Jews, the living manifestation of usury in the medieval mind, were just as immoderate. In his 1543 treatise On the Jews and Their Lies, he offers a seven-point plan on how to deal with them:
- “First, to set fire to their synagogues or schools…. This is to be done in honor of our Lord and of Christendom, so that God might see that we are Christians …”
- “Second, I advise that their houses also be razed and destroyed.”
- “Third, I advise that all their prayer books and Talmudic writings, in which such idolatry, lies, cursing, and blasphemy are taught, be taken from them.”
- “Fourth, I advise that their rabbis be forbidden to teach henceforth on pain of loss of life and limb… ”
- “Fifth, I advise that safe-conduct on the highways be abolished completely for the Jews. For they have no business in the countryside … ”
- “Sixth, I advise that usury be prohibited to them, and that all cash and treasure of silver and gold be taken from them … ”
- “Seventh, I recommend putting a flail, an ax, a hoe, a spade, a distaff, or a spindle into the hands of young, strong Jews and Jewesses and letting them earn their bread in the sweat of their brow.… But if we are afraid that they might harm us or our wives, children, servants, cattle, etc., … then let us emulate the common sense of other nations such as France, Spain, Bohemia, etc., … then eject them forever from the country … ”
Luther agitated against the Jews throughout Europe, condemning local officials for insufficient anti-Semitism (a word that did not exist at the time and a sentiment that was not necessarily linked to more modern biological racism). His demonization of the Jews was derived from more than anti-capitalism. But his belief that the Jewish spirit of commerce was corrupting of Christianity was nonetheless central to his indictment. He sermonized again and again that it must be cleansed from Christendom, either through conversion, annihilation, or expulsion.
Three centuries later, Karl Marx would blend these ideas together in a noxious stew.
The idea at the center of virtually all of Marx’s economic writing is the labor theory of value. It holds that all of the value of any product can be determined by the number of hours it took for a laborer or laborers to produce it. From the viewpoint of conventional economics—and elementary logic—this is ludicrous. For example, ingenuity, which may not be time-consuming, is nonetheless a major source of value. Surely it cannot be true that someone who works intelligently, and therefore efficiently, provides less value than someone who works stupidly and slowly. (Marx anticipates some of these kinds of critiques with a lot of verbiage about the costs of training and skills.) But the more relevant point is simply this: The determinant of value in an economic sense is not the labor that went into a product but the price the consumer is willing to pay for it. Whether it took an hour or a week to build a mousetrap, the value of the two products is the same to the consumer if the quality is the same.
Marx had philosophical, metaphysical, and tactical reasons for holding fast to the labor theory of value. It was essential to his argument that capitalism—or what we would now call “commerce” plain and simple—was exploitative by its very nature. In Marx, the term “exploitation” takes a number of forms. It is not merely evocative of child laborers working in horrid conditions; it covers virtually all profits. If all value is captured by labor, any “surplus value” collected by the owners of capital is by definition exploitative. The businessman who risks his own money to build and staff an innovative factory is not adding value; rather, he is subtracting value from the workers. Indeed, the money he used to buy the land and the materials is really just “dead labor.” For Marx, there was an essentially fixed amount of “labor-power” in society, and extracting profit from it was akin to strip-mining a natural resource. Slavery and wage-labor were different forms of the same exploitation because both involved extracting the common resource. In fact, while Marx despised slavery, he thought wage-labor was only a tiny improvement because wage-labor reduced costs for capitalists in that they were not required to feed or clothe wage laborers.
Because Marx preached revolution, we are inclined to consider him a revolutionary. He was not. None of this was a radical step forward in economic or political thinking. It was, rather, a reaffirmation of the disdain of commerce that starts with Plato and Aristotle and found new footing in Christianity. As Jerry Muller (to whom I am obviously very indebted) writes:
To a degree rarely appreciated, [Marx] merely recast the traditional Christian stigmatization of moneymaking into a new vocabulary and reiterated the ancient suspicion against those who used money to make money. In his concept of capitalism as “exploitation” Marx returned to the very old idea that money is fundamentally unproductive, that only those who live by the sweat of their brow truly produce, and that therefore not only interest, but profit itself, is always ill-gotten.
In his book Karl Marx: A Nineteenth-Century Life, Jonathan Sperber suggests that “Marx is more usefully understood as a backward-looking figure, who took the circumstances of the first half of the nineteenth century and projected them into the future, than as a surefooted and foresighted interpreter of historical trends.”5
Marx was a classic bohemian who resented the fact that he spent his whole life living off the generosity of, first, his parents and then his collaborator Friedrich Engels. He loathed the way “the system” required selling out to the demands of the market and a career. The frustrated poet turned to the embryonic language of social science to express his angry barbaric yawp at The Man. “His critique of the stultifying effects of labor in a capitalist society,” Muller writes, “is a direct continuation of the Romantic conception of the self and its place in society.”
In other words, Marx was a romantic, not a scientist. Romanticism emerged as a rebellion against the Enlightenment, taking many forms—from romantic poetry to romantic nationalism. But central to all its forms was the belief that modern, commercial, rational life is inauthentic and alienating, and cuts us off from our true natures.
As Rousseau, widely seen as the first romantic, explained in his Discourse on the Moral Effects of the Arts and Sciences, modernity—specifically the culture of commerce and science—was oppressive. The baubles of the Enlightenment were mere “garlands of flowers” that concealed “the chains which weigh [men] down” and led people to “love their own slavery.”
This is a better context for understanding Marx’s and Engels’s hatred of the division of labor and the division of rights and duties. Their baseline assumption, like Rousseau’s, is that primitive man lived a freer and more authentic life before the rise of private property and capitalism. “Within the tribe there is as yet no difference between rights and duties,” Engels writes in Origins of the Family, Private Property, and the State. “The question whether participation in public affairs, in blood revenge or atonement, is a right or a duty, does not exist for the Indian; it would seem to him just as absurd as the question whether it was a right or a duty to sleep, eat, or hunt. A division of the tribe or of the gens into different classes was equally impossible.”
For Marx, then, the Jew might as well be the real culprit who told Eve to bite the apple. For the triumph of the Jew and the triumph of money led to the alienation of man. And in truth, the term “alienation” is little more than modern-sounding shorthand for exile from Eden. The division of labor encourages individuality, alienates us from the collective, fosters specialization and egoism, and dethrones the sanctity of the tribe. “Money is the jealous god of Israel, in face of which no other god may exist,” Marx writes. “Money degrades all the gods of man—and turns them into commodities. Money is the universal self-established value of all things. It has, therefore, robbed the whole world—both the world of men and nature—of its specific value. Money is the estranged essence of man’s work and man’s existence, and this alien essence dominates him, and he worships it.”
Marx’s muse was not analytical reason, but resentment. That is what fueled his false consciousness. To understand this fully, we should look at how that most ancient and eternal resentment—Jew-hatred—informed his worldview.
The atheist son of a Jewish convert to Lutheranism and the grandson of a rabbi, Karl Marx hated capitalism in no small part because he hated Jews. According to Marx and Engels, Jewish values placed the acquisition of money above everything else. Marx writes in his infamous essay “On the Jewish Question”:
Let us consider the actual, worldly Jew—not the Sabbath Jew … but the everyday Jew.
Let us not look for the secret of the Jew in his religion, but let us look for the secret of his religion in the real Jew.
What is the secular basis of Judaism? Practical need, self-interest. What is the worldly religion of the Jew? Huckstering. What is his worldly God? Money. [Emphasis in original]
The spread of capitalism, therefore, represented a kind of conquest for Jewish values. The Jew—at least the one who set up shop in Marx’s head—makes his money from money. He adds no value. Worse, the Jews considered themselves to be outside the organic social order, Marx complained, but then again that is what capitalism encourages—individual independence from the body politic and the selfish (in Marx’s mind) pursuit of individual success or happiness. For Marx, individualism was a kind of heresy because it meant violating the sacred bond of the community. Private property empowered individuals to live as individuals “without regard to other men,” as Marx put it.
This is the essence of Marx’s view of alienation. Marx believed that people were free, creative beings but were chained to their role as laborers in the industrial machine. The division of labor inherent to capitalist society was alienating and inauthentic, pulling us out of the communitarian natural General Will. The Jew was both an emblem of this alienation and a primary author of it:
The Jew has emancipated himself in a Jewish manner, not only because he has acquired financial power, but also because, through him and also apart from him, money has become a world power and the practical Jewish spirit has become the practical spirit of the Christian nations. The Jews have emancipated themselves insofar as the Christians have become Jews. [Emphasis in original]
He adds, “The god of the Jews has become secularized and has become the god of the world. The bill of exchange is the real god of the Jew. His god is only an illusory bill of exchange.” And he concludes: “In the final analysis, the emancipation of the Jews is the emancipation of mankind from Judaism.” [Emphasis in original]
In The Holy Family, written with Engels, he argues that the most pressing imperative is to transcend “the Jewishness of bourgeois society, the inhumanity of present existence, which finds its highest embodiment in the system of money.” [Emphasis in original]
In his “Theories of Surplus Value,” he praises Luther’s indictment of usury. Luther “has really caught the character of old-fashioned usury, and that of capital as a whole.” Marx and Engels insist that the capitalist ruling classes, whether or not they claim to be Jewish, are nonetheless Jewish in spirit. “In their description of the confrontation of capital and labor, Marx and Engels resurrected the traditional critique of usury,” Muller observes. Or, as Deirdre McCloskey notes, “the history that Marx thought he perceived went with his erroneous logic that capitalism—drawing on an anticommercial theme as old as commerce—just is the same thing as greed.”6 Paul Johnson is pithier: Marx’s “explanation of what was wrong with the world was a combination of student-café anti-Semitism and Rousseau.”7
For Marx, capital and the Jew are different faces of the same monster: “The capitalist knows that all commodities—however shabby they may look or bad they may smell—are in faith and in fact money, internally circumcised Jews, and in addition magical means by which to make more money out of money.”
Marx’s writing, particularly on surplus value, is drenched with references to capital as parasitic and vampiric: “Capital is dead labor which, vampire-like, lives only by sucking living labor, and lives the more, the more labor it sucks. The time during which the worker works is the time during which the capitalist consumes the labor-power he has bought from him.” The constant allusions to the eternal wickedness of the Jew combined with his constant references to blood make it hard to avoid concluding that Marx had simply updated the blood libel and applied it to his own atheistic doctrine. His writing is replete with references to the “bloodsucking” nature of capitalism. He likens both Jews and capitalists (the same thing in his mind) to life-draining exploiters of the proletariat.
Marx writes how the extension of the workday into the night “only slightly quenches the vampire thirst for the living blood of labor,” resulting in the fact that “the vampire will not let go ‘while there remains a single muscle, sinew or drop of blood to be exploited.’” As Mark Neocleous of Brunel University documents in his brilliant essay, “The Political Economy of the Dead: Marx’s Vampires,” the images of blood and bloodsucking capital in Das Kapital are even more prominent motifs: “Capital ‘sucks up the worker’s value-creating power’ and is dripping with blood. Lacemaking institutions exploiting children are described as ‘blood-sucking,’ while U.S. capital is said to be financed by the ‘capitalized blood of children.’ The appropriation of labor is described as the ‘life-blood of capitalism,’ while the state is said to have here and there interposed itself ‘as a barrier to the transformation of children’s blood into capital.’”
Marx’s vision of exploitative, Jewish, bloodsucking capital was an expression of romantic superstition and tribal hatred. Borrowing from the medieval Catholic tradition as well as from Luther himself, not to mention a certain folkloric poetic tradition, Marx invented a modern-sounding “scientific” theory that was in fact reactionary in every sense of the word. “If Marx’s vision was forward-looking, its premises were curiously archaic,” Muller writes. “As in the civic republican and Christian traditions, self-interest is the enemy of social cohesion and of morality. In that sense, Marx’s thought is a reversion to the time before Hegel, Smith, or Voltaire.”
In fairness to Marx, he does not claim that he wants to return to a feudal society marked by inherited social status and aristocracy. He is more reactionary than that. The Marxist final fantasy holds that at the end of history, when the state “withers away,” man is liberated from all exploitation and returns to the tribal state in which there is no division of labor, no dichotomy of rights and duties.
Marx’s “social science” was swept into history’s dustbin long ago. What endured was the romantic appeal of Marxism, because that appeal speaks to our tribal minds in ways we struggle to recognize, even though it never stops whispering in our ears.
It is an old conservative habit—one I’ve been guilty of myself—of looking around society and politics, finding things we don’t like or disagree with, and then rummaging through an old trunk of Marxist bric-a-brac to spruce up our objections. It is undeniably true that the influence of Marx, particularly in the academy, remains staggering. Moreover, his indirect influence is as hard to measure as it is extensive. How many novels, plays, and movies have been shaped by Marx or informed by people shaped by Marx? It’s unknowable.
And yet, this is overdone. The truth is that Marx’s ideas were sticky for several reasons. First, they conformed to older, traditional ways of seeing the world—far more than Marxist zealots have ever realized. The idea that there are malevolent forces above and around us, manipulating our lives and exploiting the fruits of our labors, was hardly invented by him. In a sense, it wasn’t invented by anybody. Conspiracy theories are as old as mankind, stretching back to prehistory.
There’s ample reason—with ample research to back it up—to believe that there is a natural and universal human appetite for conspiracy theories. It is a by-product of our adapted ability to detect patterns, particularly patterns that may help us anticipate a threat—and, as Mark van Vugt has written, “the biggest threat facing humans throughout history has been other people, particularly when they teamed up against you.”8
To a very large extent, this is what Marxism is—an extravagant conspiracy theory in which the ruling classes, the industrialists, and/or the Jews arrange affairs for their own benefit and against the interests of the masses. Marx himself was an avid conspiracy theorist, as so many brilliant bohemian misfits tend to be, believing that the English deliberately orchestrated the Irish potato famine to “carry out the agricultural revolution and to thin the population of Ireland down to the proportion satisfactory to the landlords.” He even argued that the Crimean War was a kind of false-flag operation to hide the true nature of Russian-English collusion.
Contemporary political figures on the left and the right routinely employ the language of exploitation and conspiracy. They do so not because they’ve internalized Marx, but because of their own internal psychological architecture. In Rolling Stone, Matt Taibbi, the talented left-wing writer, describes Goldman Sachs (the subject of quite a few conspiracy theories) thus:
The first thing you need to know about Goldman Sachs is that it’s everywhere. The world’s most powerful investment bank is a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money. In fact, the history of the recent financial crisis, which doubles as a history of the rapid decline and fall of the suddenly swindled dry American empire, reads like a Who’s Who of Goldman Sachs graduates.
Marx would be jealous that he didn’t think of the phrase “the great vampire squid.”
Meanwhile, Donald Trump has occasionally traded in the same kind of language, even evoking some ancient anti-Semitic tropes. “Hillary Clinton meets in secret with international banks to plot the destruction of U.S. sovereignty in order to enrich these global financial powers, her special-interest friends, and her donors,” Trump said in one campaign speech. “This election will determine if we are a free nation or whether we have only the illusion of democracy, but are in fact controlled by a small handful of global special interests rigging the system, and our system is rigged.” He added: “Our corrupt political establishment, that is the greatest power behind the efforts at radical globalization and the disenfranchisement of working people. Their financial resources are virtually unlimited, their political resources are unlimited, their media resources are unmatched.”
A second reason Marxism is so successful at fixing itself to the human mind is that it offers—to some—a palatable substitute for the lost certainty of religious faith. Marxism helped to restore certainty and meaning for huge numbers of people who, having lost traditional religion, had not lost their religious instinct. One can see evidence of this in the rhetoric used by Marxist and other socialist revolutionaries who promised to deliver a “Kingdom of Heaven on Earth.”
The 20th-century philosopher Eric Voegelin argued that Enlightenment thinkers like Voltaire had stripped the transcendent from its central place in human affairs. God had been dethroned and “We the People”—and our things—had taken His place. “When God is invisible behind the world,” Voegelin writes, “the contents of the world will become new gods; when the symbols of transcendent religiosity are banned, new symbols develop from the inner-worldly language of science to take their place.”9
The religious views of the Romantic writers and artists Marx was raised on (and whom he had once hoped to emulate) ran the gamut from atheism to heartfelt devotion, but they shared an anger and frustration with the way the new order had banished the richness of faith from the land. “Now we have got the freedom of believing in public nothing but what can be rationally demonstrated,” the writer Johann Heinrich Merck complained. “They have deprived religion of all its sensuous elements, that is, of all its relish. They have carved it up into its parts and reduced it to a skeleton without color and light…. And now it’s put in a jar and nobody wants to taste it.”10
When God became sidelined as the source of ultimate meaning, “the people” became both the new deity and the new messianic force of the new order. In other words, instead of worshipping some unseen force residing in Heaven, people started worshipping themselves. This is what gave nationalism its spiritual power, as the Volksgeist, the people’s spirit, replaced the Holy Spirit. The tribal instinct to belong to a sacralized group took over. In this light, we can see how romantic nationalism and “globalist” Marxism are closely related. They are both “re-enchantment creeds,” as the philosopher-historian Ernest Gellner put it. They fill up the holes in our souls and give us a sense of belonging and meaning.
For Marx, the inevitable victory of Communism would arrive when the people, collectively, seized their rightful place on the Throne of History.11 The cult of unity found a new home in countless ideologies, each of which determined, in accord with their own dogma, to, in Voegelin’s words, “build the corpus mysticum of the collectivity and bind the members to form the oneness of the body.” Or, to borrow a phrase from Barack Obama, “we are the ones we’ve been waiting for.”
In practice, Marxist doctrine is more alienating and dehumanizing than capitalism will ever be. But in theory, it conforms to the way our minds wish to see the world. There’s a reason why so many populist movements have been so easily herded into Marxism. It’s not that the mobs in Venezuela or Cuba started reading The Eighteenth Brumaire and suddenly became Marxists. The peasants of North Vietnam did not need to read the Critique of the Gotha Program to become convinced that they were being exploited. The angry populace is always already convinced. The people have usually reached the conclusion long ago. They have the faith; what they need is the dogma. They need experts and authority figures—priests!—with ready-made theories about why the masses’ gut feelings were right all along. They don’t need Marx or anybody else to tell them they feel ripped off, disrespected, exploited. They know that already. The story Marxists tell doesn’t have to be true. It has to be affirming. And it has to have a villain. The villain, then and now, is the Jew.
1 Muller, Jerry Z. The Mind and the Market: Capitalism in Western Thought (p. 5). Knopf Doubleday Publishing Group. Kindle Edition.
2 Muller, Jerry Z. Capitalism and the Jews (pp. 23-24). Princeton University Press. Kindle Edition.
3 Luther’s economic thought, reflected in his “Long Sermon on Usury” of 1520 and his tract On Trade and Usury of 1524, was hostile to commerce in general and to international trade in particular, and stricter than the canonists in its condemnation of moneylending. Muller, Jerry Z. Capitalism and the Jews (p. 26). Princeton University Press. Kindle Edition.
4 Quoted approvingly in Marx, Karl and Engels, Friedrich. “Capitalist Production.” Capital: Critical Analysis of Production, Volume II. Samuel Moore and Edward Aveling, trans. London: Swan Sonnenschein, Lowrey, & Co. 1887. p. 604
5 Sperber, Jonathan. “Introduction.” Karl Marx: A Nineteenth-Century Life. New York: Liveright Publishing Corporation. 2013. xiii.
6 McCloskey, Deirdre. Bourgeois Dignity: Why Economics Can’t Explain the Modern World. Chicago: University of Chicago Press. 2010. p. 142
7 Johnson, Paul. Intellectuals (Kindle Locations 1325-1326). HarperCollins. Kindle Edition.
8 See also: Sunstein, Cass R. and Vermeule, Adrian. “Symposium on Conspiracy Theories: Causes and Cures.” The Journal of Political Philosophy: Volume 17, Number 2, 2009, pp. 202-227. http://www.ask-force.org/web/Discourse/Sunstein-Conspiracy-Theories-2009.pdf
9 Think of the story of the Golden Calf. Moses departs for Mt. Sinai to talk with God and receive the Ten Commandments. No sooner has he left than the Israelites switch their allegiance to a false idol, the Golden Calf, treating a worldly inanimate object as their deity. So it is with modern man. Hence, Voegelin’s quip that for the Marxist “Christ the Redeemer is replaced by the steam engine as the promise of the realm to come.”
10 Blanning, Tim. The Romantic Revolution: A History (Modern Library Chronicles Series Book 34) (Kindle Locations 445-450). Random House Publishing Group. Kindle Edition.
11 Marx: “Along with the constant decrease in the number of capitalist magnates, who usurp and monopolize all the advantages of this process of transformation, the mass of misery, oppression, slavery, degradation and exploitation grows; but with this there also grows the revolt of the working class, a class constantly increasing in numbers, and trained, united and organized by the very mechanism of the capitalist process of production.”
Review of 'Realism and Democracy' by Elliott Abrams
Then, in 1966, Syrian Baathists—believers in a different transnational unite-all-the-Arabs ideology—overthrew the government in Damascus and lent their support to Palestinian guerrillas in the Jordanian-controlled West Bank to attack Israel. Later that year, a Jordanian-linked counter-coup in Syria failed, and the key figures behind it fled to Jordan. Then, on the eve of the Six-Day War in May 1967, Jordan’s King Hussein signed a mutual-defense pact with Egypt, agreeing to deploy Iraqi troops on Jordanian soil and effectively giving Nasser command and control over Jordan’s own armed forces.
This is just a snapshot of the havoc wreaked on the Middle East by the conceit of pan-Arabism. This history is worth recalling when reading Elliott Abrams’s idealistic yet clearheaded Realism and Democracy: American Foreign Policy After the Arab Spring. One of the book’s key insights is the importance of legitimacy for regimes that rule “not nation-states” but rather “Sykes-Picot states”—the colonial heirlooms of Britain and France created in the wake of the two world wars. At times, these states barely seem to acknowledge, let alone respect, their own sovereignty.
When the spirit of revolution hit the Arab world in 2010, the states with external legitimacy—monarchies such as Saudi Arabia, Jordan, Morocco, Kuwait—survived. Regimes that ruled merely by brute force—Egypt, Yemen, Libya—didn’t. The Bashar al-Assad regime in Syria has only held on thanks to the intervention of Iran and Russia, and it is difficult to argue that there is any such thing as “Syria” anymore. What this all proved was that the “stability” of Arab dictatorships, a central conceit of U.S. foreign policy, was in many cases an illusion.
That is the first hard lesson in pan-Arabism from Abrams, now a senior fellow at the Council on Foreign Relations. The second is this: The extremists who filled the power vacuums in Egypt, Libya, Syria, and other countries led Western analysts to believe that there was an “Islamic exceptionalism” at play that demonstrated Islam’s incompatibility with democracy. Abrams effectively debunks this by showing that the real culprit stymieing the spread of liberty in the Middle East was not Islam but pan-Arabism, which stems from secular roots. He notes one study showing that, in the 30 years between 1973 and 2003, “a non-Arab Muslim-majority country was almost 20 times more likely to be ‘electorally competitive’ than an Arab-majority Muslim country.”
Abrams is thus an optimist on the subject of Islam and democracy—which is heartening, considering his experience and expertise. He worked for legendary cold-warrior Senator Henry “Scoop” Jackson and served as an assistant secretary of state for human rights under Ronald Reagan and later as George W. Bush’s deputy national-security adviser for global democracy strategy. Realism and Democracy is about U.S. policy and the Arab world—but it is also about the nature of participatory politics itself. Its theme is: Ideas have consequences. And what sets Abrams’s book apart is its concrete policy recommendations to put flesh on the bones of those ideas, and bring them to life.
The dreary disintegration of the Arab Spring saw Hosni Mubarak’s regime in Egypt replaced by the Muslim Brotherhood, which after a year was displaced in a military coup. Syria’s civil war has seen about 400,000 killed and millions displaced. Into the vacuum stepped numerous Islamist terror groups. The fall of Muammar Qaddafi in Libya has resulted in total state collapse. Yemen’s civil war bleeds on.
Stability in authoritarian states with little or no legitimacy is a fiction. Like Communist police states before them, such regimes were bound to fall eventually, and the longer they took to do so, the longer the opposition sat in a balled-up rage. That, Abrams notes, is precisely what happened in Egypt. Mubarak’s repression gave the Muslim Brotherhood an advantage once the playing field opened up: The group had decades of organizing under its belt, a coherent raison d’être, and a track record of providing health and education services where the state lagged. No other parties or opposition groups had anything resembling this kind of coordination.
Abrams trenchantly concludes from this that “tyranny in the Arab world is dangerous and should itself be viewed as a form of political extremism that is likely to feed other forms.” Yet even this extremism can be tempered by power, he suggests. In a democracy, Islamist parties will have to compromise and moderate or be voted out. In Tunisia, electorally successful Islamists chose the former, and it stands as a rare success story.
Mohamed Morsi’s Muslim Brotherhood took a different path in Egypt, with parlous results. Its government began pulling up the ladder behind it, closing avenues of political resistance and civic participation. Hamas did the same after winning Palestinian elections in 2006. Abrams thinks that the odds of such a bait-and-switch can be reduced. He quotes the academic Stephen R. Grand, who calls for all political parties “to take an oath of allegiance to the state, to respect the outcome of democratic elections, to abide by the rules of the constitution, and to forswear violence.” If they keep their word, they will open up the political space for non-Islamist parties to get in the game. If they don’t—well, let the Egyptian coup stand as a warning.
Abrams, to his credit, does not avoid the Mesopotamian elephant in the room. The Iraq War has become Exhibit A in the dangers of democracy promotion. This is understandable, but it is misguided. The Bush administration made the decision to decapitate the regime of Saddam Hussein based on national-security calculations, mainly the fear of weapons of mass destruction. Once the decapitation had occurred, the administration could hardly have been expected to replace Saddam with another strongman whose depravities would this time be on America’s conscience. Critics of the war reverse the order here and paint a false portrait.
Here is where Abrams’s book stands out: He provides, in the last two chapters, an accounting of the weaknesses in U.S. policy, including mistakes made by the administration he served, and a series of concrete proposals to show that democracy promotion can be effective without the use of force.
One mistake, according to Abrams, is America’s favoring of civil-society groups over political parties. These groups do much good, generally have strong English-language skills, and are less likely to be tied to the government or ancien régime. But those are also strikes against them. Abrams relates a story told by former U.S. diplomat Princeton Lyman about Nelson Mandela. Nigerian activists asked the South African freedom fighter to support an oil embargo against their own government. Mandela declined because, Lyman says, there was as yet no serious, organized political opposition party: “What Mandela was saying to the Nigerian activists is that, in the absence of political movements dedicated not just to democracy but also to governing when the opportunity arises, social, civic, and economic pressures against tyranny will not suffice.” Without properly focused democracy promotion, other tools to punish repressive regimes will be off the table.
Egypt offers a good example of another principle: Backsliding must be punished. The Bush administration’s pressure on Mubarak over his treatment of opposition figures changed regime behavior in 2005. Yet by the end of Bush’s second term, the pressure had let up and Mubarak’s misbehavior continued, with no consequences from either Bush or his successor, Barack Obama, until it was too late.
That, in turn, leads to another of Abrams’s recommendations: “American diplomacy can be effective only when it is clear that the president and secretary of state are behind whatever diplomatic moves or statements an official in Washington or a U.S. ambassador is making.” This is good advice for the current Oval Office occupant and his advisers. President Trump’s supporters advise critics of his dismissive attitude toward human-rights violations to focus on what the president does, not what he says. But Trump’s refusal to take a hard line against Vladimir Putin and his recent praise of Chinese President Xi Jinping’s move to become president for life undermine lower-level officials’ attempts to encourage reform.
There won’t be democracy without democrats. Pro-democracy education, Abrams advises, can teach freedom-seekers to speak the ennobling language of liberty, which is the crucial first step toward building a culture that prizes it. And in the process, we might do some ennobling ourselves.