Why should anyone today want to bother with such a relic of the past as “the theory of social-fascism”? One reason is that it once bothered us so much; another is that it may be bothering us again.
Historically, the so-called theory of social-fascism and the practice based on it constituted one of the chief factors contributing to the victory of German fascism in January 1933. Yet this theory has not been given any careful study, and the existing material deals most inadequately with what is still a terribly painful and appalling subject. I hope in what follows to fill out some part of the story, if only in outline, and thus to make it more intelligible both to those old enough to have lived through that dark time and to those young enough to have heard of it without quite knowing what it was all about.
But I would be less than candid if I did not confess that I was moved to look back at social-fascism because it is no longer of merely historical interest. In its original incarnation, it helped to bring about such a vast and shattering catastrophe that it once seemed such ideas could never again be revived on a large and dangerous scale. Yet this is exactly what has been happening. The term itself has not come back into general use, but the thinking behind it again has its devotees.
A new revolutionary generation has raised questions that are not altogether new. Who is the “main enemy”? Are “reformists” more dangerous than “reactionaries”? Is liberal democracy nothing but a “mask” for bourgeois dictatorship or even some form of totalitarianism? Is it necessary to provoke violent confrontations in order to unmask this type of liberalism? If a revolutionary minority strives to destroy a democratic, even a “bourgeois-democratic,” order, is it necessarily going to be the main beneficiary—or even avoid the fate of the democratic order it has helped to pull down?
Answers to such questions made the difference between life and death for millions of people a few decades ago. In what follows, I have tried to restudy and reconstruct the earlier experience as a historical phenomenon that deserves to be better known for its own sake and that presents us with some large and difficult problems of special interest today.
Most students of Communist history associate the theory of social-fascism with Stalin and Stalinism.1 There is good reason for this, but the theory itself had deeper roots.
The first seeds of the theory of social-fascism were sown as far back as 1922-24—and not by Stalin. The term itself was reminiscent of other uncomplimentary terms—“social-patriots,” “social-chauvinists,” “social-imperialists,” and “social-traitors”—used by Lenin during the First World War to denote those Social-Democrats who wished to fight for the defense, rather than the defeat, of their own countries. These older terms provided a precedent for an analogous use of the word “social” in connection with the postwar phenomenon of “fascism.”
The problem of fascism first arose in reference to Italy. In October 1922, Benito Mussolini staged his “March on Rome” and formed his first government. Mussolini's success brought the subject of fascism sharply to the attention of the Communist International, which had previously given it little consideration. An Italian Commission was set up at the Comintern's Fourth Congress in November—December 1922, and its resolution referred to the fascists as “the most radical wing” of the bourgeoisie. But the old Italian Socialist party was blamed most for Mussolini's victory. “The real forerunner of fascism was reformism,” the resolution declared. “The treachery of the reformists is primarily responsible for the great sufferings of the Italian proletariat.”2
Though this inquest on the Italian debacle tried to blame fascism on “reformism,” it was still quite far from the peculiar amalgam that made up social-fascism. But something of the sort must have been in the air in Communist circles throughout the world because an American Communist came much closer to the idea of social-fascism the following year. In 1923, Earl Browder, then a Communist trade-union leader in Chicago, wrote an introduction to a pamphlet by Andrés Nin, an early Spanish Communist leader. In it Browder hazarded the opinion that, as in Italy, where Mussolini had formerly belonged to the Italian Socialist party, “so we may expect the real fascist leadership in America to spring from the Gompers bureaucracy [in the American Federation of Labor].”3 This “affinity between the AF of L bureaucracy and the fascist idea,” as Browder put it, was much closer to the idea of social-fascism, but Browder did not try to generalize.
It did not take long, however, for the generalization to make its appearance. The central idea now arose in connection with events in Germany.
In October 1923, the German Communists suffered two disastrous setbacks. The German army took over the states of Saxony and Thuringia in which Communists had entered the local governments. A few days later a short-lived Communist uprising was put down in Hamburg. The German Communist leadership, in November 1923, accused General Hans von Seeckt, the German army commander, of establishing a military dictatorship which represented the victory of fascism over the republic.4
But during the post-mortem held in Moscow in January 1924, the Comintern's first chieftain, Grigori Zinoviev, was not satisfied with this interpretation. It implied that “fascism” in the person of General von Seeckt had also defeated the German Social-Democrats, who had been most instrumental in founding the German Republic at Weimar in February 1919. For Zinoviev, the Social-Democrats, four of whom served in the government then headed by Gustav Stresemann, were among the “fascist” victors. In this view, German fascism was represented by Seeckt and Stresemann, not by Adolf Hitler, whose first bid for power, the “beer hall Putsch” in Munich, was also put down by Seeckt and Stresemann in the same month of November 1923. With France occupying the Ruhr at the same time, the Stresemann government was beset by so many enemies from Left and Right and abroad that its desperate efforts to survive did not readily lend themselves to an ideological interpretation of such far-reaching significance. Nevertheless, Zinoviev chose this occasion to present Social-Democracy in a new historical role, not merely in Germany but internationally.
If Seeckt and Stresemann were the real “fascists,” what were the Social-Democrats implicated with them? In answering this question, Zinoviev brought together a rather mixed group—Marshal Joseph Pilsudski of Poland, like Mussolini a backsliding Socialist; Filippo Turati and Ludovico D'Aragona of Italy, two moderate Socialists (the latter but not the former later went over to Mussolini); a Socialist minister in the Bulgarian government of the day, who soon resigned; and J. Ramsay MacDonald, then about to form the first British Labour government. Zinoviev leaped from Germany to international Social-Democracy in a passage which contained the idea of social-fascism in essence, even if he inverted the term. As the first statement of the theory, it is worth giving in Zinoviev's own words, which I have tried to render as close as possible to his oratorical style:
What are Pilsudski and the others? Fascist Social-Democrats. Were they this ten years ago? No. It goes without saying that they were already then fascists in nuce. But they have become fascists precisely because we are living in the epoch of revolution. What is Italian Social-Democracy? It is a wing of the fascists; Turati is a fascist Social-Democrat. Could this statement have been made five years ago? No. Think of a group of academicians who gradually developed into a bourgeois force. Italian Social-Democracy is now a fascist Social-Democracy. Take Turati, D'Aragona, or the present Bulgarian government Socialists. They were opportunists, but could one say ten years ago that they were fascist Social-Democrats? No, that would have been stupid then. Now they are that.
But it was MacDonald who inspired Zinoviev to coin the phrase which summed up the theory of social-fascism in its first phase:
You may hurl insults at MacDonald: You are a traitor, a servant of the bourgeoisie. But we must understand in what period we are living. International Social-Democracy has now become a wing of fascism.5
The Comintern's resolution of January 19, 1924, on the German events followed this line: “The leading ranks of German Social-Democracy are at the present moment nothing but a part of German fascism in a Socialist mask.”6 At the Comintern's Fifth Congress in June—July 1924, Zinoviev repeated the charge that German Social-Democracy had been “converted into a wing of fascism.”7
Zinoviev, then, was the author or at least the first exponent of what was essentially the theory of social-fascism, even if he did not yet use the exact term. The theory was related to a use of the term “fascism” which was not aimed at Hitler at all; it was intended for the German government headed by Stresemann, whom no one ever again thought of as a fascist. Zinoviev's thinking was rooted in the Leninist tradition of regarding Social-Democrats, reformists, Mensheviks, and the like as “agents of the bourgeoisie” (as Lenin called the Mensheviks as late as 1922),8 not as a permissible tendency in the labor and revolutionary movements. When Zinoviev at the Comintern's Fourth Congress in November 1922 stigmatized the Second (Social-Democratic) International as “the chief support of the bourgeoisie,”9 without which capitalism would collapse, he was speaking as a good Leninist and in the spirit that was initially reflected in his theory of social-fascism. If the bourgeoisie was going fascist, it was unthinkable for him not to blame the Social-Democrats, a reflex in the Bolshevik movement for at least a decade. This is not to say that Lenin or Zinoviev would have driven the theory and practice of social-fascism as far as Stalin drove them later. But the original theory arose in Lenin's own lifetime and was sponsored by the man who had been his chief co-worker for the previous fifteen years. It seemed at first to be an adaptation, not an aberration, of orthodox Leninism.
In 1924, Stalin was Zinoviev's ally in the already raging internecine struggle against Trotsky in the Russian party. Stalin had not yet come forth with an original idea, and there is no reason to suppose that he was instrumental in giving birth to the theory of social-fascism. But two months after the Fifth Congress, Stalin took up this theme for the first time publicly and even added a literary embellishment.
Stalin's contribution occurred in an article entitled “Concerning the International Situation,” published on September 20, 1924, and famous because it was his first effort in this field. In it he wrote: “Social Democracy is objectively the moderate wing of fascism.” And furthermore: “They are not antipodes, they are twins.”10 Later the theory of social-fascism was traced back to this article in Communist references to its genealogy. The images, “wing” and “twins,” were repeated endlessly. Zinoviev's role was blotted out, perhaps to the benefit of his reputation.
The first to put the words “social” and “fascism” together in that order was apparently Heinz Neumann, one of Stalin's early German protégés. In the Comintern's German organ, dated October 7, 1924, Neumann contributed an article subtitled “The Newest Form of Social Fascism in Germany.” The subtitle referred to the formation by the German Social-Democratic party of the Reichsbanner Schwarz-Rot-Gold, a semi-military republican defense force named after the colors of the Republican flag. This type of organization was common in Germany at the time; the Communists themselves had one in their Roter Frontkämpferbund (Red Front Fighters League). By now, however, the idea was being applied somewhat indiscriminately, and Neumann proclaimed that “the Reichsbanner is the classic form of social-fascism.”11
For the next two years, Zinoviev still clung to what was essentially his brainchild. In the spring of 1925, he spoke of Italian fascism as “a synthesis of the capitalist bourgeoisie and Social-Democracy” and of “Social-Democracy as a wing of fascism.”12 At the end of 1925, he insisted that the “top layer” of Social-Democracy was correctly characterized as the “third party” of the bourgeoisie, the “left wing of the bourgeoisie,” and a “wing of fascism.”13 But after his break with Stalin and forced retirement from the Comintern in 1926, this line seems temporarily to have gone out of fashion. By applying it to figures as far apart as General von Seeckt, Marshal Pilsudski, Turati, and Ramsay MacDonald, Zinoviev made social-fascism into little more than a catch-all for Communism's enemies and opponents from moderate Left to far Right.
Zinoviev's successor in the Comintern, Nikolai Bukharin, was rather more relaxed, flexible, and opportunistic in his approach to the Social-Democratic movement. In the years of his leadership, 1926 to 1928, the Communists continued to fall behind the Social-Democratic parties and trade unions in membership and influence, and Bukharin encouraged efforts to decrease the gap between them and the Communists. Such a policy discouraged use of a term, “social-fascist,” that was anathema to the Social-Democrats.
Nevertheless, the theory of social-fascism began to make its comeback at the end of Bukharin's reign in the Comintern. In order to understand how this came about, it is well to recall two other terms on which Communist policy was then based—the “third period” and “class against class.” These seemingly technical terms may still, for those who remember, embody the terrifying reality of the years that enabled Hitler to take power.
The first to make its appearance was “class against class.” It was introduced at the Ninth Plenum of the Comintern in February 1928 (a “plenum” was an enlarged meeting of the top leadership or, in effect, a miniature world congress). The slogan signified that there were now only two classes facing each other in mortal combat—the proletariat and the bourgeoisie. The Communist parties alone represented the interests of the proletariat. All other parties, movements, and groups represented the bourgeoisie. Of the latter, the most dangerous were the Social-Democrats (they were still being called that, not “social-fascists”) and all species of “reformists.” This excommunication from the true family of the proletariat included not only the Social-Democratic parties but also the trade-union movements associated with them. “Class against class” was first applied in Great Britain, where it was taken to mean that the British Communists could no longer support the Labour party electorally. Thus the British Communist leaders were persuaded in Moscow to put up, for the first time, their own candidates against the Labour party.14
Even in Germany, which had the largest and strongest Communist party outside Russia, “class against class” meant that the Communists consigned to an enemy class the organizations which contained the vast majority of workers. In 1930, for example, the German Communist party reported a membership of 135,808; the German Social-Democratic party, 1,021,777. The Communist trade-union opposition claimed a following of 136,000. But the “free trade unions,” associated with the Social-Democratic party, contained 4,716,569 members; the so-called salaried employees (angestellten) unions, 1,620,970; the Christian (Catholic) trade unions, 778,863; and another group (Deutsche Gewerkvereine), 163,302. The Communist vote was about half that of the Social-Democrats—4,592,100 to 8,577,700. Even at the Communist high point and Social-Democratic low point in November 1932, the latter's vote was still considerably larger—7,248,000 to 5,980,200.15 Many more workers, of course, voted for the Catholic Center and other parties. The theory of social-fascism, then, put by far the largest number of organized workers into quasi-fascist parties which were so far gone that, for the Communists, there was only one thing to do—destroy them.
The “third period” completed theoretically what “class against class” began tactically. In brief, the post-World War I years were divided into three periods. The first, from 1917 to 1923, was that of a “revolutionary wave.” After 1923, the capitalist world entered a recovery phase, or what the Comintern called the “partial stabilization of capitalism.” At the end of 1927, Stalin suddenly announced that the second period was coming to an end, that capitalist stabilization was “collapsing,” and that the world was on “the eve of a new revolutionary upsurge.”16 The immediate result of Stalin's pronouncement was the “class against class” slogan early the following year. But the “third period” was officially unveiled at the Sixth World Congress in the summer of 1928 by Bukharin in a manner that displeased Stalin. Bukharin failed to emphasize the decline of capitalism strongly enough, and so the Tenth Plenum the following year took it up again and gave it a more Stalinist slant. Now the “third period” was defined in purely negative terms as one “leading inevitably” to imperialist wars, to great class conflicts, to a new revolutionary “upward swing,” and to “great anti-imperialist revolutions.”17
The third period was supposed to be one of revolution; it proved to be a period of rampant, barbarous counterrevolution; and for this miscalculation, the chief article of faith of the third period—the theory of social-fascism—was not a little responsible.
The second incarnation of the theory of social-fascism took place between the Sixth World Congress in the summer of 1928 and the Tenth Plenum in the summer of 1929.
Bukharin first broached the theme at the Sixth World Congress. He did so, however, with the same kind of hesitancy that marked his sponsorship of the third period at the same time. After saying that “there is not the slightest doubt that Social-Democracy reveals a social-fascist tendency,” Bukharin immediately cautioned that “this is merely a tendency and not a completed process, for it would be a mistake to lump Social-Democracy and fascism together.” The Congress's “theses” stopped somewhat short of this by declaring that Social-Democratic ideology “has many points of contact with fascism” and that the Social-Democratic parties employed fascist methods “in a rudimentary form.”18
Bukharin's dictum that Social-Democracy was revealing a “social-fascist tendency” was not followed up the rest of that year. In February 1929, however, a leading Russian member of the Comintern's Secretariat, Dmitri Z. Manuilsky, picked up the thread again by remarking that “economic democracy” was “the fascist slogan of the Social-Democracy.” But this was still an isolated reference. The first serious restatement of the theory apparently came the following month. It appeared in an article by a leading German Communist, Wilhelm Koenen, in the Comintern organ, International Press Correspondence, of March 8, 1929. In it, Koenen discussed alleged political pressure in Germany to transform the parliamentary Weimar regime into a dictatorship. Significantly, he never once mentioned Hitler or the National-Socialist party in this connection, as if they were too unimportant to be in his consciousness. Instead, he argued that Italian fascism might not be the model for German fascism in the sense that the latter did not need a “strong man a la Mussolini.” He immediately offered as an alternative road “the fascist tendency which the SPG [Social-Democratic party of Germany] leaders and the SPG trade-union bureaucracy is revealing more sharply every day.” He concluded: “Social-fascism is becoming more and more the open form of expression of the SPG.”19
This theme was then taken up in an unsigned article entitled “Social-Fascism in Germany” in the Comintern's theoretical organ, The Communist International, dated May 1929 and obviously issued earlier. The article was mainly based on events of the previous March and represents a political line probably adopted at about that time. This article elaborated on the idea presented by Koenen about the German road to fascism:
It would, however, be incorrect to conclude from this [criticism of the parliamentary regime], that Germany is directly faced with the establishment of a fascist government a la Mussolini. Even fascist methods are subject to the changes of time and circumstances, i.e., to the development of capitalism, and are adapted to the economic and political situation of the country in question. The great change that has taken place is the growth of fascism within social-democracy, and in German social-democracy particularly the German capitalists have found a strong support with increasingly definite fascist tendencies.
The article went on to charge that “the Social-Democrats are now concentrated on proving to finance capital that it can very well set up its dictatorship without attacking the Weimar constitution and the ‘foundations of democracy.’” And it added: “Thus, in every respect, a synthesis of social-democracy and fascism is provided for the regime, in a political form, of the dictatorship of finance capital.”20
Once such a line was adopted, occasions were not lacking for applying it, and the resulting incident was then used to justify the line. This pattern was followed in the spring of 1929.
Traditionally, the Berlin trade unions sponsored a single, united May Day demonstration. That year, however, the Communists made it known that they could not bring themselves to march, even under their own banners, in the same demonstration as the Social-Democrats. In March, the Prussian Social-Democratic Minister of the Interior, Albert Grzesinski, issued a warning against outdoor demonstrations and marches “which represent an immediate danger to public security,” aimed at both the Communists and right-wing nationalists. Afraid of possible clashes between rival street demonstrations, the Social-Democratic administration decided to ban all outdoor May Day demonstrations, Communist and Social-Democratic alike. Whether the situation justified this kind of precaution is extremely doubtful. Tactically the ban undoubtedly misfired. The Communists were spoiling for a fight, and the prohibition gave them a suitable occasion for it. Walter Ulbricht, then head of the Berlin-Brandenburg district of the party, later boasted that the May Day street fights and political strikes were “necessary prerequisites for bringing about an acute revolutionary situation.”21 In any event, the Communists decided to defy the ban; their militants clashed with the police who, from all reports, behaved with unnecessary brutality; blood flowed in the streets. In two working-class districts, Wedding and Neukölln, the Communists erected barricades which held for two days. The bulk of the Berlin working class remained aloof, but the Communists were strong enough to give the impression, for about three days, of a minor civil war in Berlin. In retaliation, the Social-Democratic administration outlawed the Communists' semi-military organization.
Thus the theory of social-fascism had inspired the Communists to separate themselves from the traditionally united May Day demonstration, which then resulted in sanguinary street battles that were used to confirm the validity of the theory of social-fascism. The official East German Communist history of the Weimar Republic characteristically reverses the order of events and makes the May Day street battles the reason for the Communist position that German Social-Democracy “had developed into social-fascism.”22 But this tragic May Day in Berlin was effect rather than cause. As we have seen, the revival of the theory of social-fascism had taken place before May Day of 1929, and no such event was needed to apply the same theory to Britain or elsewhere. An obscure Berlin Polizeipräsident, Karl Zörgiebel, became the symbol of the new archfiends—the “social-fascists.” Millions of people around the world who had only a vague idea of what had happened in Berlin shuddered at the mention of that peculiarly unprepossessing name. The German Communists never again took to the streets to battle the police and build barricades against the later Brüning, Papen, Schleicher, or Hitler regimes.
May Day 1929 was the high point of German Communist belligerency—against a Social-Democratic regime. It did not help matters that the same Social-Democratic regime in Prussia banned the right-wing semi-military Stahlhelm in October 1929 for holding extensive military maneuvers in violation of the constitution. In 1929 the theory of social-fascism said that the social-fascists were introducing fascism in Germany because the outright fascists were too weak for the task, and, therefore, the capitalists had elected to work through the social-fascists. Once this theory was implanted in the Communist movement, events could be used to bear it out, never to cast doubt on it.
In 1929, especially in the first half of the year, it was economically and politically premature to locate the threat of fascism in Germany in fascism itself. Unemployment rose gradually, but so did the average wage. The massive despair brought on by the world economic crisis took over only after the Wall Street crash in October of that year. Hitler's following was still much too limited to be considered threatening. He had succeeded in electing only twelve deputies out of a total of 491 in May 1928. The Social-Democrats had gained so heavily in that election that one of their most respected leaders, Hermann Müller, headed a coalition government formed in June. A Social-Democratic administration prevailed in Prussia, by far the largest state, containing three-fifths of the country's population.
Later, social-fascism was held responsible for the victory of German fascism on the ground that it had split the working class or had tolerated bourgeois regimes which paved the way for fascism. But this was not the way the theory of social-fascism was presented in 1929. It then insisted that social-fascism was the specific form fascism was actually taking.
In Britain, it was a good deal harder to work up the same kind of case against the bloodless MacDonald regime. In 1929, the British Communist party claimed no more than 4,000 members against over 3,000,000 for the Labour party. Some British Communist leaders were understandably reluctant to cut themselves off from the Labour party and to pretend that they, not the Labourites, represented the British working class. But this feat was accomplished, not without considerable prodding from the Comintern, by the simple expedient of reclassifying the Labour party as one of the three capitalist parties and, indeed, the worst of all.23 If social-fascism could be applied to Britain it could be applied everywhere—and was. The theory of social-fascism helped to bring about a catastrophe in Germany; it merely produced a caricature in Britain.
Yet Germany and Britain provided the main justification for the new line at the Tenth Plenum in July 1929. The first report, made by Otto Kuusinen, the loyal Finnish servitor of whatever Russian happened to rule in the Comintern, took the line that the difference between fascists and social-fascists was that the latter used a “smoke screen.” But, he went on, the more social-fascism developed, the closer it came to being “pure” fascism. He thought that British Labourism could be thought of as social-fascism “in the caterpillar stage” whereas German Social-Democracy was already in the “butterfly stage.” To unmask social-fascism, he said, was the most important duty. The second report, by one who spoke with even greater authority, Dmitri Z. Manuilsky, one of the three top Russians in the secretariat, stated that the German Social-Democratic party was already ready to establish an “open bourgeois dictatorship” by itself. Béla Kun, then the ranking Hungarian member in the Comintern hierarchy, raised the possibility that social-fascism might be the typical form of fascism in the more advanced capitalist countries. In any event, he declared, any struggle between social-fascism and fascism was merely a struggle “between two methods of fascisation.” The Russian leader of the world Communist trade-union movement, Solomon A. Lozovsky, took to task the idea, which he said was very widespread in Communist circles, that the broad masses of Social-Democracy were less reactionary than their leaders. He insisted that the leaders, top, middle, and bottom, and even some of the rank-and-file, with the exception of some insignificant groups, were going fascist.24
If this was madness, it was methodical. Everything flowed from the proposition that the capitalist world was teetering on the brink of collapse. Therefore, a new revolutionary wave was imminent. Therefore, the Communist parties had to prepare themselves to “fight for power.” Therefore, those who could be counted on to oppose all power to the Communists were agents and representatives of the bourgeoisie. Therefore, the Communists viewed as their greatest enemies their rivals for the support of the working class—the Social-Democrats. Therefore, the destruction of the mass base of the Social-Democratic parties and allied trade unions became the key to the coming struggle for power.
Hitler was not yet dreamt of in this philosophy. The Communists in 1929 were concerned with the fact that the German government was then headed by a Social-Democrat, Hermann Müller, and the British government for the second time by a Labourite, J. Ramsay MacDonald. The official Protokoll of the Tenth Plenum contains 953 pages. Its index lists Hitler twice—once in reference to 1920-23 and again to 1920-21. Müller gets six listings, and MacDonald twenty-one. The main theme of the Plenum was why they were far more dangerous than the “open” fascists who had not yet any great strength in Germany or Britain.
I have dwelt on this 1929 version of social-fascism because it may have more than the later period to tell us today. It demonstrates that the theory did not originate in any real fear of the kind of fascism that Hitler represented. In Britain, the theory clearly lived a life of its own, with but the slightest respect for reality. In Germany, the best the Communists could do to breathe life into it was the May Day affair in Berlin, which the Communists themselves helped to provoke, which had nothing to do with fascists as such, and which proved to be a fairly isolated incident. The Berlin ban on street marches was soon lifted, and the anti-war demonstrations on August 1, 1929, went off without conflict. After Hitler came to power, the Müller government appeared in retrospect to be almost ethereally democratic, and many Germans, including the Communists, would have given virtually anything to get it back.
In 1928-29, the theory of social-fascism derived from Communist doctrine, not from the existing reality of some tie-up between fascists and Social-Democrats. The doctrine said that capitalism was collapsing and a new revolutionary wave was about to flood over it. With Social-Democrats in office in two key countries, they and not the fascists or even the extreme right-wing nationalists seemed to be the barrier holding back the new wave. The Social-Democrats were all the more exasperating because a majority of the German and British working classes persisted in supporting them. Yet the real issue was not Social-Democracy. The enemy was something far more general and fundamental.
For the theory of social-fascism was based on the proposition that “bourgeois democracy” and “fascism” were merely different forms of the “dictatorship of the bourgeoisie.” One was “masked,” the other “naked.” The “democratic form” of the bourgeois dictatorship was considered by far the more dangerous and detestable of the two because it was supposedly harder to expose.
The real enemy, then, was “democratic forms.” The theory of social-fascism made the German and British Social-Democrats of the period the main carriers of this contagious disease. But they were not the only ones, and it could be applied to many others in different circumstances. In the United States, it was discovered in the “New Deal” of Franklin D. Roosevelt.25 In a later decade, similar thinking could make “liberals” stand-ins for Social-Democrats.
The rationale of social-fascism, then, explains why it could be whipped up in 1929 before the emergence of a major fascist threat. The theory was intrinsically designed to destroy the “democratic forms” of bourgeois society, not to hold back fascism. It succeeded in doing the former far better than it did the latter. It was intended to justify a Communist dictatorship in the name of the proletariat by making the alternative a “masked” Social-Democratic dictatorship or a naked fascist dictatorship, both equally in the interests of the bourgeoisie. If the only choice were between dictatorships, then the Communist variety would not appear to be so fearsome or so great a change.
That the theory of social-fascism was adapted to fighting democracy, not fascism, was soon demonstrated. In the parliamentary elections of September 1930, Adolf Hitler's National Socialist German Workers Party emerged for the first time as a major political force. The National Socialists, or Nazis, increased their 1928 representation in the Reichstag from 12 to 107, and their popular vote shot up from 809,000 to over 6,400,000.
Yet the Communists—who also gained, but much less—were not greatly alarmed. By using the terms “dictatorship” and “fascism” so loosely and broadly, to cover so much ground from Ramsay MacDonald to Adolf Hitler, they hopelessly confused what dictatorship and fascism were. In order to make their own dictatorship appear to be less fearsome and not so great a change, they performed the same service for Hitler's.
The Müller government had been a coalition of moderate parties excluding the extreme Right and extreme Left, as those terms were understood in Germany at the time. It was replaced in March 1930 by a government headed by the (Catholic) Center party leader, Heinrich Brüning, who dropped the Social-Democrats in favor of right-wingers. This reshuffle forced the Social-Democrats into opposition just before Hitler scored his first great electoral success. As economic and political paralysis fastened on Germany, Brüning increasingly ruled by means of “emergency decrees” which solved nothing and satisfied no one. The “democratic forms” of the German Republic needed defense from every possible quarter long before Hitler came to power.
That the Communists should have underestimated Hitler's threat before September 1930 is understandable. But why afterward?
For one thing, the Communists had already decided by 1929 that the Müller government was introducing fascism. The chief Communist slogan in the September 1930 elections was: “Fight Against the Fascist Dictatorship—For the Dictatorship of the Proletariat.”26 After the election, with Brüning in power, the German Communist organ, Die Rote Fahne, announced: “The bourgeois-democratic state form of the German Republic has ceased to exist. We have a fascist Republic.”27
If Germany was going fascist under Müller and Brüning, Hitler was not needed to do the job. The Social-Democrats, in effect, made Brüning the “lesser evil,” and the Communists made him the greater one. After the September 1930 elections, the Social-Democratic leaders decided on a policy of “toleration” vis-à-vis the Brüning regime on the theory that the alternative was a Nazi takeover.28 In retrospect, this decision was probably one of the fatal miscalculations; it appears to have been based on little more than an abdication of responsibility and failure of will. For the next year and a half, it made the Social-Democrats, however heavy of heart, tacit accomplices of Brüning's “presidential government,” which drifted farther and farther away from what had been a parliamentary regime. On the other hand, the Communists went to the opposite extreme and made Brüning so fascist-minded that his replacement by Hitler was unnecessary. Or, in the words of the German Communist leader, Ernst Thälmann: “The more energetically we unmask the nature of the fascist policy of the Brüning government, the more convincingly we prove to the masses that this bourgeois government is itself striving for the actualization of the fascist dictatorship, and need not be replaced by Hitler or [Alfred] Hugenberg [then leader of the extreme right-wing Nationalists], as far as this is concerned, then the more thoroughly do we refute and shatter Social-Democratic agitation, etc., etc.” (my italics, T.D.).29
The obvious alternative to Social-Democratic “toleration” of Brüning would have been some measure of Social-Democratic-Communist collaboration, or at least toleration. After 1930, the Social-Democrats and Communists had between them over one-third of the votes and almost two-fifths of the seats in the Reichstag. In November 1931, one of the foremost Social-Democratic spokesmen, Rudolf Breitscheid, made an overture to the Communists to reach an understanding. Thälmann brushed off the offer as “a new demagogic maneuver.” Die Rote Fahne called it “a cunning game” and demanded: “Intensification of the fight against the Social-Democracy along the whole line.”30 This appears to have been the last time the breach between the two parties might have been healed.
For another thing, the theory of social-fascism made Hitler weaker than either Müller or Brüning. This conclusion inevitably followed from the proposition that a “masked” bourgeois dictatorship was harder to overthrow than a “naked” one. At the Tenth Plenum, Kuusinen rebuked those Communists who still thought that fascism might not weaken the bourgeoisie. “In reality,” he instructed them, “the fascisation of the state regime is absolutely no indication that the position of the bourgeoisie is being strengthened.”31 In Germany, where it counted most, the Communists went farthest in discounting the fascist danger. Thälmann assured the Eleventh Plenum in June 1931 that Hitler had reached the high point of his influence at the September 1930 elections and could only go downwards. The fascist offensive, he said, was merely a “secondary fact” that reflected the “revolutionary upsurge,” and, therefore, a sign that the proletarian revolution was reaching “a higher stage of development.”32 Another outstanding German Communist leader, Hermann Remmele, declared in the Reichstag on October 14, 1931: “We are not afraid of the fascists. They will shoot their bolt sooner than any other government.”33
Inherent in this fatal reasoning was a still more suicidal implication—that Hitler was “unconsciously” serving the cause of the proletarian revolution by tearing the mask away from bourgeois democracy. However reactionary he appeared to be, according to this logic, his historic role was “objectively” revolutionary. In this way, as we shall see, the Communists actually rationalized the accession of Hitler to power. But it was built into the very fabric of the theory of social-fascism. It was impossible to maintain that “democratic forms” were the main enemy, that Hitler's predecessors were already introducing fascism, that Hitler excelled all of them in tearing the mask away from bourgeois democracy, that the Nazi regime was the weakest form of bourgeois dictatorship—without preferring Hitler to Müller or Brüning, without making Hitler's victory into a quasi-victory for the proletarian revolution, and without making Hitler do the work of the Communists, “unconsciously” and “objectively.”34
The pre-Hitler regimes of 1930-32 present a real problem. Brüning's regime, and even more so the two that followed in the last half of 1932, headed by the execrable Franz von Papen and the futile General Kurt von Schleicher, were no longer functioning democracies even by German standards. The question arises whether the distinction between democratic and undemocratic governmental forms becomes less or more important in precisely such a pre-fascist period. In practice, as Brüning, Papen, and Schleicher whittled down the democratic structure, the differences among democratic forms, authoritarian forms, “presidential” forms, dictatorial forms, and fascist forms became increasingly blurred, partly because no one had yet lived through the type of fascism that Hitler produced. But they could not have been blurred at all if there had not been any real differences among them. Those responsible for blurring these “forms” were Weimar's guilty men; those responsible for denying the real differences among them were no less guilty. Both helped fascism take power.
The lesson would seem to be that it is dangerous to use the term “fascism”—or today “totalitarianism”—too lightly and too indiscriminately. The problem is how to preserve a very sizable margin of difference in order to make room for the full enormity and horror of fascism in power. To reduce this margin is to make fascism more familiar, more tolerable, more domesticated. By making fascism cover all the ground from Müller to Hitler, the Communists demonized the inoffensive Müller and humanized the demonic Hitler.
In the two years before Hitler came to power, the theory of social-fascism managed to remain virtually intact.
In 1931, some mistakes were noted, some criticisms made. The mistakes, “in the main,” the Eleventh Plenum ordained, “consist of drawing, after the liberal fashion, a contrast between fascism and bourgeois democracy and between the parliamentary form of the dictatorship of the bourgeoisie and its open fascist forms.” It was, in fact, not merely liberal, it was specifically Social-Democratic to draw “a contrast between the ‘democratic’ forms of the dictatorship of the bourgeoisie and fascism.”
This criticism served to reinforce rather than weaken the basic idea of social-fascism. The other criticisms were purely tactical in nature. One advised against completely identifying social-fascism with fascism. A second admonished against completely identifying “the social-fascist upper stratum with the rank-and-file Social-Democratic masses of workers.” The latter criticism had proven especially costly in Germany, where the Communists had attacked the Social-Democratic rank and file as “little Zörgiebels.” These criticisms indicated that such complete identifications had been common for the past two years.
But these were changes in nuance, not in substance. They were designed to make it easier to expose, isolate, and overcome Social-Democracy, which was given as the “immediate task” of the Communist parties. The main report to the Plenum, delivered by the chief Russian member of the Comintern's secretariat, Manuilsky, still accused German Social-Democracy of “striving to usher in the fascist dictatorship by the ‘dry road,’” and international Social-Democracy of assisting the bourgeoisie “to establish the fascist form of dictatorship.”35 In the 121 pages of Manuilsky's report, Hitler was not mentioned even once. The “theses” of the Plenum mentioned Hitler only once in passing and in parentheses, but devoted pages to the need for destroying Social-Democracy. Hitler could be virtually ignored if it was true, as the theses said, that “the successful struggle against fascism in Germany calls for the timely exposure of the Brüning government as the government which is introducing the fascist dictatorship.”36 If ruling by emergency decrees was all there was to fascism, one did not need to wait for the outlawing of all other parties, Gleichschaltung, concentration camps, Führerprinzip, racial doctrine, Lebensraum, genocidal anti-Semitism, and all the rest.
By the end of 1931, the German Communists were sufficiently impressed with the threat of fascism to admit that it might be playing an offensive as well as a defensive role. “We have regarded fascism, including the growth of the National Socialist movement, too one-sidedly and too mechanically, only as the antithesis of the revolutionary upsurge, as the defensive action of the bourgeoisie against the proletariat,” Thälmann admitted in self-criticism. This view of fascism was correct but inadequate, he allowed. “We have not taken sufficiently into account the fact that fascism bears within it two elements, the element of the offensive of the ruling class and also the element of its disintegration; that the fascist movement can lead to a victory of the proletariat, as well as to a defeat of the proletariat.”37 It had taken only three years of the most intensive application of the theory of social-fascism to get this concession from him. Hitler in power was little more than a year away.
Yet 1932 made little difference to the theory. The Reichstag elections in July of that year gave the Nazis 230 seats, a gain of 123, and doubled their popular vote. The Social-Democrats went down to 133 seats, a loss of ten. The Communists gained twelve, from 77 to 89. Without the Nazis or Communists, a majority government had become impossible, and neither could or would take part. Brüning had already given way to von Papen who was willing to take in Hitler as vice-chancellor, and Hitler would not settle for anything less than total power.
At this juncture, in September of that cursed year, the Twelfth Plenum met in Moscow. According to the Comintern's spokesman, Kuusinen, the “revolutionary upsurge” had moved on to an even higher stage. The only united front was the “united front from below” which he defined as one between the “Communist vanguard” and the non-revolutionary masses for the purpose of isolating the Social-Democratic “agents of the bourgeoisie.” He admitted that “for a long time, the Communist party of Germany underrated the National Socialist movement; and in part neglected to struggle against it.” But social-democracy was still “the main social support of the bourgeoisie” and “we ought to direct our main offensive against social-democracy.” The only concession he would make was that this offensive should be waged “in such a way that we may win over the Social-Democratic workers.” In his concluding remarks, four months before Hitler took power, he reiterated: “The main blow, as I have already stated in my report, must in the present period of preparing for the revolution be directed against social-fascism and the reformist trade-union bureaucracy” (italics in original).38
Soon the Communists had their wish. Two elections were held in 1932, on July 31 and on November 6. In the latter, the Social-Democrats lost ground, from 133 Reichstag seats to 121, and from a total vote of 7,959,700 to 7,248,000. The Communists gained almost as much, from 89 seats to 100, and from 5,282,600 votes to 5,980,200. For the first time in four years, the Nazis fell back, from 230 seats to 196, and from 13,745,800 votes to 11,737,000. Between them, the Social-Democrats and Communists still managed to hold well over one-third of the total vote. The Nazis were slipping, and a real Social-Democratic-Communist united front might conceivably have blocked the way to Hitler's power.
But the theory of social-fascism held firm. In its post-election statement, the Central Committee of the Communist party of Germany declared: “The decline of the Social-Democratic party in no way reduces its role as the main social buttress of the bourgeoisie, but on the contrary, precisely because the Hitler party is at present losing followers from the ranks of the workers, instead of penetrating still more deeply into the proletariat, the importance of the Social-Democratic party for the fascist policy of finance capital increases.”39
Ten weeks later, on January 30, 1933, Adolf Hitler gained power.
Twenty-one years later, Walter Ulbricht, the present master of East Germany, admitted that the Communists had concentrated their main fire on the Social-Democrats, not on Hitler, Brüning, Papen, or Schleicher, “without sufficiently distinguishing between the Social-Democratic leadership and the Social-Democratic membership.”40 In all those years, Ulbricht could think of nothing else that had been wrong with the theory of social-fascism.
The reader may have had enough. But there is reason for not stopping here with the career of this seemingly incredible theory. In order to grasp how truly perverse and pertinacious it was, it is necessary to follow its course to the end. For unlikely as it may seem to those who did not live through it, the theory of social-fascism lived on after Hitler took power.
For this purpose, I have made up a little anthology that takes the subject into 1934. The various items require little comment, and I have merely grouped them under appropriate subject headings. All of these quotations have been taken from the most authoritative Communist sources and spokesmen for a period of over a year after January 1933.
The Revolutionary Upsurge
“The fact of the Hitler government coming into power enormously accelerates the maturing of the revolutionary crisis in Germany. Germany is on the threshold of a revolutionary crisis (italics in original).”41
“The fascist dictatorship is not only incapable of solving the social and national conflicts, but it is also incapable of really consolidating its political rule.”42
“In spite of the most ruthless and bloody terror, a revolutionary upsurge is growing among the working class, which is completely deprived of all rights by fascism.”43
“After the establishment of the fascist dictatorship, the revolutionary mass movement is experiencing a fresh upsurge.”44
“The revolutionary uprising of the German working class—that is the perspective in Germany.”45
“The present stage in Germany, in Austria, is no longer simply a period of struggle to win over the majority of the working class, but a period of the formation of a revolutionary army for decisive class battles for power, a period of the mobilization of such cadres as are prepared to make any sacrifice in order to destroy the existing regime, in order to lead the proletariat to victory.”46
The Usefulness of Fascism
“The establishment of an open fascist dictatorship, by destroying all the democratic illusions among the masses and liberating them from the influence of Social-Democracy, accelerates the rate of Germany's development toward proletarian revolution.”47
“The bourgeoisie is compelled to abandon the democratic façade and to put the naked dictatorship of violence in the foreground. This development makes it easier for those carrying out a correct, united front, anti-fascist policy to overcome the illusions, which have been fostered by Social-Democracy for decades, with regard to the role of the State, and with regard to economic democracy and the policy of the ‘lesser evil.’”48
“Even fascist demagogy can now have a twofold effect. It can, in spite of the fascists, help us to free the masses of the toilers from the illusions of parliamentary democracy and peaceful evolution. . . .”49
“The rapid fascisation of the capitalist governments naturally confronts us with added difficulties, but the bitterness of class antagonisms and the complete bankruptcy of the Second and Amsterdam [trade union] Internationals offer us tremendous new possibilities” (italics in original).50
“The present wave of fascism is not a sign of the strength, but a sign of the weakness and instability of the whole capitalist system. . . . Germany was and remains the weakest link in the chain of imperialist states. . . . That is why the proletarian revolution is nearer in Germany than in any other country.”51
“Fascism does not only make the struggle of the working class more difficult; it also accelerates the processes of the maturing of the revolutionary crisis.”52
The Main Enemy
“The Social-Democracy proves once again that it is inseparably allied with capitalism, that it still remains the chief buttress of the bourgeoisie, even when the latter go over to measures of open violence, including repressive measures against Social-Democracy.”53
“If the fascists are persecuting Social-Democracy as a party, they are beating it as a faithful dog that has fallen sick. They are beating it because they know that it is incapable of resistance, that, when it is beaten, it will come forward all the quicker to the service of the bourgeois dictatorship, even in the open fascist form.”54
“The complete exclusion of the social-fascists from the state apparatus, and the brutal suppression even of Social-Democratic organizations and their press, does not in any way alter the fact that Social-Democracy is now, as before, the chief support of the capitalist dictatorship.”55
“History now offers a real possibility of liquidating the mass influence of the Social-Democratic party, which is responsible for the victory of fascism and which is the main support of the bourgeoisie, and the possibility of establishing the unity of the labor movement.”56
“Social-Democracy continues to play the role of the main social prop of the bourgeoisie also in the countries of open fascist dictatorship.”57
“In spite of all their disagreements, the fascists and social-fascists are, and remain, twins, as Comrade Stalin remarked. . . . There are no disagreements between the fascists and the social-fascists as far as the necessity for the further fascisation of the bourgeois dictatorship is concerned. The Social-Democrats are in favor of fascisation, provided the parliamentary form is preserved.”58
“Even after the prohibition of its organization, Social-Democracy remains the main social prop of the bourgeoisie. . . . The present situation [December 1933] in the German labor movement offers us the possibility of destroying the mass influence of the SPG [Social-Democratic party of Germany] and of reestablishing the unity of the labor movement on a revolutionary basis.”59
“Every revolutionary must know that the path toward the annihilation of fascism, the path to the proletarian revolution and to its victory can only be the path that leads via the organizational and ideological abolition of the influence of Social-Democracy.”60
“It is, therefore, necessary above all to make a clear stand in regard to Social-Democracy, and first and foremost in regard to ‘Left’ Social-Democracy, this most dangerous foe of Communism” (italics in original).61
“We must destroy the Social-Democratic influence on the working masses and we must not tolerate any vacillations in our ranks in the struggle against the Social-Democracy as the chief social support of the bourgeoisie.”62
I hope the reader has not skipped too quickly over this collection of seemingly quaint, musty quotations. Not so long ago, men paid for them with their lives, Communists and Social-Democrats alike. In March 1933, the “mask” was finally torn from the Weimar constitution. A newly elected Reichstag voted, 441 to 94, to give Hitler dictatorial powers. All 94 negative votes were cast by Social-Democrats (the remaining 27 Social-Democratic deputies and all 81 Communists could not vote, being already in exile, in hiding, or under arrest). The Communist party was officially outlawed on March 31; the trade unions were smashed in May; the Social-Democratic party was banned on June 22. Thereafter, Hitler made no distinction between Communists and Social-Democrats; he took their lives, cast them into concentration camps or, if they were lucky, drove them into exile, impartially.
Yet the theory of social-fascism survived many more months. It was finally discarded in 1934 in order to make way for the Popular Front line adopted the following year. At the Seventh World Congress in July-August 1935, speakers admitted that it had been a mistake to hold the view that the Müller government had worked for fascisation and that the Brüning government was already a “government of fascist dictatorship,” to have underrated the Nazi movement on the assumption that it could not take power, to have concentrated the main fire against Social-Democracy instead of the growing menace of fascism.63 These mea culpas quietly interred the theory of social-fascism which then became so embarrassing that the Communist movement has gone to extraordinary lengths to expunge it from the historical record. There is almost nothing in its entire history that the Communist movement is more ashamed of and so unwilling to defend.
But this was no ordinary aberration, and it demands far more study and reflection than it has received. Hitler's accession to power in January 1933 was the decisive dividing line, the crucial turning point, of the inter-war years. It led directly to World War II, from which our most oppressive and intractable international problems still derive. The responsibility for Hitler's victory was undoubtedly widespread. I know of no party, no economic interest, no secular or religious group, and no foreign country, including our own, which can escape some measure of culpability. But of all of them, the theory and practice of social-fascism was the most devastating, the most unnecessary, and the most self-destructive.
The problem it raises is: What are the limits, if any, of criticism and opposition in a democratic or, if you will, a “bourgeois-democratic” society—even from a revolutionary standpoint?
It was one thing to criticize the Social-Democrats for banning the 1929 May Day street demonstrations or the Brüning regime for governing so highhandedly. There was a sense, I believe, in which it could be reasonably argued that such policies undermined or endangered the Republic and made it more vulnerable to Hitlerism. But it was quite another thing to charge that these policies proved the Social-Democrats and the Brüning regime were themselves “introducing fascism” or “masked” forms of fascism. This type of criticism could only aim at bringing the democratic house down on all alike, including its revolutionary critics.
Such critics could not be interested in whether wrong policies undermined or endangered the Republic; they were themselves doing all in their power to undermine and endanger the Republic. Indeed, they assumed that the Republic was a greater enemy than anything that could follow it. They were chiefly concerned with drawing a line of blood between themselves and all others to the “right” of them, including the most “left-wing” of the Social-Democrats. This line made sense only on the assumption that the Communists were going to seize power themselves. In this case they knew that they were going to suppress Social-Democrats as well as Nazis, as the Russian Bolsheviks had suppressed Social-Democrats as well as Tsarists. The theory of social-fascism was a rationalization of Communist dictatorship in the guise of rationalizing everyone else into a variety of fascism.
The Communists gained ground in Germany from 1928 to 1932. But they never came close to winning a majority of the German working class, let alone a majority of the German people. In order to make their bid for power, they opened a chasm between themselves and the rest of the German working class and most of the German people, which, once they realized that their bid had failed, they could not close. They tried vainly in the last half of 1932 to tinker with the practical implications of the theory of social-fascism, but it was always too little and too late. Then they paid as heavily as or even more heavily than those whom they had once defiled as “social-fascists” and whose cooperation they were ultimately forced to seek.
By 1935, the German Communist leader, Wilhelm Pieck, had to avow that “we Communists fight with might and main for every scrap of democratic liberty,” and the new head of the Communist International, Georgi Dimitrov, gave assurances that “in the capitalist countries we defend and shall continue to defend every inch of bourgeois-democratic liberties which are being attacked by fascism and bourgeois reaction, because the interests of the class struggle of the proletariat so dictate.”64 Whatever these words were worth for the future, they were a pitiless commentary on the Communist past. An official obituary was never written for the theory of social-fascism; it was buried silently, furtively, and shamefully, as if its very name would dishonor those who might utter it.
This was how the original theory of social-fascism came to an end. It amounts to a case history of an extraordinary political aberration. And this is precisely what is so important and fascinating about it. Other movements, other revolutionary movements, have shown an amazing devotion to fanciful and self-defeating ideas. But these traits have usually marked relatively small movements which harmed themselves more than anyone else. There is hardly a comparable example in this century of a great movement—and the Communist movement may well be the greatest historical phenomenon specifically of the 20th century—in the grip of a political pathology capable of causing such havoc to itself and to so many others on such a monstrous scale. So extreme a divorce between ideology and reality deserves far more attention than it has received. It may be especially commended to the attention of those who are flirting with a new anti-liberal version of the theory of social-fascism.
1 In his earlier book, World Communism, the late Franz Borkenau wrote that “here and there the idea had been raised within the Communist ranks that a fascist policy could be carried through by a socialist party,” but this refers to the careers of Mussolini and Pilsudski. Borkenau then places “social-fascism” itself in 1929 (Norton, 1939, pp. 341-42). In his later book, European Communism, Borkenau flatly declared that “in 1929, it was discovered that the Social-Democrats were—'social-fascists' “(Harper & Row, 1953, p. 70). The year 1929 was significant because it was the year Nikolai Bukharin was removed from the Comintern and Stalin took over completely. Günter Nollau states: “The Social-Democrats, so ardently courted by the Communists from 1924 to 1926, were known as ‘Social Fascists’ from 1929 onwards” (International Communism and World Revolution, Praeger, 1961, p. 108). Isaac Deutscher also identified “social-fascism” with Stalin's removal of Bukharin from the Comintern in 1929 (Stalin, Vintage Books edition, 1961, p. 405). Ruth Fischer said that the “new theorem of ‘social-fascism’ which Stalin enunciated in person” came in 1929-1933 (Stalin and German Communism, Harvard University Press, 1948, p. 655). The only book I have found which does not seem to have made this mistake is C. L. R. James, World Revolution 1917-1936 (Seeker & Warburg, 1937, pp. 309-10). An early article by Sidney Hook, “The Fallacy of the Theory of Social Fascism” (The Modern Monthly, July 1934), is still worth reading. It has been reprinted in The Anxious Years, Louis Filler, ed. (Capricorn Books, 1963, pp. 319-35).
2 Resolutions and Theses of the Fourth Congress of the Communist International (Communist Party of Great Britain, 1923), p. 105.
3 Introduction by Earl Browder to Andrés Nin, Struggle of the Trade Unions Against Fascism (Trade Union Educational League, 1923), p. 6. Nin left the Spanish Communist party in 1931 or 1932 and later became a leader of the revolutionary but anti-Communist POUM in Catalonia. He was assassinated by Soviet secret police agents in Spain in 1937 in the midst of the Civil War (Hugh Thomas, The Spanish Civil War, Harper Colophon edition, 1963, pp. 452-55). As for anticipations of the idea of social-fascism before 1924, this is a neglected field and more intensive research may turn up others. The two given, however, indicate that the essential idea must have developed between 1922 and 1924.
4 Ossip K. Flechtheim, Die KPD in der Weimarer Republik (Bollwerk Verlag, 1948), p. 102.
5 Die Lehren der deutschen Ereignisse (Verlag Carl Hoym Nachf., 1924), pp. 69-70. This pamphlet was originally issued for party members only.
Zinoviev subsequently published a “Preliminary Draft Proposal for Theses on the German Question” as an article in The Communist International (No. 2, new series, undated, but probably February 1924). It contained a section on the same theme, with somewhat different phraseology and personal references (the Italian Modigliani and the Germans Ebert and Severing were added). Among the formulas in this article were: “In its gradual degeneration, the entire international social-democracy has become objectively nothing but a variety of fascism” (p. 93), and “the leading strata of German social-democracy have themselves turned fascist” (p. 97).
The career of Marshal Pilsudski illustrates how difficult it was to apply this line. After classifying him as a fascist, the Polish Communists made common cause with him and assisted his coup of May 1926, not without the Comintern's knowledge (M. K. Dziewanowski, The Communist Party of Poland, Harvard University Press, 1959, pp. 118-19).
6 Ibid., pp. 105-6.
7 G. Sinowjew, Die Weltpartei des Leninismus (speeches at the Fifth Congress) [Verlag Carl Hoym, 1924], p. 40.
8 V. I. Lenin, Selected Works, Vol. X, p. 310 (March 1922).
9 Protokoll des Vierten Kongresses der Kommunistischen Internationale (Verlag der Kommunistischen Internationale, 1923), p. 63.
10 Stalin, Works (Foreign Languages Publishing House, [Moscow], 1953), Vol. VI, pp. 294-95.
11 Internationale Presse-Korrespondenz (Berlin), October 7, 1924, pp. 1724-25. Stalin's article on the “International Situation” appeared in the same organ dated September 30, 1924, only about a week earlier. The English version of Neumann's article may be found in International Press Correspondence, October 23, 1924, pp. 838-39, with the subtitle as title.
12 Protokoll der Erweiterten Exekutive, March-April 1925 (Carl Hoym Nachf., 1925), p. 40.
13 Protokoll der Erweiterten Exekutive, November-December 1926 (Carl Hoym Nachf., 1927), p. 563.
14 The process of persuasion may be followed in Communist Policy in Great Britain (Communist Party of Great Britain, 1928), which gives the main speeches and resolutions. However, the first use of “class against class” seems to have come in France in an “Open Letter to Party Members” in l'Humanité of November 24, 1927. But it was here intended for the opposite purpose—to induce the French Socialist party to enter into an electoral alliance.
15 These figures are taken from an East German Communist source: Siegfried Vietzke and Heinz Wohlgemuth, Deutschland und die deutsche Arbeiterbewegung in der Zeit der Weimarer Republik 1919-1933 (Dietz Verlag, 1966), pp. 324-25, 329.
16 Stalin, Works (Foreign Languages Publishing House [Moscow], 1954), Vol. X, pp. 291, 297.
17 Protokoll: 10. Plenum des Exekutivkomitees der Kommunistischen Internationale, 1929, p. 412.
18 International Press Correspondence, September 4, 1928, p. 1039 (Bukharin) and November 23, 1928, p. 1571 (theses).
19 Ibid., February 22, 1929, p. 140 (Manuilsky), and March 8, 1929, p. 227 (Koenen).
20 “Social-Fascism in Germany,” The Communist International, May 1929, pp. 529-30.
21 Protokoll des 10. Plenums, op. cit., p. 368. This passage may also be found in Ulbricht's collected works, Zur Geschichte der deutschen Arbeiterbewegung (Dietz Verlag, 1954), Vol. I, p. 444. Otherwise, this speech has been ruthlessly cut and bowdlerized to remove all traces of the term “social-fascism” which was generously sprinkled about in the original. Without any warning to the reader, the material in this volume, supposedly documentary, has been altered to conform to the postwar line, and the tell-tale words, “social-fascist,” have been changed to the more respectful “Social-Democratic.”
22 Vietzke and Wohlgemuth, op. cit., p. 167.
23 In his autobiography, Serving My Time (Lawrence & Wishart, 1940), Harry Pollitt, the long-time general secretary of the British party, tells how he ran against Ramsay MacDonald in the 1929 election. The text of his election address, given in full in the book, makes little sense without some reference to the “third period,” “class against class,” and “social-fascism,” which Pollitt carefully avoided mentioning by the time the book was published. This address stated in the true style of its period: “The Labour party is the most dangerous enemy of the workers because it is a disguised party of capitalism” (italics in original). The vote in Seaham Harbour, a largely miners' constituency, was 35,615 for MacDonald, candidate of the disguised party of capitalism, and 1,451 for Pollitt, candidate of the only party of the proletariat.
24 Protokoll: 10. Plenum des Exekutivkomitees der Kommunistischen Internationale, July 3-19, 1929 (Verlag Carl Hoym Nachf., 1929?), pp. 39-40 (Kuusinen); 63 (Manuilsky); 191 (Béla Kun); 390-91 (Lozovsky). It should be noted, as a curiosity, that some of the foremost Comintern leaders earned their credentials as refugees from revolutionary failures. Kuusinen came to Russia after the Finnish defeat of 1918, Béla Kun after the Hungarian fiasco of 1919, and the same was true in different circumstances of the Italian, Palmiro Togliatti, then known as Ercoli, and the Bulgarian, Georgi Dimitrov.
25 “It is now necessary to point out that the Roosevelt ‘new deal’ program represents not only the strengthening of the open fascist tendencies in America, but also that it is quite consciously and systematically supporting and developing social-fascist ideas, organizations, and leaders. Roosevelt has a very special need for the social-fascists” (The Communist, August 1933, p. 734).
26 Flechtheim, op. cit., p. 164.
27 Die Rote Fahne, December 2, 1930, cited in International Press Correspondence, December 4, 1930, p. 1125.
28 Das Ende der Parteien 1933, edited by Erich Matthias and Rudolf Morsey (Droste Verlag, 1960), pp. 105-9. For good reason the Social-Democratic leader, Otto Wels, confessed in August 1933 that his party had been “driven” by events more than any other party and had been “really only an Objekt of developments” (p. 101).
29 International Press Correspondence, June 30, 1931, p. 611.
30 The Communist International, December 15, 1931, p. 717 (Thälmann); International Press Correspondence, November 19, 1931, p. 1056 (from Die Rote Fahne).
31 Protokoll des 10. Plenums, op. cit., p. 38.
32 International Press Correspondence, June 30, 1931, pp. 607 and 612.
33 Evelyn Anderson, Hammer or Anvil (Left Book Club, 1945), p. 144.
34 The idea that the fascists might in some way help the revolution goes back to Lenin. In his speech to the Comintern's Fourth Congress on November 13, 1922—the next-to-last of his life—Lenin remarked: “Perhaps the fascists in Italy, for example, will render us a great service by explaining to the Italians that they are not yet sufficiently enlightened and that their country is not yet insured against the Black Hundreds. Perhaps this will be very useful” (Selected Works, Vol. X, p. 333). The “Black Hundreds” were extra-legal armed bands organized in 1905 to defend the Tsarist regime. Lenin's statement was made only two weeks after Mussolini's takeover, before he or anyone else had much experience with fascism in power. Nevertheless, Stalin took the same line, much more strongly, and with much less excuse, eleven years later after Hitler took power.
35 D. Z. Manuilsky, The Communist Parties and the Crisis of Capitalism (Modern Books, 1931), pp. 37 and 73.
36 XIth Plenum of the Executive Committee of the Communist International: Theses, Resolutions and Decisions (Modern Books, 1931), pp. 8 (Hitler); 9 (main mistakes); 15-16 (immediate task); 18 (tactical criticisms).
37 Ernst Thälmann, International Press Correspondence, December 10, 1931, p. 1137, from a condensed version of an article in Die Internationale, November-December 1931 (italics in original).
38 O. Kuusinen, Prepare for Power (Workers Library Publishers, 1932), pp. 35, 85-87, 96, 106-7, 141. The same line was taken by the theses and resolutions, Capitalist Stabilization Has Ended (Workers Library Publishers, 1932), pp. 10-13.
39 International Press Correspondence, November 17, 1932, p. 1100.
40 Walter Ulbricht, op. cit., p. 455. This admission was slipped into a special Author's Note of one-half page to say something about “social-fascism” in Ulbricht's collected works. It apparently serves the function of covering up for the omission of the term where it should have appeared in what is, after all, a collection of documents, in order not to open Ulbricht to the charge that he did not mention it at all. But this confession did not inhibit the East German Communist historians, Vietzke and Wohlgemuth, from claiming that the German Communist party genuinely changed its united front line in an appeal for “anti-fascist action” dated May 25, 1932 (op. cit., pp. 261-64). Unfortunately for this claim, they give the complete text of the appeal in an appendix, and thereby spoil the effect. Among other things, the appeal stated that “only the Communist Party stands at the head of the anti-fascist struggles of the German working class and fights for your demands, etc.” (p. 518). This was typical, of course, of our old friend, the “united front from below,” which was meant to take away the rank and file from the Social-Democratic party and put it under Communist leadership.
41 V. Knorin (head of the Central European secretariat of the Comintern), International Press Correspondence, March 9, 1933, p. 263.
42 “Resolution of the CC of the Communist Party of Germany on the Situation and the Immediate Tasks,” ibid., June 2, 1933, p. 529.
43 “The Present Situation in Germany and the Tasks of the CPG,” ibid., October 27, 1933, pp. 1040-41.
44 Ibid., November 3, 1933, p. 1065.
45 Wilhelm Pieck, ibid., January 30, 1934, p. 116.
46 V. Knorin, ibid., April 23, 1934, pp. 634-35.
47 “The Situation in Germany,” Resolution of the Presidium of the ECCI (Executive Committee of the Communist International), adopted April 1, 1933, ibid., April 13, 1933, p. 378.
48 Resolution of the CC of CPG, ibid., June 2, 1933, p. 527.
49 Kuusinen, ibid., January 30, 1934, p. 109.
50 Lozovsky, ibid., March 19, 1934, p. 474.
51 V. Knorin, ibid., April 23, 1934, p. 635.
52 Manuilsky, ibid., May 7, 1934, p. 712.
53 Knorin, ibid., March 9, 1933, p. 263.
54 Fritz Heckert (German representative to the Comintern), ibid., April 21, 1933, p. 418.
55 Resolution of the CC of CPG, ibid., June 9, 1933, p. 547.
56 Resolution of Polit-Bureau of the CC of CPG, ibid., November 3, 1933, p. 1064.
57 Theses of XIII Plenum, ibid., January 5, 1934, p. 13.
58 Kuusinen, ibid., January 30, 1934, p. 109.
59 Wilhelm Pieck (then Secretary of the German Communist party), ibid., January 30, 1934, pp. 124-25. (According to Babette L. Gross, widow of the former German Communist leader, Willy Münzenberg, who was expelled in 1938, Pieck told a personal friend in early 1933: “If the Nazis come to power, they will be at the end of their rope in two months, and then it will be our turn!” And Fritz Heckert wrote to Münzenberg in Moscow, January 1933: “The Nazis will perform no miracles and will be at the end of their rope in no time” [The Comintern—Historical Highlights, edited by Milorad M. Drachkovitch and Branko Lazitch, Praeger, for Hoover Institution Publications, 1966, p. 117]. The recollections and documents in this collection make it one of the prime sources on Comintern history.)
60 Fritz Heckert, ibid., March 19, 1934, p. 463.
61 Knorin, ibid., April 23, 1934, p. 637.
62 Pieck, ibid., May 7, 1934, p. 748.
63 Ibid., August 15, 1935, p. 902 (Pieck); August 20, 1935, p. 961 (Dimitrov); August 28, 1935, p. 1055 (Franz [Dahlem?]).
64 Ibid., August 8, 1935, p. 855 (Pieck); August 20, 1935, p. 963 (Dimitrov).
Sex and Work in an Age Without Norms
In the Beginning Was the ‘Hostile Work Environment’
In 1979, the feminist legal thinker Catharine MacKinnon published a book called Sexual Harassment of Working Women. Her goal was to convince the public (especially the courts) that harassment was a serious problem affecting all women whether or not they had been harassed, and that it was discriminatory. “The factors that explain and comprise the experience of sexual harassment characterize all women’s situation in one way or another, not only that of direct victims of the practice,” MacKinnon wrote. “It is this level of commonality that makes sexual harassment a women’s experience, not merely an experience of a series of individuals who happen to be of the female sex.” MacKinnon was not only making a case against clear-cut instances of harassment, but also arguing that the ordinary social dynamic between men and women itself created what she called “hostile work environments.”
The culture was ripe for such arguments. Bourgeois norms of sexual behavior had been eroding for at least a decade, a fact many on the left hailed as evidence of the dawn of a new age of sexual and social freedom. At the same time, however, a Redbook magazine survey published a few years before MacKinnon’s book found that nearly 90 percent of the female respondents had experienced some form of harassment on the job.
MacKinnon’s views might have been radical—she argued for a Marxist feminist jurisprudence reflecting her belief that sexual relations are hopelessly mired in male dominance and female submission—but she wasn’t entirely wrong. The postwar America in which women like MacKinnon came of age offered few opportunities for female agency, and the popular culture of the day reinforced the idea that women were all but incapable of it.
It wasn’t just the perfect housewives in the midcentury mold of Donna Reed and June Cleaver who “donned their domestic harness,” as the historian Elaine Tyler May wrote in her social history Homeward Bound. Popular magazines such as Good Housekeeping, McCall’s, and Redbook reinforced the message; so did their advertisers. A 1955 issue of Family Circle featured an advertisement for Tide detergent that depicted a woman with a rapturous expression on her face actually hugging a box of Tide under the line: “No wonder you women buy more Tide than any other washday product! Tide’s got what women want!” Other advertisements infantilized women by suggesting they were incapable of making basic decisions. “You mean a woman can open it?” ran one for Alcoa aluminum bottle caps. It is almost impossible to read the articles or view the ads without thinking they were some kind of put-on.
The competing view of women in the postwar era was equally pernicious: the objectified pinup or sexpot. Marilyn Monroe’s hypersexualized character in The Seven Year Itch from 1955 doesn’t even have a name—she’s simply called The Girl. The 1956 film introducing the pulchritudinous Jayne Mansfield to the world was called The Girl Can’t Help It. The behavior of Rat Pack–era men has now been so airbrushed and glamorized that we’ve forgotten just how thoroughly debased their treatment of women was. Even as we thrill to Frank Sinatra’s “nice ’n’ easy” style, we overlook the classic Sinatra movie character’s enjoying an endless stream of showgirls and (barely disguised) prostitutes until forced to settle down with a killjoy ball-and-chain girlfriend. The depiction of women either as childish wives living under the protection of their husbands or brainless sirens sexually available to the first taker was undoubtedly vulgar, but it reflected a reality about the domestic arrangements of Americans after 1945 that was due for a profound revision when the 1960s came along.
And change they did, with a vengeance. The sexual revolution broke down the barriers between the sexes as the women’s-liberation movement insisted that bourgeois domesticity was a prison. The rules melted away, but attitudes don’t melt so readily; Sinatra’s ball-and-chain may have disappeared by common consent, but for a long time it seemed that the kooky sexpot of the most chauvinistic fantasy had simply become the ideal American woman. The distinction between the workplaces of the upper middle class and the singles bars where they sought companionship was pretty blurred.
Which is where MacKinnon came in—although if we look back at it, her objection seems not Marxist in orientation but almost Victorian. She described a workplace in which women were unprotected by old-fashioned social norms against adultery and general caddishness and found themselves mired in a “hostile environment.” She named the problem; it fell to the feminist movement as a whole to enshrine protections against it. They had some success. In 1986, the U.S. Supreme Court embraced elements of MacKinnon’s reasoning when it ruled unanimously in Meritor Savings Bank v. Vinson that harassment that was “sufficiently severe or pervasive” to create “a hostile or abusive work environment” was a violation of Title VII of the Civil Rights Act of 1964. The U.S. Equal Employment Opportunity Commission issued rules advising employers to create procedures to combat harassment, and employers followed suit by establishing sexual-harassment policies. Human-resource departments spent countless hours and many millions of dollars on sexual-harassment-awareness training for employees.
With new regulations and enforcement mechanisms, the argument went, the final, fusty traces of patriarchal, protective norms and bad behavior would be swept away in favor of rational legal rules that would ensure equal protection for women in the workplace. The culture might still objectify women, but our legal and employment systems would, in fits and starts, erect scaffolding upon which women who were harassed could seek justice.
But as the growing list of present-day harassers and predators attests—Harvey Weinstein, Louis C.K., Charlie Rose, Michael Oreskes, Glenn Thrush, Mark Halperin, John Conyers, Al Franken, Roy Moore, Matt Lauer, Garrison Keillor, et al.—the system appears to have failed the people it was meant to protect. There were searing moments that raised popular awareness about sexual harassment: Anita Hill’s testimony about U.S. Supreme Court nominee Clarence Thomas in 1991; Senator Bob Packwood’s ouster for serial groping in 1995. There was, however, still plenty of space for men who harassed and assaulted women (and, in Kevin Spacey’s case, men) to shelter in place.
This wasn’t supposed to happen. Why did it?
Sex and Training
What makes sexual harassment so unnerving is not the harassment. It’s the sex—a subject, even a half-century into our so-called sexual revolution, about which we remain deeply confused.
The challenge going forward, now that the Hollywood honcho Weinstein and other notoriously lascivious beneficiaries of the liberation era have been removed, is how to negotiate the rules of attraction and punish predators in a culture that no longer embraces accepted norms for sexual behavior. Who sets the rules, and how do we enforce them? The self-appointed guardians of that galaxy used to be the feminist movement, but it is in no position to play that role today as it reckons not only with the gropers in its midst (Franken) but the ghosts of gropers past (Bill Clinton).
The feminist movement long ago traded MacKinnon’s radical feminism for political expedience. In 1992 and 1998, when her husband was a presidential candidate and then president, Hillary Clinton covered for Bill, enthusiastically slut-shaming his accusers. Her sin was and is at least understandable, if not excusable, given that the two are married. But what about America’s most glamorous early feminist, Gloria Steinem? In 1998, Steinem wrote of Clinton accuser Kathleen Willey: “The truth is that even if the allegations are true, the President is not guilty of sexual harassment. He is accused of having made a gross, dumb and reckless pass at a supporter during a low point in her life. She pushed him away, she said, and it never happened again. In other words, President Clinton took ‘no’ for an answer.” As for Monica Lewinsky, Steinem didn’t even consider the president’s behavior with a young intern to be harassment: “Welcome sexual behavior is about as relevant to sexual harassment as borrowing a car is to stealing one.”
The consequences of applying to Clinton what Steinem herself called the “one-free-grope” rule are only now becoming fully visible. Even in the case of a predator as malevolent as Weinstein, it’s clear that feminists no longer have a shared moral language or the credibility with which to condemn such behavior. Having tied their movement’s fortunes to political power, especially the Democratic Party, it is difficult to take seriously their injunctions about male behavior on either side of the aisle now (just as it was difficult to take seriously partisans on the right who defended the Alabama Senate candidate and credibly accused child sexual predator Roy Moore). Democrat Nancy Pelosi’s initial hemming and hawing about denouncing accused sexual harasser Representative John Conyers was disappointing but not surprising. As for Steinem, she’s gone from posing undercover as a Playboy bunny in order to expose male vice to sitting on the board of Playboy’s true heir, VICE Media, an organization whose bro-culture has spawned many sexual-harassment complaints. She’s been honored by Rutgers University, which created the Gloria Steinem Chair in Media, Culture, and Feminist Studies. One of the chair’s major endowers? Harvey Weinstein.
In place of older accepted norms or trusted moral arbiters, we have weaponized gossip. “S—-y Media Men” is a Google spreadsheet created by a woman who works in media and who, in the wake of the Weinstein revelations, wanted to encourage other women to name the gropers among us. At first a well-intentioned effort to warn women informally about men who had behaved badly, it quickly devolved into an anonymous unverified online litany of horribles devoid of context. The men named on the list were accused of everything from sending clumsy text messages to rape; Jia Tolentino of the New Yorker confessed that she didn’t believe the charges lodged against a male friend of hers who appeared on the list.
Others have found sisterhood and catharsis on social media, where, on Twitter, the phrase #MeToo quickly became the symbol for women’s shared experiences of harassment or assault. Like the consciousness-raising sessions of earlier eras, the hashtag supposedly demonstrated the strength of women supporting other women. But unlike in earlier eras, it led not to group hugs over readings of The Feminine Mystique, but to a brutally efficient form of insta-justice meted out on an almost daily basis against the accused. Writing in the Guardian, Jessica Valenti praised #MeToo for encouraging women to tell their stories but added, “Why have a list of victims when a list of perpetrators could be so much more useful?” Valenti encouraged women to start using the hashtag as a way to out predators, not merely to bond with one another. Even the New York Times has gone all-in on the assumption that the reckoning will continue: The newspaper’s “gender editor,” Jessica Bennett, launched a newsletter, The #MeToo Moment, described as “the latest news and insights on the sexual harassment and misconduct scandals roiling our society.”
As the also-popular hashtag #OpenSecret suggests, this #MeToo moment has brought with it troubling questions about who knew what and when—and a great deal of anger at gatekeepers and institutions that might have turned a blind eye to predators. The backlash against the Metropolitan Opera in New York is only the most recent example. Reports of conductor James Levine’s molestation of teenagers have evidently been widespread in the classical-music world for decades. And, as many social-media users hinted with their use of the hashtag #itscoming, Levine is not the only one who will face a reckoning.
To be sure, questioning and catharsis are welcome if they spark reforms such as crackdowns on the court-approved payoffs and nondisclosure agreements that allowed sexual predators like Weinstein to roam free for so long. And they have also brought a long-overdue recognition of the ineffectiveness of so much of what passes for sexual-harassment-prevention training in the workplace. As the law professor Lauren Edelman noted in the Washington Post, “There have been only a handful of empirical studies of sexual-harassment training, and the research has not established that such training is effective. Some studies suggest that training may in fact backfire, reinforcing gendered stereotypes that place women at a disadvantage.” One specific survey at a university found that “men who participated in the training were less likely to view coercion of a subordinate as sexual harassment, less willing to report harassment and more inclined to blame the victim than were women or men who had not gone through the training.”
Realistic Change vs. Impossible Revolution
Because harassment lies at the intersection of law, politics, ideology, and culture, attempts to re-regulate behavior, either by returning to older, more traditional norms, or by weaponizing women’s potential victimhood via Twitter, won’t work. America is throwing the book at foul old violators like Weinstein and Levine, but aside from warning future violators that they may be subject to horrible public humiliation and ruination, how is all this going to fix the problem?
We are a long way from Phyllis Schlafly’s ridiculous remark, made years ago during a U.S. Senate committee hearing, that “virtuous women are seldom accosted,” but Vice President Mike Pence’s rule about avoiding one-on-one social interactions with women who aren’t his wife doesn’t really scale up in terms of effective policy in the workplace, either. The Pence Rule, like corporate H.R. policies about sexual harassment, really exists to protect Pence from liability, not to protect women.
Indeed, the possibility of realistic change is made almost moot by the hysterical ambitions of those who believe they are on the verge of bringing down the edifice of American masculinity the way the Germans brought down the Berlin wall. Bennett of the Times spoke for many when she wrote in her description of the #MeToo newsletter: “The new conversation goes way beyond the workplace to sweep in street harassment, rape culture, and ‘toxic masculinity’—terminology that would have been confined to gender studies classes, not found in mainstream newspapers, not so long ago.”
Do women need protection? Since the rise of the feminist movement, it has been considered unacceptable to declare that women are weaker than men (even physically), yet, as many of these recent assault cases make clear, this is a plain fact. Men are, on average, physically larger and more aggressive than women; this is why for centuries social codes existed to protect women who were, by and large, less powerful, more vulnerable members of society.
MacKinnon’s definition of harassment at first seemed to acknowledge such differences; she described harassment as “dominance eroticized.” But like all good feminist theorists, she claimed this dominance was socially constructed rather than biological—“the legally relevant content of the term sex, understood as gender difference, should focus upon its social meaning more than upon any biological givens,” she wrote. As such, the reasoning went, men’s socially constructed dominance could be socially deconstructed through reeducation, training, and the like.
Culturally, this is the view that now prevails, which is why we pinball between arguing that women can do anything men can do and worrying that women are all the potential victims of predatory, toxic men. So which is it? Girl Power or the Fainting Couch?
Regardless, when harassment or assault claims arise, the cultural assumptions that feminism has successfully cultivated demand we accept that women are right and men are wrong (hence the insistence that we must believe every woman’s claim about harassment and assault, and the calling out of those who question a woman’s accusation). This gives women—who are, after all, flawed human beings just like men—too much accusatory power in situations where context is often crucial for understanding what transpired. Feminists with a historical memory should recall how they embraced this view after mandatory-arrest laws for partner violence that were passed in the 1990s netted many women for physically assaulting their partners. Many feminist legal scholars at the time argued that such laws were unfair to women precisely because they neglected context. (“By following the letter of the law… law enforcement officers often disregard the context in which victims of violence resort to using violence themselves,” wrote Susan L. Miller in the Violence Against Women journal in 2001.)
Worse, the unquestioned valorization of women’s claims leaves men in the position of being presumed guilty unless proven innocent. Consider a recent tweet by Washington Post reporter and young-adult author Monica Hesse in response to New York Times reporter Farhad Manjoo’s self-indulgent lament. Manjoo: “I am at the point where i seriously, sincerely wonder how all women don’t regard all men as monsters to be constantly feared. the real world turns out to be a legit horror movie that I inhabited and knew nothing about.”
Hesse’s answer: “Surprise! The answer is that we do, and we must, regard all men as potential monsters to be feared. That’s why we cross to the other side of the street at night, and why we sometimes obey when men say ‘Smile, honey!’ We are always aware the alternative could be death.” This isn’t hyperbole in her case; Hesse has so thoroughly internalized the message that men are to be feared, not trusted, that she thinks one might kill her on the street if she doesn’t smile at him. Such illogic makes the Victorian neurasthenics look like the Valkyrie.
But while most reasonable people agree that women and men both need to take responsibility for themselves and exercise good judgment, what this looks like in practice is not going to be perfectly fair, given the differences between men and women when it comes to sexual behavior. In her book, MacKinnon observed of sexual harassment, “Tacitly, it has been both acceptable and taboo; acceptable for men to do, taboo for women to confront, even to themselves.”
That’s one thing we can say for certain is no longer true. Nevertheless, if you begin with the assumption that every sexual invitation is a power play or the prelude to an assault, you are likely to find enemies lurking everywhere. As Hesse wrote in the Washington Post about male behavior: “It’s about the rot that we didn’t want to see, that we shoveled into the garbage disposal of America for years. Some of the rot might have once been a carrot and some of it might have once been a moldy piece of rape-steak, but it’s all fetid and horrific now, and it’s all coming up at once. How do we deal with it? Prison for everyone? Firing for some? …We’re only asking for the entire universe to change. That’s all.”
But women are part of that “entire universe,” too, and it is incumbent on them to make it clear when someone has crossed the line. Both women and men would be better served if they adopted the same rule—“If you see something, say something”—when it comes to harassment. Among the many details that emerged from the recent exposé at Vox about New York Times reporter Glenn Thrush was the setting for the supposedly egregious behavior: It was always after work and after several drinks at a bar. In all of the interactions described, one or usually both of the parties was tipsy or drunk; the women always agreed to go with Thrush to another location. The women also stayed on good terms with Thrush after he made his often-sloppy passes at them, in one case sending friendly text messages and assuring him he didn’t need to apologize for his behavior. The Vox writer, who herself claims to have been victimized by Thrush, argues, “Thrush, just by his stature, put women in a position of feeling they had to suck up and move on from an uncomfortable encounter.” Perhaps. But he didn’t put them in the position of getting drunk after work with him. They put themselves in that position.
Also, as the Thrush story reveals, women sometimes use sexual appeal and banter for their own benefit in the workplace. If we want to clarify the blurred lines that exist around workplace relationships, then we will have to reckon with the women who have successfully exploited them for their own advantage.
None of this means women should be held responsible when men behave badly or illegally. But it puts male behavior in the proper context. Sometimes, things really are just about sex, not power. As New York Times columnist Ross Douthat bluntly noted in a recent debate in New York magazine with feminist Rebecca Traister, “I think women shouldn’t underestimate the extent to which male sexual desire is distinctive and strange and (to women) irrational-seeming. Saying ‘It’s power, not sex’ excludes too much.”
Social-Media Justice or Restorative Justice?
What do we want to happen? Do we want social-media justice or restorative justice for harassers and predators? The first is immediate, cathartic, and brutal, with little consideration for nuance or presumed innocence for the accused. The second is more painstaking because it requires reaching some kind of consensus about the allegations, but it is also ultimately less destructive of the community and culture as a whole.
Social-media justice deploys the powerful force of shame at the mere whiff of transgression, so as to create a regime of prevention. The thing is, Americans don’t really like shame (the sexual revolution taught us that). Our therapeutic age doesn’t think that suppressing emotions and inhibiting feelings—especially about sex—is “healthy.” So either we will have to embrace the instant and unreflective emotiveness of #MeToo culture and accept that its rough justice is better than no justice at all—or we will have to stop overreacting every time a man does something that is untoward—like sending a single, creepy text message—but not actually illegal (like assault or constant harassment).
After all, it’s not all bad news from the land of masculinity. Rates of sexual violence have fallen 63 percent since 1993, according to statistics from the Rape, Abuse, and Incest National Network, and as scholar Steven Pinker recently observed: “Despite recent attention, workplace sexual harassment has declined over time: from 6.1 percent of GSS [General Social Survey] respondents in 2002 to 3.6 percent in 2014. Too high, but there’s been progress, which can continue.”
Still, many men have taken this cultural moment as an opportunity to reflect on their own understanding of masculinity. In the New York Times, essayist Stephen Marche fretted about the “unexamined brutality of the male libido” and echoed Catharine MacKinnon when he asked, “How can healthy sexuality ever occur in conditions in which men and women are not equal?” He would have done better to ask how we can raise boys who will become men who behave honorably toward women. And how do we even raise boys to become honorable men in a culture that no longer recognizes and rewards honor?
The answers to those questions aren’t immediately clear. But one thing that will make answering them even harder is the promotion of the idea of “toxic masculinity.” New York Times columnist Charles Blow recently argued that “we have to re-examine our toxic, privileged, encroaching masculinity itself. And yes, that also means on some level reimagining the rules of attraction.” But the whole point of the phrase “rules of attraction” is to highlight that there aren’t any and never have been (if you have any doubts, read the 1987 Bret Easton Ellis novel that popularized the phrase). Blow’s lectures about “toxic masculinity” are meant to sow self-doubt in men and thus encourage some enlightened form of masculinity, but that won’t end sexual harassment any more than Lysistrata-style refusal by women to have sex will end war.
Parents should be teaching their sons about personal boundaries and consent from a young age, just as they teach their daughters, and unequivocally condemn raunchy and threatening remarks about women, whether they are uttered by a talk-radio host or by the president of the United States. The phrase “that isn’t how decent men behave” should be something every parent utters.
But such efforts are made more difficult by a liberal culture that has decided to equate caddish behavior with assault precisely because it has rejected the strict norms that used to hold sway—the old conservative norms that regarded any transgression against them as a serious violation and punished it accordingly. Instead, in an effort to be a kinder, gentler, more “woke” society that’s understanding of everyone’s differences, we’ve ended up arbitrarily picking and choosing among the various forms of questionable behavior for which we will have no tolerance, all the while failing to come to terms with the costs of living in such a society. A culture that hangs the accused first and asks questions later might have its virtues, but psychological understanding is not one of them.
And so we come back to sex and our muddled understanding of its place in society. Is it a meaningless pleasure you’re supposed to enjoy with as many people as possible before settling down and marrying? Or is it something more important than that? Is it something that you feel empowered to handle in Riot Grrrl fashion, or is getting groped once by a pervy co-worker something that prompts decades of nightmares and declarations that you will “never be the same”? How can we condemn people like Senator Al Franken, whose implicit self-defense is that it’s no big deal to cop a feel every so often, when our culture constantly offers up women like comedian Amy Schumer or Abbi and Ilana of the sketch show Broad City, who argue that women can and should be as filthy and degenerate as the most degenerate guy?
Perhaps it’s progress that the downfall of powerful men who engage in inappropriate sexual behavior is no longer called a “bimbo eruption,” as it was in the days of Bill Clinton, and that the men who harassed or assaulted women are facing the end of their careers and, in some cases, prison. But this is not the great awakening that so many observers have claimed it is. Awakenings need tent preachers to inspire and eager audiences to participate; our #MeToo moment has plenty of those. What it doesn’t have, unless we can agree on new norms for sexual behavior both inside and outside the workplace, is a functional theology that might cultivate believers who will actually practice what they preach.
That functional theology is out of our reach. Which means this moment is just that—a moment. It will die down, impossible though it seems at present. And every 10 or 15 years a new harassment scandal will spark widespread outrage, and we will declare that a new moment of reckoning and realization has emerged. After which the stories will again die down and very little will have changed.
No one wants to admit this. It’s much more satisfying to see the felling of so many powerful men as a tectonic cultural shift, another great leap forward toward equality between the sexes. But it isn’t, because the kind of asexual equality between the genders imagined by those most eager to celebrate our #MeToo moment has never been one most people embrace. It’s one that willfully overlooks significant differences between the sexes and assumes that thoughtful people can still agree on norms of sexual behavior.
They can’t. And they won’t.
The U.S. will endanger itself if it accedes to Russian and Chinese efforts to change the international system to their liking
A “sphere of influence” is traditionally understood as a geographical zone within which the most powerful actor can impose its will. And nearly three decades after the close of the superpower struggle that Churchill’s speech heralded, spheres of influence are back. At both ends of the Eurasian landmass, the authoritarian regimes in China and Russia are carving out areas of privileged influence—geographic buffer zones in which they exercise diplomatic, economic, and military primacy. China and Russia are seeking to coerce and overawe their neighbors. They are endeavoring to weaken the international rules and norms—and the influence of opposing powers—that stand athwart their ambitions in their respective “near abroads.” Chinese island-building and maritime expansionism in the South China Sea and Russian aggression in Ukraine and intimidation of the Baltic states are part and parcel of the quasi-imperial projects these revisionist regional powers are now pursuing.
Historically speaking, a world made up of rival spheres is more the norm than the exception. Yet such a world is in sharp tension with many of the key tenets of the American foreign-policy tradition—and with the international order that the United States has labored to construct and maintain since the end of World War II.
To be sure, Washington carved out its own spheres of influence in the Western Hemisphere beginning in the 19th century, and America’s myriad alliance blocs in key overseas regions are effectively spheres by another name. And today, some international-relations observers have welcomed the return of what the foreign-policy analyst Michael Lind has recently called “blocpolitik,” hoping that it might lead to a more peaceful age of multilateral equilibrium.
But for more than two centuries, American leaders have generally opposed the idea of a world divided into rival spheres of influence and have worked hard to deny other powers their own. And a reversion to a world dominated by great powers and their spheres of influence would thus undo some of the strongest traditions in American foreign policy and take the international system back to a darker, more dangerous era.

In an extreme form, a sphere of influence can take the shape of direct imperial or colonial control. Yet there are also versions in which a leading power forgoes direct military or administrative domination of its neighbors but nonetheless exerts geopolitical, economic, and ideological influence. Whatever their form, spheres of influence reflect two dominant imperatives of great-power politics in an anarchic world: the need for security vis-à-vis rival powers and the desire to shape a nation’s immediate environment to its benefit. Indeed, great powers have throughout history pursued spheres of influence to provide a buffer against the encroachment of other hostile actors and to foster the conditions conducive to their own security and well-being.
The Persian Empire, Athens and Sparta, and Rome all carved out domains of dominance. The Chinese tribute system—which combined geopolitical control with the spread of Chinese norms and ideas—profoundly shaped the trajectory of East Asia for hundreds of years. The 19th and 20th centuries saw the British Empire, Japan’s East Asian Co-Prosperity Sphere, and the Soviet bloc.
America, too, has played the spheres-of-influence game. From the early-19th century onward, American officials strove for preeminence in the Western Hemisphere—first by running other European powers off much of the North American continent and then by pushing them out of Latin America. With the Monroe Doctrine, first enunciated in 1823, America staked its claim to geopolitical primacy from Canada to the Southern Cone. Over the succeeding generations, Washington worked to achieve military dominance in that area, to tie the countries of the Western Hemisphere to America geopolitically and economically, and even to help pick the rulers of countries from Mexico to Brazil.
If this wasn’t a sphere of influence, nothing was. In 1895, Secretary of State Richard Olney declared that “the United States is practically sovereign on this continent and its fiat is law upon the subjects to which it confines its interposition.” After World War II, moreover, a globally predominant United States steadily expanded its influence into Europe through NATO, into East Asia through various military alliances, and into the Middle East through a web of defense, diplomatic, and political arrangements. The story of global politics over the past 200 years has, in large part, been the story of expanding U.S. influence.
Nonetheless, there has always been something ambivalent—critics would say hypocritical—about American views of this matter. For as energetic as Washington has been in constructing its geopolitical domain, a “spheres-of-influence world” is in perpetual tension with four strong intellectual traditions in U.S. strategy. These are hegemony, liberty, openness, and exceptionalism.
First, hegemony. The myth of America as an innocent isolationist country during its first 170 years is powerful and enduring; it’s also wrong. From the outset, American statesmen understood that the country’s favorable geography, expanding population, and enviable resource endowments gave it the potential to rival, and ultimately overtake, the European states that dominated world politics. America might be a fledgling republic, George Washington said, but it would one day attain “the strength of a giant.” From the revolution onward, American officials worried, with good reason, that France, Spain, and the United Kingdom would use their North American territories to strangle or contain the young republic. Much of early American diplomacy was therefore geared toward depriving the European powers of their North American possessions, using measures from coercive diplomacy to outright wars of conquest. “The world shall have to be familiarized with the idea of considering our proper dominion to be the continent of North America,” wrote John Quincy Adams in 1819. The only regional sphere of influence that Americans would accept as legitimate was their own.
By the late-19th century, the same considerations were pushing Americans to target spheres of influence further abroad. As the industrial revolution progressed, it became clear that geography alone might not protect the nation. Aggressive powers could now generate sufficient military strength to dominate large swaths of Europe or East Asia and then harness the accumulated resources to threaten the United States. Moreover, as America itself became an increasingly mighty country that sought to project its influence overseas, its leaders naturally objected to its rivals’ efforts to establish their own preserves from which Washington would be excluded. If much of America’s 19th-century diplomacy was dedicated to denying other powers spheres of influence in the Western Hemisphere, much of the country’s 20th-century diplomacy was an effort to break up or deny rival spheres of influence in Europe and East Asia.
From the Open Door policy, which sought to prevent imperial powers from carving up China, to U.S. intervention in the world wars, to the confrontation with the Soviet Empire in the Cold War, the United States repeatedly acted on the belief that it could be neither as secure nor influential as it desired in a world divided up and dominated by rival nations. The American geopolitical tradition, in other words, has long contained a built-in hostility to other countries’ spheres of influence.
The American ideological tradition shares this sense of preeminence, as reflected in the second key tenet: liberty. America’s founding generation did not see the revolution merely as the birth of a future superpower; they saw it as a catalyst for spreading political liberty far and wide. Thomas Paine proclaimed in 1775 that Americans could “begin the world anew”; John Quincy Adams predicted, several decades later, that America’s liberal ideology was “destined to cover the surface of the globe.” Here, too, the new nation was not cursed with excessive modesty—and here, too, the existence of rival spheres of influence threatened this ambition.
Rival spheres of influence—particularly within the Western Hemisphere—imperiled the survival of liberty at home. If the United States were merely one great power among many on the North American continent, the founding generation worried, it would be forced to maintain a large standing military establishment and erect a sort of 18th-century “garrison state.” Living in perpetual conflict and vigilance, in turn, would corrode the very freedoms for which the revolution had been fought. “No nation,” wrote James Madison, “can preserve its freedom in the midst of continual warfare.” Just as Madison argued, in Federalist No. 10, that “extending the sphere”—expanding the republic—was a way of safeguarding republicanism at home, expanding America’s geopolitical domain was essential to providing the external security that a liberal polity required to survive.
Rival spheres of influence also constrained the prospects for liberty abroad. Although the question of whether the United States should actively support democratic revolutions overseas has been a source of unending controversy, virtually all American strategists have agreed that the country would be more secure and influential in a world where democracy was widespread. Given this mindset, Americans could hardly be desirous of foreign powers—particularly authoritarian powers—establishing formidable spheres of influence that would allow them to dominate the international system or suppress liberal ideals. The Monroe Doctrine was a response to the geopolitical dangers inherent in renewed imperial control of South America; it was also a response to the ideological danger posed by European nations that would “extend the political system to any portion” of the Western Hemisphere. Similar concerns have been at the heart of American opposition to the British Empire and the Soviet bloc.
Economic openness, the third core dynamic of American policy, has long served as a commercial counterpart to America’s ideological proselytism. Influenced as much by Adam Smith as by Alexander Hamilton, early American statecraft promoted free trade, neutral rights, and open markets, both to safeguard liberty and enrich a growing nation. This mission has depended on access to the world’s seas and markets. When that access was circumscribed—by the British in 1812 and by the Germans in 1917—Americans went to war to preserve it. It is unsurprising, then, that Americans also looked askance at efforts by other powers to establish areas that might be walled off from U.S. trade and investment—and from the spread of America’s capitalist ideology.
A brief list of robust policy endeavors underscores the persistent U.S. hostility to an economically closed, spheres-of-influence world: the Model Treaty of 1776, designed to promote free and reciprocal trade; John Hay’s Open Door policy of 1899, designed to prevent any outside power from dominating trade with China; Woodrow Wilson’s advocacy in his “Fourteen Points” speech of 1918 for the removal “of all economic barriers and the establishment of an equality of trade conditions among all nations”; and the focus of the 1941 Atlantic Charter on reducing trade restrictions while promoting international economic cooperation (assuming the allies would emerge triumphant from World War II).
Fourth and finally, there’s exceptionalism. Americans have long believed that their nation was created not simply to replicate the practices of the Old World, but to revolutionize how states and peoples interact with one another. The United States, in this view, was not merely another great power out for its own self-interest. It was a country that, by virtue of its republican ideals, stood for the advancement of universal rights, and one that rejected the back-alley methods of monarchical diplomacy in favor of a more principled statecraft. When Abraham Lincoln said America represented “the last best hope of earth,” or when Woodrow Wilson scorned secret agreements in favor of “open covenants arrived at openly,” they demonstrated this exceptionalist strain in American thinking. There is some hypocrisy here, of course, for the United States has often acted in precisely the self-interested, cutthroat manner its statesmen deplored. Nonetheless, American exceptionalism has had a pronounced effect on American conduct.
Compare how Washington led its Western European allies during the Cold War—the extent to which NATO rested on the authentic consent of its members, the way the United States consistently sought to empower rather than dominate its partners—with how Moscow managed its empire in Eastern Europe. In the same way, Americans have often recoiled from arrangements that reeked of the old diplomacy. Franklin Roosevelt might have tolerated a Soviet-dominated Eastern Europe after World War II, for instance, but he knew he could not admit this publicly. Likewise, the Helsinki Accords of 1975, which required Washington to acknowledge the diplomatic legitimacy of the Soviet sphere, proved controversial inside the United States because they seemed to represent just the sort of cynical, old-school geopolitics that American exceptionalism abhors.
To be clear, U.S. hostility to a spheres-of-influence world has always been leavened with a dose of pragmatism; American leaders have pursued that hostility only so far as power and prudence allowed. The Monroe Doctrine warned European powers to stay out of the Americas, but the quid pro quo was that a young and relatively weak United States would accept, for a time, a sphere of monarchical dominance within Europe. Even during the Cold War, U.S. policymakers generally accepted that Washington could not break up the Soviet bloc in Eastern Europe without risking nuclear war.
But these were concessions to expediency. As America gained greater global power, it more actively resisted the acquisition or preservation of spheres by others. From gradually pushing the Old World out of the New, to helping vanquish the German and Japanese Empires by force of arms, to assisting the liquidation of the British Empire after World War II, to containing and ultimately defeating the Soviet bloc, the United States was present at the destruction of spheres of influence possessed by adversaries and allies alike.
The acme of this project came in the quarter-century that followed the Cold War. With the collapse of the Warsaw Pact and the Soviet Union itself, it was possible to envision a world in which what Thomas Jefferson called America’s “empire of liberty” could attain global dimensions, and traditional spheres of influence would be consigned to history. The goal, as George W. Bush’s 2002 National Security Strategy proclaimed, was to “create a balance of power that favors human freedom.” This meant an international environment in which the United States and its values were dominant and there was no balance of power whatsoever.
Under presidents from George H.W. Bush to Barack Obama, this project entailed working to spread democracy and economic liberalism farther than ever before. It involved pushing American influence and U.S.-led institutions into regions—such as Eastern Europe—that were previously dominated by other powers. It meant maintaining the military primacy necessary to stop regional powers from establishing new spheres of influence, as Washington did by rolling back Saddam Hussein’s conquest of Kuwait in 1990 and by deterring China from coercing Taiwan in 1995–96. Not least, this American project involved seeking to integrate potential rivals—foremost Russia and China—into the post–Cold War order, in hopes of depriving them of even the desire to challenge it. This multifaceted effort reflected the optimism of the post–Cold War era, as well as the influence of tendencies with deep roots in the American past. Yet try as Washington might to permanently leave behind a spheres-of-influence world, that prospect is once again upon us.

Begin with China’s actions in the Asia-Pacific region. The sources of Chinese conduct are diverse, ranging from domestic insecurity to the country’s confidence as a rising power to its sense of historical destiny as “the Middle Kingdom.” All these influences animate China’s bid to establish regional mastery. China is working, first, to create a power vacuum by driving the United States out of the Western Pacific, and second, to fill that vacuum with its own influence. A Chinese admiral made this ambition clear when he remarked—supposedly in jest—to an American counterpart that, in the future, the two powers should simply split the Pacific with Hawaii as the dividing line. Yang Jiechi, then China’s foreign minister, echoed this sentiment in a moment of frustration by lecturing the nations of Southeast Asia. “China is a big country,” he said, “and other countries are small countries, and that’s just a fact.”
Policy has followed rhetoric. To undercut America’s position, Beijing has harassed American ships and planes operating in international waters and airspace. The Chinese have warned U.S. allies they may be caught in the crossfire of a Sino-American war unless Washington accommodates China or the allies cut loose from the United States. China has simultaneously worked to undermine the credibility of U.S. alliance guarantees by using strategies designed to shift the regional status quo in ways even the mighty U.S. Navy finds difficult to counter. Through a mixture of economic aid and diplomatic coercion, Beijing has also successfully divided international bodies, such as the Association of Southeast Asian Nations, through which the United States has sought to rally opposition to Chinese assertiveness. And in the background, China has been steadily building, over the course of more than two decades, formidable military tools designed to keep the United States out of the region and give Beijing a free hand in dealing with its weaker neighbors. As America’s sun sets in the Asia-Pacific, Chinese leaders calculate, the shadow China casts over the region will only grow longer.
To that end, China has claimed, dubiously, nearly all of the South China Sea as its own and constructed artificial islands as staging points for the projection of military power. Military and paramilitary forces have teased, confronted, and violated the sovereignty of countries from Vietnam to the Philippines; China is likewise intensifying the pressure on Japan in the East China Sea. Economically, Beijing uses its muscle to reward those who comply with China’s policies and punish those not willing to bow to its demands. It is simultaneously advancing geoeconomic projects, such as the Belt and Road Initiative, the Asian Infrastructure Investment Bank, and the Regional Comprehensive Economic Partnership (RCEP), that are designed to bring the region into its orbit.
Strikingly, China has also moved away from its long-professed principle of noninterference in other countries’ domestic politics by extending the reach of Chinese propaganda organs and using investment and even bribery to co-opt regional elites. Payoffs to Australian politicians are as critical to China’s regional project as development of “carrier-killer” missiles. Finally, far from subscribing to liberal concepts of democracy and human rights, Beijing emphasizes its rejection of these values and its desire to create “Asia for Asians.” In sum, China is pursuing a classic spheres-of-influence project. By blending intimidation with inducement, Beijing aims to sunder its neighbors’ bonds with America and force them to accept a Sino-centric order—a new Chinese tribute system for the 21st century.

At the other end of Eurasia, Russia is playing geopolitical hardball of a different sort. The idea that Moscow should dominate its “near abroad” is as natural to many Russians as American regional primacy is to Americans. The loss of the Kremlin’s traditional buffer zone was, therefore, one of the most painful legacies of the Cold War’s end. And so it is hardly surprising that, as Russia has regained a degree of strength in recent years, it has sought to reassert its supremacy.
It has done so, in fact, through more overtly aggressive means than those employed by China. Moscow has twice seized opportunities to humiliate and dismember former Soviet republics that committed the sin of tilting toward the West or throwing out pro-Russian leaders, first in Georgia in 2008 and then in Ukraine in 2014. It has regularly reminded its neighbors that they live on Russia’s doorstep, through coercive activities such as conducting cyberattacks on Estonia in 2007 and holding aggressive military exercises on the frontiers of the Baltic states. In the same vein, the Kremlin has essentially claimed a veto over the geopolitical alignments of neighbors from the Caucasus to Scandinavia, whether by creating frozen conflicts on their territory or threatening to target them militarily—perhaps with nuclear weapons—should they join NATO.
Military muscle is not Moscow’s only tool. Russia has simultaneously used energy exports to keep the states on its periphery economically dependent, and it has exported corruption and illiberalism to non-aligned states in the former Warsaw Pact area to prevent further encroachment of liberal values. Not least, the Kremlin has worked to undermine NATO and the European Union through political subversion and intervention in Western electoral processes. And while Russia’s activities are most concentrated in Eastern Europe and Central Asia, it’s also projecting its influence farther afield. Russian forces intervened successfully in Syria in 2015 to prop up Bashar al-Assad, preserve access to warm-water ports on the Mediterranean, and demonstrate the improved accuracy and lethality of Russian arms. Moscow continues to make inroads in the Middle East, often in cooperation with another American adversary: Iran.
To be sure, the projects that China and Russia are pursuing today are vastly different from each other, but the core logic is indisputably the same. Authoritarian powers are re-staking their claim to privileged influence in key geostrategic areas.

So what does this mean for American interests? Some observers have argued that the United States should make a virtue of necessity and accept the return of such arrangements. By this logic, spheres of influence create buffer zones between contending great powers; they diffuse responsibility for enforcing order in key areas. Indeed, for those who think that U.S. policy has left the country exhausted and overextended, a return to a world in which America no longer has the burden of being the dominant power in every region may seem attractive. The great sin of American policy after the Cold War, many realist scholars argue, was the failure to recognize that even a weakened Russia would demand privileged influence along its frontiers and thus be unalterably opposed to NATO expansion. Similarly, they lament the failure to understand that China would not forever tolerate U.S. dominance along its own periphery. It is not surprising, then, to hear analysts such as Australia’s Hugh White or America’s John Mearsheimer argue that the United States should learn to “share power” with China in the Pacific, or that it must yield ground in Eastern Europe in order to avoid war with Russia.
Such claims are not meritless; there are instances in which spheres of influence led to a degree of stability. The division of Europe into rival blocs fostered an ugly sort of stasis during the Cold War; closer to home, America’s dominance in the Western Hemisphere has long muted geopolitical competition in our own neighborhood. For all the problems associated with European empires, they often partially succeeded in limiting scourges such as communal violence.
And yet the allure of a spheres-of-influence world is largely an illusion, for such a world would threaten U.S. interests, traditions, and values in several ways.
First, basic human rights and democratic values would be less respected. China and Russia are not liberal democracies; they are illiberal autocracies that see the spread of democratic values as profoundly corrosive to their own authority and security. Just as the United States has long sought to create a world congenial to its own ideological predilections, Beijing and Moscow would certainly do likewise within their spheres of dominance.
They would, presumably, bring their influence to bear in support of friendly authoritarian regimes. And they would surely undermine democratic governments seen to pose a threat of ideological contagion or insubordination to Russian or Chinese prerogatives. Russia has taken steps to prevent the emergence of a Western-facing democracy in Ukraine and to undermine liberal democracies in Europe and elsewhere; China is snuffing out political freedoms in Hong Kong. Such actions offer a preview of what we will see when these countries are indisputably dominant along their peripheries. Further aggressions, in turn, would not simply be offensive to America’s ideological sensibilities. For given that the spread of democracy has been central to the absence of major interstate war in recent decades, and that the spread of American values has made the U.S. more secure and influential, a less democratic world will also be a more dangerous world.
Second, a spheres-of-influence world would be less open to American commerce and investment. After all, the United States itself saw geoeconomic dominance in Latin America as the necessary counterpart to geopolitical dominance. Why would China take a less self-interested approach? China already reaps the advantages of an open global economy even as it embraces protectionism and mercantilism. In a Chinese-dominated East Asia, all economic roads will surely lead to Beijing, as Chinese officials will be able to use their leverage to ensure that trade and investment flows are oriented toward China and geopolitical competitors like the United States are left on the outside. Beijing’s current geoeconomic projects—namely, RCEP and the Belt and Road Initiative—offer insight into a regional economic future in which flows of commerce and investment are subject to heavy Chinese influence.
Third, as spheres of influence reemerge, the United States will be less able to shape critical geopolitical events in crucial regions. The reason Washington has long taken an interest in events in faraway places is that East Asia, Europe, and the Middle East are the areas from which major security challenges have emerged in the past. Since World War II, America’s forward military presence has been intended to suppress incipient threats and instability; that presence has gone hand in glove with energetic diplomacy that amplifies America’s voice and protects U.S. interests. In a spheres-of-influence world, Washington would no longer enjoy the ability to act with decisive effect in these regions; it would find itself reacting to global events rather than molding them.
This leads to a final, and crucial, issue. America would be more likely to find its core security interests challenged because world orders based on rival spheres of influence have rarely been as peaceful and settled as one might imagine.
To see this, just work backward from the present. During the Cold War, a bipolar balance did help avert actual war between Moscow and Washington. But even in Europe—where the spheres of influence were best defined—there were continual tensions and crises as Moscow tested the Western bloc. And outside Europe, violence and proxy wars were common as the superpowers competed to extend their reach into the Third World. In the 1930s, the emergence of German and Japanese spheres of influence led to the most catastrophic war in global history. The empires of the 19th century—spheres of influence in their own right—continually jostled one another, leading to wars and near-wars over the course of decades; the Peace of Amiens between England and Napoleonic France lasted a mere 14 months. And looking back to the ancient world, there were not one, but three Punic Wars fought between Rome and Carthage as two expanding empires came into conflict. A world defined by spheres of influence is often a world characterized by tensions, wars, and competition.
The reasons for this are simple. As the political scientist William Wohlforth observed, unipolar systems—such as the U.S.-dominated post–Cold War order—are anchored by a hegemonic power that can act decisively to maintain the peace. In a unipolar system, Wohlforth writes, there are few incentives for revisionist powers to incur the “focused enmity” of the leading state. Truly multipolar systems, by contrast, have often been volatile. When the major powers are more evenly matched, there is a greater temptation to aggression by those who seek to change the existing order of things. And seek to change things they undoubtedly will.
The idea that spheres of influence are stabilizing holds only if one assumes that the major powers are motivated only by insecurity and that concessions to the revisionists will therefore lead to peace. Churchill described this as the idea that if one “feeds the crocodile enough, the crocodile will eat him last.”
Unfortunately, today’s rising or resurgent powers are also motivated—as is America—by honor, ambition, and the timeless desire to make their international habitats reflect their own interests and ideals. It is a risky gamble indeed, then, to think that ceding Russia or China an uncontested sphere of influence would turn a revisionist authoritarian regime into a satisfied power. The result, as Robert Kagan has noted, might be to embolden those actors all the more, by giving them freer rein to bring their near-abroads under control, greater latitude and resources to pursue their ambitions, and enhanced confidence that the U.S.-led order is fracturing at its foundations. For China, dominance over the first island chain might simply intensify desires to achieve primacy in the second island chain and beyond; for Russia, renewed mastery in the former Soviet space could lead to desires to bring parts of the former Warsaw Pact to heel, as well. To observe how China is developing ever longer-range anti-access/area denial capabilities, or how Russia has been projecting military power ever farther afield, is to see this process in action.

The reemergence of a spheres-of-influence world would thus undercut one of the great historical achievements of U.S. foreign policy: the creation of a system in which America is the dominant power in each major geopolitical region and can act decisively to shape events and protect its interests. It would foster an environment in which democratic values are less prominent, authoritarian models are ascendant, and mercantilism advances as economic openness recedes. And rather than leading to multipolar stability, this change could simply encourage greater revisionism on the part of powers whose appetite grows with the eating. This would lead the world away from the relative stability of the post–Cold War era and back into the darker environment it seemed to have relegated to history a quarter-century ago.
The phrase “spheres of influence” may sound vaguely theoretical and benign, but its real-world effects are likely to be tangible and pernicious.
Fortunately, the return of a spheres-of-influence world is not yet inevitable. Even as some nations will accept incorporation into a Chinese or Russian sphere of influence as the price of avoiding conflict, or maintaining access to critical markets and resources, others will resist because they see their own well-being as dependent on the preservation of the world order that Washington has long worked to create. The Philippines and Cambodia seem increasingly to fall into the former group; Poland and Japan, among many others, make up the latter. The willingness of even this latter group to take actions that risk incurring Beijing and Moscow’s wrath, however, will be constantly calibrated against an assessment of America’s own ability to continue leading the resistance to a spheres-of-influence world. Averting that outcome is becoming steadily harder, as the relative power and ambition of America’s authoritarian rivals rise and U.S. leadership seems to falter.
Harder, but not impossible. The United States and its allies still command a significant preponderance of global wealth and power. And the political, economic, and military weaknesses of its challengers are legion. It is far from fated, then, that the Western Pacific and Eastern Europe will slip into China’s and Russia’s respective orbits. With sufficient creativity and determination, Washington and its partners might still be able to resist the return of a dangerous global system. Doing so will require difficult policy work in the military, economic, and diplomatic realms. But ideas precede policy, and so simply rediscovering the venerable tradition of American hostility to spheres of influence—and no less, the powerful logic on which that tradition is based—would be a good start.
What does the man with the baton actually do?
Why, then, are virtually all modern professional orchestras led by well-paid conductors instead of performing on their own? It’s an interesting question. After all, while many celebrity conductors are highly trained and knowledgeable, there have been others, some of them legendary, whose musical abilities were and are far more limited. It was no secret in the world of classical music that Serge Koussevitzky, the music director of the Boston Symphony from 1924 to 1949, found it difficult to read full orchestral scores and sometimes learned how to lead them in public by first practicing with a pair of rehearsal pianists whom he “conducted” in private.
Yet recordings show that Koussevitzky’s interpretations of such complicated pieces of music as Aaron Copland’s El Salón México and Maurice Ravel’s orchestral transcription of Mussorgsky’s Pictures at an Exhibition (both of which he premiered and championed) were immensely characterful and distinctive. What made them so? Was it the virtuosic playing of the Boston Symphony alone? Or did Koussevitzky also bring something special to these performances—and if so, what was it?
Part of what makes this question so tricky to answer is that scarcely any well-known conductors have spoken or written in detail about what they do. Only two conductors of the first rank, Thomas Beecham and Bruno Walter, have left behind full-length autobiographies, and neither one features a discussion of its author’s technical methods. For this reason, the publication of John Mauceri’s Maestros and Their Music: The Art and Alchemy of Conducting will be of special interest to those who, like my friend, wonder exactly what it is that conductors contribute to the performances that they lead.1
An impeccable musical journeyman best known for his lively performances of film music with the Hollywood Bowl Orchestra, Mauceri has led most of the world’s top orchestras. He writes illuminatingly about his work in Maestros and Their Music, leavening his discussions of such matters as the foibles of opera directors and music critics with sharply pointed, sometimes gossipy anecdotes. Most interesting of all, though, are the chapters in which he talks about what conductors do on the podium. To read Maestros and Their Music is to come away with a much clearer understanding of what its author calls the “strange and lawless world” of conducting—and to understand how conductors whose technique is deficient to the point of seeming incompetence can still give exciting performances.

Prior to the 19th century, conductors of the modern kind did not exist. Orchestras were smaller then—most of the ensembles that performed Mozart’s symphonies and operas contained anywhere from two to three dozen players—and their concerts were “conducted” either by the leader of the first violins or by the orchestra’s keyboard player.
As orchestras grew larger in response to the increasing complexity of 19th-century music, however, it became necessary for a full-time conductor both to rehearse them and to control their public performances, normally by standing on a podium placed in front of the musicians and beating time in the air with a baton. Most of the first men to do so were composers, including Hector Berlioz, Felix Mendelssohn, and Richard Wagner. By the end of the century, however, it was becoming increasingly common for musicians to specialize in conducting, and some of them, notably Arthur Nikisch and Arturo Toscanini, came to be regarded as virtuosos in their own right. Since then, only three important composers—Benjamin Britten, Leonard Bernstein, and Pierre Boulez—have also pursued parallel careers as world-class conductors. Every other major conductor of the 20th century was a specialist.
What did these men do in front of an orchestra? Mauceri’s description of the basic physical process of conducting is admirably straightforward:
The right hand beats time; that is, it sets the tempo or pulse of the music. It can hold a baton. The left hand turns pages [in the orchestral score], cues instrumentalists with an invitational or pointing gesture, and generally indicates the quality of the notes (percussive, smoothly linked, sustained, etc.).
Beyond these elements, though, all bets are off. Most of the major conductors of the 20th century were filmed in performance, and what one sees in these films is so widely varied that it is impossible to generalize about what constitutes a good conducting technique.2 Most of them used batons, but several, including Boulez and Leopold Stokowski, conducted with their bare hands. Bernstein and Beecham gestured extravagantly, even wildly, while others, most famously Fritz Reiner, restricted themselves to tightly controlled hand movements. Toscanini beat time in a flowing, beautifully expressive way that made his musical intentions self-evident, but Wilhelm Furtwängler and Herbert von Karajan often conducted so unclearly that it is hard to see how the orchestras they led were able to follow them. (One exasperated member of the London Philharmonic claimed, partly in jest, that Furtwängler’s baton signaled the start of a piece “only after the thirteenth preliminary wiggle.”) Conductors of the Furtwängler sort tend to be at their best in front of orchestras with which they have worked for many years and whose members have learned from experience to “speak” their gestural language fluently.
Nevertheless, all of these men were pursuing the same musical goals. Beyond stopping and starting a given piece, it is the job of a conductor to decide how it will be interpreted. How loud should the middle section of the first movement be—and ought the violins to be playing a bit softer so as not to drown out the flutes? Someone must answer questions such as these if a performance is not to sound indecisive or chaotic, and it is far easier for one person to do so than for 100 people to vote on each decision.
Above all, a conductor controls the tempo of a performance, varying it from moment to moment as he sees fit. It is impossible for a full-sized symphony orchestra to play a piece with any degree of rhythmic flexibility unless a conductor is controlling the performance from the podium. Bernstein put it well when he observed in a 1955 TV special that “the conductor is a kind of sculptor whose element is time instead of marble.” These “sculptural” decisions are necessarily subjective, since traditional musical notation cannot specify them with exactitude. As Mauceri reminds us, Toscanini and Beecham both recorded La Bohème, having previously discussed their interpretations with Giacomo Puccini, the opera’s composer, and Toscanini conducted its 1896 premiere. Yet Beecham’s performance is 14 minutes longer than Toscanini’s. Who is “right”? It is purely a matter of individual taste, since both interpretations are powerfully persuasive.
Beyond the not-so-basic task of setting, maintaining, and varying tempos, it is the job of a conductor to inspire an orchestra—to make its members play with a charged precision that transcends mere unanimity. The first step in doing so is to persuade the players of his musical competence. If he cannot run a rehearsal efficiently, they will soon grow bored and lose interest; if he does not know the score in detail, they will not take him seriously. This requires extensive preparation on the part of the conductor, and an orchestra can tell within seconds of the downbeat whether he is adequately prepared—a fact that every conductor knows. “I’m extremely humble about whatever gifts I may have, but I am not modest about the work I do,” Bernstein once told an interviewer. “I work extremely hard and all the time.”
All things being equal, it is better for a conductor to have a clear technique, if only because it simplifies and streamlines the process of rehearsing an orchestra. Fritz Reiner, who taught Bernstein among others, did not exaggerate when he claimed that he and his pupils could “stand up [in front of] an orchestra they have never seen before and conduct correctly a new piece at first sight without verbal explanation and by means only of manual technique.”
While orchestra players prefer this kind of conducting, a conductor need not have a technique as fully developed as that of a Reiner or Bernstein if he knows how to rehearse effectively. Given sufficient rehearsal time, decisive and unambiguous verbal instructions will produce the same results as a virtuoso stick technique. This was how Willem Mengelberg and George Szell distinguished themselves on the podium. Their techniques were no better than adequate, but they rehearsed so meticulously that their performances were always brilliant and exact.
It also helps to supply the members of the orchestra with carefully marked orchestra parts. Beecham’s manual technique was notoriously messy, but he marked his musical intentions into each player’s part so clearly and precisely that simply reading the music on the stand would produce most of the effects that he desired.
What players do not like is to be lectured. They want to be told what to do and, if absolutely necessary, how to do it, at which point the wise conductor will stop talking and start conducting. Mauceri recalls the advice given to a group of student conductors by Joseph Silverstein, the concertmaster of the Boston Symphony: “Don’t talk to us about blue skies. Just tell us ‘longer-shorter,’ ‘faster-slower,’ ‘higher-lower.’” Professional musicians cannot abide flowery speeches about the inner meaning of a piece of music, though they will readily respond to a well-turned metaphor. Mauceri makes this point with a Toscanini anecdote:
One of Toscanini’s musicians told me of a moment in a rehearsal when the sound the NBC Symphony was giving him was too heavy. … In this case, without saying a word, he reached into his pocket and took out his silk handkerchief, tossed it into the air, and everyone watched it slowly glide to earth. After seeing that, the orchestra played the same passage exactly as Toscanini wanted.
Conducting, like all acts of leadership, is in large part a function of character. The violinist Carl Flesch went so far as to call it “the only musical activity in which a dash of charlatanism is not only harmless, but positively necessary.” While that is putting it too cynically, Flesch was on to something. I did a fair amount of conducting in college, but even though I practiced endlessly in front of a mirror and spent hours poring over my scores, I lacked the personal magnetism without which no conductor can hope to be more than merely competent at best.
On the other hand, a talented musician with a sufficiently compelling personality can turn himself into a conductor more or less overnight. Toscanini had never conducted an orchestra before making his unrehearsed debut in a performance of Verdi’s Aida at the age of 19, yet the players hastened to do his musical bidding. I once saw the modern-dance choreographer Mark Morris, whose knowledge of classical music is profound, lead a chorus and orchestra in the score to Gloria, a dance he had made in 1981 to a piece by Vivaldi. It was no stunt: Morris used a baton and a score and controlled the performance with the assurance of a seasoned pro. Not only did he have a strong personality, but he had also done his musical homework, and he knew that one was as important as the other.
The reverse, however, is no less true: The success of conductors like Serge Koussevitzky is at least as much a function of their personalities as of their preparation. To be sure, Koussevitzky had been an instrumental virtuoso (he played the double bass) before taking up conducting, but everyone who worked with him in later years was aware of his musical limitations. Yet he was still capable of imposing his larger-than-life personality on players who might well have responded indifferently to his conducting had he been less charismatic. Leopold Stokowski functioned in much the same way. He was widely thought by his peers to have been far more a showman than an artist, to the point that Toscanini contemptuously dismissed him as a “clown.” But he had, like Koussevitzky, a richly romantic musical imagination coupled with the showmanship of a stage actor, and so the orchestras that he led, however skeptical they might be about his musical seriousness, did whatever he wanted.
All great conductors share this same ability to impose their will on an orchestra—and that, after all, is the heart of the matter. A conductor can be effective only if the orchestra does what he wants. An orchestra is not like a piano, whose notes sound automatically when the keys are pressed; it is a living organism with a will of its own. Conducting, then, is first and foremost an act of persuasion, as Mauceri acknowledges:
The person who stands before a symphony orchestra is charged with something both impossible and improbable. The impossible part is herding a hundred musicians to agree on something, and the improbable part is that one does it by waving one’s hands in the air.
This is why so many famous conductors have claimed that the art of conducting cannot be taught. In the deepest sense, they are right. To be sure, it is perfectly possible, as Reiner did, to teach the rudiments of clear stick technique and effective rehearsal practice. But the mystery at the heart of conducting is, indeed, unteachable: One cannot tell a budding young conductor how to cultivate a magnetic personality, any more than an actor can be taught how to have star quality. What sets the Bernsteins and Bogarts of the world apart from the rest of us is very much like what James M. Barrie said of feminine charm in What Every Woman Knows: “If you have it, you don’t need to have anything else; and if you don’t have it, it doesn’t much matter what else you have.”
2 Excerpts from many of these films were woven together into a two-part BBC documentary, The Art of Conducting, which is available on home video and can also be viewed in its entirety on YouTube.
Not that he tries. What was remarkable about the condescension in this instance was that Franken directed it at women who accused him of behaving “inappropriately” toward them. (In an era of strictly enforced relativism, we struggle to find our footing in judging misbehavior, so we borrow words from the prissy language of etiquette. The mildest and most common rebuke is unfortunate, followed by the slightly more serious inappropriate, followed by the ultimate reproach: unacceptable, which, depending on the context, can include both attempted rape and blowing your nose into your dinner napkin.) Franken’s inappropriateness entailed, so to speak, squeezing the bottoms of complete strangers, and cupping the occasional breast.
Franken himself did not use the word “inappropriate.” By his account, he had done nothing to earn the title. His earlier vague denials of the allegations, he told his fellow senators, “gave some people the false impression that I was admitting to doing things that, in fact, I haven’t done.” How could he have confused people about such an important matter? Doggone it, it’s that damn sensitivity of his. The nation was beginning a conversation about sexual harassment—squeezing strangers’ bottoms, stuff like that—and “I wanted to be respectful of that broader conversation because all women deserve to be heard and their experiences taken seriously.”
Well, not all women. The women with those bottoms and breasts he supposedly manhandled, for example—their experiences don’t deserve to be taken seriously. We’ve got Al’s word on it. “Some of the allegations against me are not true,” he said. “Others, I remember very differently.” His accusers, in other words, fall into one of two camps: the liars and the befuddled. You know how women can be sometimes. It might be a hormonal thing.
But enough about them, Al seemed to be saying: Let’s get back to Al. “I know the work I’ve been able to do has improved people’s lives,” Franken said, but he didn’t want to get into any specifics. “I have used my power to be a champion of women.” He has faith in his “proud legacy of progressive advocacy.” He’s been passionate and worked hard—not for himself, mind you, but for his home state of Minnesota, by which he’s “blown away.” And yes, he would get tired or discouraged or frustrated once in a while. But then that big heart of his would well up: “I would think about the people I was doing this for, and it would get me back on my feet.” Franken recently published a book about himself: Giant of the Senate. I had assumed the title was ironic. Now I’m not sure.
Yet even in his flights of self-love, the problem that has ever attended Senator Franken was still there. You can’t take him seriously. He looks as though God made him to be a figure of fun. Try as he might, his aspect is that of a man who is going to try to make you laugh, and who is built for that purpose and no other—a close cousin to Bert Lahr or Chris Farley. And for years, of course, that’s the part he played in public life, as a writer and performer on Saturday Night Live. When he announced nine years ago that he would return to Minnesota and run for the Senate—when he came out of the closet and tried to present himself as a man of substance—the effect was so disorienting that I, and probably many others, never quite recovered. As a comedian-turned-politician, he was no longer the one and could never quite become the other.
The chubby cheeks and the perpetual pucker, the slightly crossed eyes behind Coke-bottle glasses, the rounded, diminutive torso straining to stay upright under the weight of an enormous head—he was the very picture of Comedy Boy, and suddenly he wanted to be something else: Politics Boy. I have never seen the famously tasteless tearjerker The Day the Clown Cried, in which Jerry Lewis stars as a circus clown imprisoned in a Nazi death camp, but I’m sure watching it would be a lot like watching the ex-funnyman Franken deliver a speech about farm price supports.
Then he came to Washington and slipped right into place. His career is testament to a dreary fact of life here: Taken in the mass, senators are pretty much interchangeable. Party discipline determines nearly every vote they cast. Only at the margins is one Democrat or Republican different in a practical sense from another Democrat or Republican. Some of us held out hope, despite the premonitory evidence, that Franken might use his professional gifts in service of his new job. Yet so desperate was he to be taken seriously that he quickly passed serious and swung straight into obnoxious. It was a natural fit. In no time at all, he mastered the senatorial art of asking pointless or showy questions in committee hearings, looming from his riser over fumbling witnesses and hollering “Answer the question!” when they didn’t respond properly.
It’s not hard to be a good senator, if you have the kind of personality that frees you to simulate chumminess with people you scarcely know or have never met and will probably never see again. There’s not much to it. A senator has a huge staff to satisfy his every need. There are experts to give him brief, personal tutorials on any subject he will be asked about, writers to write his questions for his committee hearings and an occasional op-ed if an idea strikes him, staffers to arrange his travel and drive him here or there, political aides to guard his reputation with the folks back home, press aides to regulate his dealings with reporters, and legislative aides to write the bills should he ever want to introduce any. The rest is show biz.
Oddly, Franken was at his worst precisely when he was handling the show-biz aspects of his job. While his inquisitions in committee hearings often showed the obligatory ferocity and indignation, he could also appear baffled and aimless. His speeches weren’t much good, and he didn’t deliver them well. As if to prove the point, he published a collection of them earlier this year, Speaking Franken. Until Pearl Harbor, he’d been showing signs of wanting to run for president. Liberal pundits were talking him up as a national candidate. Speaking Franken was likely intended to do for him what Profiles in Courage did for John Kennedy, another middling senator with presidential longings. Unfortunately for Franken, Ted Sorensen is still dead.
The final question raised by Franken’s resignation is why so many fellow Democrats urged him to give up his seat so suddenly, once his last accuser came forward. The consensus view involved Roy Moore, in those dark days when he was favored to win Alabama’s special election. With the impending arrival of an accused pedophile on the Republican side of the aisle, Democrats didn’t want an accused sexual harasser in their own ranks to deflect what promised to be a relentless focus on the GOP’s newest senator. This is bad news for any legacy Franken once hoped for himself. None of his work as a senator will commend him to history. He will be remembered instead for two things: as a minor TV star, and as Roy Moore’s oldest victim.
Review of 'Lioness' by Francine Klagsbrun
Golda Meir, Israel’s fourth prime minister, moved to Palestine from America in 1921, at the age of 22, to pursue Socialist Zionism. She was instrumental in transforming the Jewish people into a state; signed that state’s Declaration of Independence; served as its first ambassador to the Soviet Union, as labor minister for seven years, and as foreign minister for a decade. In 1969, she became the first female head of state in the Western world, serving from the aftermath of the 1967 Six-Day War through the nearly catastrophic but ultimately victorious 1973 Yom Kippur War. She resigned in 1974 at the age of 76, after five years as prime minister. Her involvement at the forefront of Zionism and the leadership of Israel thus extended more than half a century.
This is the second major biography of Golda Meir in the last decade, after Elinor Burkett’s excellent Golda in 2008. Klagsbrun’s portrait is even grander in scope. Her epigraph comes from Ezekiel’s lamentation for Israel: What a lioness was your mother / Among the lions! / Crouching among the great beasts / She reared her cubs. The “mother” was Israel; the “cubs,” her many ancient kings; the “great beasts,” the hostile nations surrounding her. One finishes Klagsbrun’s monumental volume, which is both a biography of Golda and a biography of Israel in her time, with a deepened sense that modern Israel, its prime ministers, and its survival is a story of biblical proportions.

Golda Meir’s story spans three countries—Russia, America, and Israel. Before she was Golda Meir, she was Golda Meyerson; and before that, she was Golda Mabovitch, born in 1898 in Kiev in the Russian Empire. Her father left for America after the horrific Kishinev pogrom in 1903, found work in Milwaukee as a carpenter, and in 1906 sent for his wife and three daughters, who escaped using false identities and border bribes. Golda said later that what she took from Russia was “fear, hunger and fear.” It was an existential fear that she never forgot.
In Milwaukee, Golda found socialism in the air: The city had both a socialist mayor and a socialist congressman, and she was enthralled by news from Palestine, where Jews were living out socialist ideals in kibbutzim. She immersed herself in Poalei Zion (Workers of Zion), a movement synthesizing Zionism and socialism, and in 1917 married a fellow socialist, Morris Meyerson. As soon as conditions permitted, they moved to Palestine, where the marriage ultimately failed—a casualty of the extended periods she spent away from home working for Socialist Zionism and her admission that the cause was more important to her than her husband and children. Klagsbrun writes that Meir might appear to be the consummate feminist: She asserted her independence from her husband, traveled continually and extensively on her own, left her husband and children for months to pursue her work, and demanded respect as an individual rather than special treatment based on her gender. But she never considered herself a feminist and indeed denigrated women’s organizations as reducing issues to women’s interests only, and she gave minimal assistance to other women. Klagsbrun concludes that questions about Meir as a feminist figure ultimately “hang in the air.”
Her American connection and her unaccented American English became strategic assets for Zionism. She understood American Jews, spoke their language, and conducted many fundraising trips to the United States, tirelessly raising tens of millions of dollars of critically needed funds. David Ben-Gurion called her the “woman who got the money which made the state possible.” Klagsbrun provides the schedule of her 1932 trip as an example of her efforts: Over the course of a single month, the 34-year-old Zionist pioneer traveled to Kansas City, Tulsa, Dallas, San Antonio, Los Angeles, San Francisco, Seattle, and three cities in Canada. She became the face of Zionism in America—“The First Lady,” in the words of a huge banner at a later Chicago event, “of the Jewish People.” She connected with American Jews in a way no other Zionist leader had done before her.
In her own straightforward way, she mobilized the English language and sent it into battle for Zionism. While Abba Eban denigrated her poor Hebrew—“She has a vocabulary of two thousand words, okay, but why doesn’t she use them?”—she had a way of crystallizing issues in plainspoken English. Of British attempts to prevent the growth of the Jewish community in Palestine, she said Britain “should remember that Jews were here 2,000 years before the British came.” Of expressions of sympathy for Israel: “There is only one thing I hope to see before I die, and that is that my people should not need expressions of sympathy anymore.” And perhaps her most famous saying: “Peace will come when the Arabs love their children more than they hate us.”
Once she moved to the Israeli foreign ministry, she changed her name from Meyerson to Meir, in response to Ben-Gurion’s insistence that ministers assume Israeli names. She began a decade-long tenure there as the voice and face of Israel in the world. At a Madison Square Garden rally after the 1967 Six-Day War, she observed sardonically that the world called Israelis “a wonderful people,” complimented them for having prevailed “against such odds,” and yet wanted Israel to give up what it needed for its self-defense:
“Now that they have won this battle, let them go back where they came from, so that the hills of Syria will again be open for Syrian guns; so that Jordanian Legionnaires, who shoot and shell at will, can again stand on the towers of the Old City of Jerusalem; so that the Gaza Strip will again become a place from which infiltrators are sent to kill and ambush.” … Is there anybody who has the boldness to say to the Israelis: “Go home! Begin preparing your nine and ten year olds for the next war, perhaps in ten years.”
The next war would come not in ten years, but in six, and while Meir was prime minister.
Klagsbrun’s extended discussion of Meir’s leadership before, during, and after the 1973 Yom Kippur War is one of the most valuable parts of her book, enabling readers to make informed judgments about that war and assess Meir’s ultimate place in Israeli history. The book makes a convincing case that there was no pre-war “peace option” that could have prevented the conflict. Egypt’s leader, Anwar Sadat, was insisting on a complete Israeli withdrawal before negotiations could even begin, and Meir’s view was, “We had no peace with the old boundaries. How can we have peace by returning to them?” She considered the demand part of a plan to push Israel back to the ’67 lines “and then bring the Palestinians back, which means no more Israel.”
A half-century later, after three Israeli offers of a Palestinian state on substantially all the disputed territories—with the Palestinians rejecting each offer, insisting instead on an Israeli retreat to indefensible lines and recognition of an alleged Palestinian “right of return”—Meir’s view looks prescient.
Klagsbrun’s day-by-day description of the ensuing war is largely favorable to Meir, who relied on assurances from her defense minister, Moshe Dayan, that the Arabs would not attack, and assurances from her intelligence community that, even if they did, Israel would have 48 hours’ notice—enough time to mobilize the reserves that constituted more than 75 percent of its military force. Both sets of assurances proved false, and the joint Egyptian-Syrian attack took virtually everyone in Israel by surprise. Dayan had something close to a mental breakdown, but Meir remained calm and in control after the initial shock, making key military decisions. She was able to rely on the excellent personal relationships she had established with President Nixon and his national security adviser, Henry Kissinger, and the critical resupply of American arms that enabled Israel—once its reserves were called into action—to take the war into Egyptian and Syrian territories, with Israeli forces camped in both countries by its end.
Meir had resisted the option of a preemptive strike against Egypt and Syria when it suddenly became clear, 12 hours before the war started, that coordinated Egyptian and Syrian attacks were coming. On the second day of the war, she told her war cabinet that she regretted not having authorized the IDF to act, and she sent a message to Kissinger that Israel’s “failure to take such action is the reason for our situation now.” After the war, however, she testified that, had Israel begun the war, the U.S. would not have sent the crucial assistance that Israel needed (a point on which Kissinger agreed), and that she therefore believed she had done the right thing. A preemptive response, however, or a massive call-up of the reserves in the days before the attacks, might have avoided a war in which Israel lost 2,600 soldiers—the demographic equivalent of all the American losses in the Vietnam War.
It is hard to fault Meir’s decision, given the erroneous information and advice she was receiving from all her defense and intelligence subordinates, but it is a reminder that for Israeli prime ministers (such as Levi Eshkol in the Six-Day War, Menachem Begin with the Iraq nuclear reactor in 1981, and Ehud Olmert with the Syrian one in 2007), the potential necessity of taking preemptive action always hangs in the air. Klagsbrun’s extensive discussion of the Yom Kippur War is a case study of that question, and an Israeli prime minister may yet again face that situation.
The Meir story is also a tale of the limits of socialism as an organizing principle for the modern state. Klagsbrun writes about “Golda’s persistent—and hopelessly utopian—vision of how a socialist society should be conducted,” exemplified by her dream of instituting commune-like living arrangements for urban families, comparable to those in the kibbutzim, where all adults would share common kitchens and all the children would eat at school. She also tried to institute a family wage system, in which people would be paid according to their needs rather than their talents, a battle she lost when the unionized nurses insisted on being paid as professionals, based on their education and experience, and not the sizes of their families.
Socialism foundered not only on the laws of economics and human nature but also on the realities of foreign relations. In 1973, enraged that the socialist governments and leaders in Europe had refused to come to Israel’s aid during the Yom Kippur War, Meir convened a special London conference of the Socialist International, attended by eight heads of state and a dozen other socialist-party leaders. Before the conference, she told Willy Brandt, Germany’s socialist chancellor, that she wanted “to hear for myself, with my own ears, what it was that kept the heads of these socialist governments from helping us.”
In her speech at the conference, she criticized the Europeans for not even permitting “refueling the [American] planes that saved us from destruction.” Then she told them, “I just want to understand … what socialism is really about today”:
We are all old comrades, long-standing friends. … Believe me, I am the last person to belittle the fact that we are only one tiny Jewish state and that there are over twenty Arab states with vast territories, endless oil, and billions of dollars. But what I want to know from you today is whether these things are the decisive factors in Socialist thinking, too?
After she concluded her speech, the chairman asked whether anyone wanted to reply. No one did, and she thus effectively received her answer.
One wonders what Meir would think of the Socialist International today. On the centenary of the Balfour Declaration last year, the World Socialist website called it “a sordid deal” that launched “a nakedly colonial project.” Socialism was part of the cause for which she went to Palestine in 1921, and it has not fared well in history’s judgment. But the other half—Zionism—became one of the great successes of the 20th century, in significant part because of the lifelong efforts of individuals such as she.
Golda Meir has long been a popular figure in the American imagination, particularly among American Jews. Her ghostwritten autobiography was a bestseller; Ingrid Bergman played her in a well-received TV film; Anne Bancroft played her on the Broadway stage. But her image as the “71-year old grandmother,” as the press frequently referred to her when she became prime minister, has always obscured the historic leader beneath that façade. She was a woman with strengths and weaknesses who willed herself into half a century of history. Francine Klagsbrun has given us a magisterial portrait of a lioness in full.
Back in 2016, then–deputy national-security adviser Ben Rhodes gave an extraordinary interview to the New York Times Magazine in which he revealed how President Obama exploited a clueless and deracinated press to steamroll opposition to the Iranian nuclear deal. “We created an echo chamber,” Rhodes told journalist David Samuels. “They”—writers and bloggers and pundits—“were saying things that validated what we had given them to say.”
Rhodes went on to explain that his job was made easier by structural changes in the media, such as the closing of foreign bureaus, the retirement of experienced editors and correspondents, and the shift from investigative reporting to aggregation. “The average reporter we talk to is 27 years old, and their only reporting experience consists of being around political campaigns,” he said. “That’s a sea change. They literally know nothing.”
And they haven’t learned much. It was dispiriting to watch in December as journalists repeated arguments against the Jerusalem decision presented by Rhodes on Twitter. On December 5, quoting Mahmoud Abbas’s threat that moving the U.S. Embassy to Jerusalem would have “dangerous consequences,” Rhodes tweeted, “Trump seems to view all foreign policy as an extension of a patchwork of domestic policy positions, with no regard for the consequences of his actions.” He seemed blissfully unaware that the same could be said of his old boss.
The following day, Rhodes tweeted, “In addition to making goal of peace even less possible, Trump is risking huge blowback against the U.S. and Americans. For no reason other than a political promise he doesn’t even understand.” On December 8, quoting from a report that the construction of a new American Embassy would take some time, Rhodes asked, “Then why cause an international crisis by announcing it?”
Rhodes made clear his talking points for the millions of people inclined to criticize President Trump: Acknowledging Israel’s right to name its own capital is unnecessary and self-destructive. Rhodes’s former assistant, Ned Price, condensed the potential lines of attack in a single tweet on December 5. “In order to cater to his political base,” Price wrote, “Trump appears willing to: put U.S. personnel at great risk; risk C-ISIL [counter-ISIL] momentum; destabilize a regional ally; strain global alliances; put Israeli-Palestinian peace farther out of reach.”
Prominent media figures happily reprised their roles in the echo chamber. Susan Glasser of Politico: “Just got this in my in box from Ayman Odeh, leading Arab Israeli member of parliament: ‘Trump is a pyromaniac who could set the entire region on fire with his madness.’” BBC reporter Julia Macfarlane: “Whether related or not, everything that happens from now on in Israel and the Pal territories will be examined in the context of Trump signaling to move the embassy to Jerusalem.” Neither Rhodes nor Price could have asked for more.
Network news broadcasts described the president’s decision as “controversial” but only reported on the views of one side in the controversy. Guess which one. “There have already been some demonstrations,” reported NBC’s Richard Engel. “They are expected to intensify, with Palestinians calling for three days of rage if President Trump goes through with it.” Left unmentioned was the fact that Hamas calls for days of rage like you and I call for pizza.
Throughout Engel’s segment, the chyron read: “Controversial decision could lead to upheaval.” On ABC, George Stephanopoulos said, “World leaders call the decision dangerous.” On CBS, Gayle King chimed in: “U.S. allies and leaders around the world say it’s a big mistake that will torpedo any chance of Middle East peace.” Oh? What were the chances of Middle East peace prior to Trump’s speech?
On CNN, longtime peace processor Aaron David Miller likened recognizing Jerusalem to hitting “somebody over the head with a hammer.” On MSNBC, Chris Matthews fumed: “Deaths are coming.” That same network featured foreign-policy gadfly Steven Clemons of the Atlantic, who said Trump “stuck a knife in the back of the two-state process.” Price and former Obama official Joel Rubin also appeared on the network to denounce Trump. “American credibility is shot, and in diplomacy, credibility relies on your word, and our word is, at this moment, not to be trusted from a peace-process perspective, certainly,” Rubin said. This from the administration that gave new meaning to the words “red line.”
Some journalists were so devoted to Rhodes’s tendentious narrative of Trump’s selfishness and heedlessness that they mangled the actual story. “He had promised this day would come, but to hear these words from the White House was jaw-dropping,” said Martha Raddatz of ABC. “Not only signing a proclamation reversing nearly 70 years of U.S. policy, but starting plans to move the embassy to Jerusalem. No one else on earth has an embassy there!” How dare America take a brave stand for a small and threatened democracy!
In fact, Trump was following U.S. policy as legislated by the Congress in 1995, reaffirmed in the Senate by a 90–0 vote just last June, and supported (in word if not in deed) by his three most recent predecessors as well as the last four Democratic party platforms. Most remarkable, the debate surrounding the Jerusalem policy ignored a crucial section of the president’s address. “We are not taking a position on any final-status issues,” he said, “including the specific boundaries of Israeli sovereignty in Jerusalem, or the resolution of contested borders. Those questions are up to the parties involved.” What the United States did was simply accept the reality that the city that houses the Knesset and where the head of government receives foreign dignitaries is the capital of Israel.
However, just as had happened during the debate over the Iran deal, the facts were far less important to Rhodes than the overarching strategic goal. In this case, the objective was to discredit and undermine President Trump’s policy while isolating the conservative government of Israel. Yet there were plenty of reasons to be skeptical toward the disingenuous duo of Rhodes and Price. Trump’s announcement was bold, for sure, but the tepid protests from Arab capitals more worried about the rise of Iran, which Rhodes and Price facilitated, than the Palestinian issue suggested that the “Arab street” would sit this one out.
Which is what happened. Moreover, verbal disagreement aside, there is no evidence that the Atlantic alliance is in jeopardy. Nor has the war on ISIS lost momentum. As for putting “Israeli–Palestinian peace farther out of reach,” if third-party recognition of Jerusalem as Israel’s capital forecloses a deal, perhaps no deal was ever possible. Rhodes and Price would like us to overlook the fact that the two sides weren’t even negotiating during the Obama administration—an administration that did as much as possible to harm relations between Israel and the United States.
This most recent episode of the Trump show was a reminder that some things never change. Jerusalem was, is, and will be the capital of the Jewish state. President Trump routinely ignores conventional wisdom and expert opinion. And whatever nonsense President Obama and his allies say today, the press will echo tomorrow.