Rubinstein the Great Entertainer
When Arthur Rubinstein died last December at the age of ninety-five, there was remarkably little feeling of loss in the musical community. As had been the case with his life, Rubinstein’s death too seemed natural, another fulfillment of the kind which appeared (at least to onlookers) always to have been his lot.
Rubinstein enjoyed a long and splendid career. Born in 1887, he was before the public from the 1890’s to the 1970’s, a period beginning with the vigorous manhood of Claude Debussy and ending with the old age of John Cage. Despite this almost unparalleled longevity as a performer, he was never, even during the years of his phenomenal success, perceived as the world’s greatest pianist. During the 1920’s, for example, this title was shared by Josef Hofmann and Sergei Rachmaninoff; from the mid-1930’s to the present day, the undisputed champion has been Vladimir Horowitz. And if the applicable title were to be not the world’s greatest pianist but the world’s greatest musician-pianist, the names of Artur Schnabel, Alfred Cortot, and Edwin Fischer would seem, for most music lovers, beyond compare.
Hofmann, Rachmaninoff, and Horowitz all belong to the class of virtuosi, those who astound by feats of dexterity, lightness, and elegantly applied force. Schnabel, Cortot, and Fischer are regarded as thinkers, those whose musical ideas are always prior to, and more interesting than, mere mechanical execution. Rubinstein, by contrast, did not astonish with his fingers, and he did not inspire with his mind. He did both less and more: he gave pleasure, he made his listeners happy—in a word, he entertained. Not only was this his claim to fame and riches, it is now his claim to our admiration.
The story of Rubinstein’s life is to be found, no doubt often highly embroidered, in two marvelous volumes of memoirs, written in the 1970’s when his failing eyesight and strength made further piano playing difficult.1 These more than one thousand pages tell of a Jewish prodigy from Poland who managed, at the age of three, to impress the great violinist Joachim, friend and adviser to Brahms. From the age of ten, Rubinstein (under Joachim’s guidance) studied in Berlin: piano with Heinrich Barth and theory with Max Bruch. He worked a bit with Paderewski, and, perhaps more important, received a notable éducation sentimentale from numerous women who were only too charmed by his extreme youth and passionate eagerness to please.
Almost out of his teens, Rubinstein began to concertize extensively, though not yet profitably. In 1906 he toured the United States under the sponsorship of the Knabe Piano Company, giving 75 concerts that were not very successful. Residence in Paris brought him into contact with the jeunesse dorée, and some of its aristocratic elders as well. Great names of society and music whizzed in and out of his life; he dined out much more often and more regularly than he practiced the piano. When he visited Poland, he hardly saw his family, choosing instead to pass the time in the great world of Warsaw. And wherever he lived, he was on a kind of dole.
Such a state of affairs was hardly tenable. In 1908, this creature, who had so clearly been born to gladden hearts, attempted suicide in Berlin, the scene of his dreary student days. The attempt itself—at least as he himself describes it—was farcical, but in his memoirs he adds bathos to farce as he tells what happened next:
Then, half-consciously, I staggered to the piano and cried myself out in music. Music, my beloved music, the dear companion of all my emotions, who can stir us to fight, who can inflame in us love and passion, and who can soothe our pains and bring peace to our hearts—you are the one who, on that ignominious day, brought me back to life.
Beyond the self-indulgence, something important had happened. From this day forward, Rubinstein was the man we have always known: “. . . I discovered the secret of happiness and I still cherish it: Love life for better for worse, without conditions.”
Though worldly success was still a few years off, Rubinstein was now ready to receive it. His musical reputation grew, as did his contacts with such famous artists as Leopold Godowsky, Pablo Casals, and Eugène Ysaÿe. In London, just before and during World War I, he laid the foundation for his later English triumphs. Indeed, it was in London in 1915 that Rubinstein received a concert offer which was to mark the beginning of fame and fortune: an invitation to play the Brahms D minor Concerto in San Sebastián.
_____________
Rubinstein came to Spain and conquered. In his memoirs he is characteristically frank in describing what happened in San Sebastián:
The concert was not well attended; the theater was only half-filled. But my personal success, after this monumental and sober work, was absolutely sensational. No Saint-Saëns, no Liszt, no Chopin, had ever excited a public to that extent.
During the 1916-17 season he gave more than a hundred concerts in Spain. With this kind of success, it was hardly surprising that he soon received an invitation to Argentina. There, and in the rest of South America, he scored a success even greater than in Spain.
For Rubinstein, the 1920’s marked an extraordinary period in which he combined the life of an artist with that of a boulevardier. He immersed himself in the currents of modern art; he was a friend of the French Les Six and of Jean Cocteau, the major influence upon that group. He associated with, and played the music of, Karol Szymanowski, Poland’s greatest 20th-century composer. He performed widely a piano transcription of Stravinsky’s Petrushka which the composer himself had written for him. He became close to Manuel de Falla in Spain and to Heitor Villa-Lobos in Brazil, and he played their works all over the world. De Falla’s “Ritual Fire Dance” from El amor brujo and, to a lesser extent, Villa-Lobos’s short Polichinelle became Rubinstein’s ubiquitous musical signature.
The 1920’s also saw the beginnings of Rubinstein’s prolific career as a maker of phonograph records.2 In a remarkable display of constancy in this age of shifting commercial arrangements, the pianist spent more than a half-century with just two recording companies: first His Master’s Voice in England, then its affiliate RCA after he became an American resident in the 1940’s. It was his work in the recording studio, combined with the advent of Vladimir Horowitz as a virtuoso technician, that convinced Rubinstein that in performance he needed to do more than just give the spirit of a composition, letting the exact notes fall (as he always had) where they might.
In his personal life, too, Rubinstein was now ready to settle down. In 1932, this confirmed bachelor married a woman half his age. Aniela Mlynarska was the daughter of Emil Mlynarski, the foremost Polish conductor of the day. She provided Rubinstein with a family—they eventually had four children—and the kind of social stability he craved. The ensuing fifty years were a whirlwind of concerts and tours, of elegant homes on both coasts of the United States and in Paris, of endless supplies of wine, song, and lobster, if not (as before) women.
Rubinstein continued to play almost into his nineties. Indeed, it seemed that his appetite for playing, and his strength to indulge the appetite, grew as he himself grew older. In the mid-1950’s, for instance, in a series of five concerts repeated in Paris, London, and New York, he played again seventeen of the concertos he had done over the years: works by Beethoven, Brahms, Chopin, Schumann, Mozart, Liszt, Saint-Saëns, Rachmaninoff, Franck, Grieg, and de Falla. In just a single concert he would do both Brahms concertos or three Beethoven concertos. In 1961, he gave a series of ten solo recitals in New York, playing different pieces on each program.
His last concert was a benefit at Wigmore Hall in London in April 1976. Once again, his memoirs sum up both the moment and his retrospective feelings about it:
As for myself, it was a symbolic gesture; it was in this hall that I had given my first recital in London [in 1912] and playing there for the last time in my life made me think of my whole career in the form of a sonata. The first movement represented the struggles of my youth, the following andante [stood] for the beginning of a more serious aspect of my talent, a scherzo represented well the unexpected great success, and the finale turned out to be a wonderful moving end.
_____________
Now that Rubinstein is dead, it is at last possible to assess his achievement as an artist. No one will dispute that his audiences enjoyed his concerts. But there is a deeper question to be answered: just how well did Rubinstein play?
Perhaps the best place to begin is with some of the numerous and widely available stereo LP recordings Rubinstein made for RCA during the 1950’s, 1960’s, and 1970’s. There are something like one hundred of them, and they cover, with few exceptions, the repertory he played during his lifetime. Here are most of the great romantic concertos and many of the classical works for piano and orchestra; here too are almost all the solo works of Chopin, several of the most popular Beethoven sonatas, some of the most important solo works of Schumann, and a smattering of the earlier 20th-century music Rubinstein played not out of duty but out of liking. And there are numerous examples here of chamber music for piano and strings, a genre which Rubinstein cultivated even at the times of his busiest concert activity.
Listening to these records en masse does make clear just what—in addition to Rubinstein’s infectiously ebullient stage personality—gave his audiences so much pleasure. In his records, one always hears clearly articulated melodies, proudly carried high above their pianistic background. Yet these records also bear out Rubinstein’s reputation among musicians: rarely do the performances seem unique documents either of pure piano-playing or of compelling cerebration.
The records are at their weakest, it seems to me, in performances of pre-Romantic music. Rubinstein’s approach to Mozart, as demonstrated in his concerto recordings, is heavy, often wayward in articulation, and immensely dutiful. It is of some significance, too, that the orchestral background (most likely at Rubinstein’s choice) is romantically sweet and overly full of feeling, rather than classically energetic and astringent as is required if the solo part is to be heard in proper context.
As for Rubinstein’s recordings of the Beethoven concertos, of which the 1960’s set with Erich Leinsdorf and the Boston Symphony and the 1970’s set with Daniel Barenboim and the London Philharmonic are both currently available, they offer clear examples of how far a pianist can go by knowing how the music ought to sound, even if the physical ability necessary to implement this conception is rapidly waning. Not surprisingly, the earlier set, made when the pianist was “only” in his mid-seventies, seems somewhat fresher and less tenuous; the latter, made more than a decade later, suggests the “shipwreck” that Charles de Gaulle called old age. These are painful documents, not least because of the listener’s constant awareness of an intensely captivating personality here defeated by infirmity.
On records as in concert, Rubinstein shied away from the late Beethoven sonatas (though in his much younger days he did play the Sonata in B flat major, opus 106, the “Hammerklavier”). But he did often play such earlier works as the Pathétique (C minor, opus 13) and the “Appassionata” (F minor, opus 57). His early 1960’s recordings of these works, though technically adequate, seem cautious by comparison both with his reputation as a firebrand and with his 1950’s recordings of the same works. For those used to the performances of Beethoven specialists, Rubinstein’s approach will inevitably seem decorative, as if he were bemused by the local beauties of the music rather than concerned to communicate the strong bones of Beethoven’s structures.
_____________
Rubinstein was renowned during his American heyday as a Brahms interpreter, and those fortunate enough to have heard him play the B flat Concerto in concert as late as 1960 will recall the magisterial approach he brought to this work. A 1959 recording with Josef Krips and the RCA Symphony Orchestra and a 1960 concert recording with Witold Rowicki and the Warsaw Philharmonic demonstrate not only how completely Rubinstein identified with this style, at once knotty and luxuriantly romantic, but also how well its technical problems were under his control even as he grew older. By contrast, his last recording of the piece, with Eugene Ormandy and the Philadelphia Orchestra about 1970, though pianistically vastly superior to his final Beethoven concerto efforts, can be no more than a souvenir for those who remember the artist in earlier and better days.
One of the most attractive features of Rubinstein’s Brahms playing was his characteristically rich, deep tone, simultaneously tender and strong. In his concerts, this tone was always in the forefront; though it did not always survive in reproduction on modern records, it can be heard in the numerous short solo pieces of Brahms recorded by Rubinstein in the early days of stereo. The same tone remains in evidence in the pianist’s discs of Schumann, which include the famous A minor Concerto (with Carlo Maria Giulini and the Chicago Symphony) and such solo pieces as the Carnaval, the Fantasiestücke opus 12, and the Symphonic Etudes. Here, on 1960’s stereo issues, there is much to admire in sensitivity and the sheer ability to make melodies and harmonies easily discernible by the listener; yet these performances too seem to suffer from a certain digital lethargy, as if the pianist were having trouble getting his fingers and hands up from the keys quickly enough to provide the necessary space between the notes.
_____________
Throughout his American career, Rubinstein was most famous as a Chopinist. His Chopin repertory was enormous, and he drew on it often in his recitals. He recorded Chopin in quantity three times: first on 78-RPM for HMV in England during the 1930’s, then for RCA on mono LP in the 1950’s, and then finally on stereo (again for RCA) in the late 1950’s and early 1960’s. Many of these last records, including the Barcarolle, the Ballades, both Concertos, the Mazurkas, the Nocturnes, the Polonaises, both major Sonatas, and the Waltzes, are still easily available; together they give a coherent picture of Rubinstein’s Chopin playing at the end of his career.
That picture is essentially ruminative, gentle, often introverted, and also often backward in rhythmic impetus. The Chopin presented by the later Rubinstein is a poet rather than a virtuoso, a self-reflecting musician rather than the heroic lion of the keyboard. This essentially miniaturist approach, in Rubinstein’s hands, is capable of producing many felicities; on occasion, as in the Impromptus and the Berceuse, or in the quieter Mazurkas and Nocturnes, it is decidedly effective. But when the music is itself on a larger canvas, as in the Barcarolle and the Polonaises, we are reminded all too often that what we are hearing is an old man’s Chopin, a musical suit cut to fit the cloth of necessary caution.
Even where Rubinstein evidently decides to gamble, to push his fingers beyond their comfortable competence, as in the later recordings of the two Chopin concertos, the result is forced and artificially brilliant; the whole somehow suggests those reproductions of paintings in which special care has been taken to make the colors seem bright and compelling. There is a great difference between this kind of straining after survival and the true art of concerto playing which properly consists in the soloist’s constantly shaping the entire performance, including that of often unresponsive orchestras and conductors. To do this requires a kind of forcefulness Rubinstein clearly no longer possessed.
Enough has been said here to paint the essential outlines of the Rubinstein we can now hear in stereo. His recordings of later music, including the Rachmaninoff C minor Concerto, the Paganini Rhapsody, and the Tchaikowsky B flat minor Concerto, are still in the catalogues. The late recording of the Rachmaninoff C minor with Eugene Ormandy and the Philadelphia is notey and tame, altogether inferior to the earlier stereo version (still available) with Fritz Reiner and the Chicago. The Paganini Rhapsody (again with Reiner and the Chicago) is a not very satisfactory account of a score Rubinstein learned relatively late in his life and with which he always had technical trouble. The Tchaikowsky, with Erich Leinsdorf and the Boston, is a routine account of a work which Rubinstein’s arch-rival Vladimir Horowitz made peculiarly his own (in recordings with Toscanini).
If the 1960 performance of the Brahms B flat Concerto is a magisterial document, then one side of a disc containing excerpts from the ten-concert series in New York in 1961 is a document both of the intimate Rubinstein and of Rubinstein the performer of 20th-century music. On this record, no longer available, the pianist plays twelve Visions Fugitives (from opus 22) of Prokofiev and the Prole do Bebê Suite (1918) by Villa-Lobos. His playing treats every note with seriousness, commitment, and, above all, with a plentiful fancy; the result is delectable, and sad too, in its way: still more evidence, if more were needed, of the fundamentally wrong road taken by piano music sometime after the 1920’s.
_____________
It goes without saying that in the last two decades of his life Arthur Rubinstein played magnificently for a man of his age; it also goes without saying that his audiences, to the very end, were conscious of receiving full value. If such factors were all that were relevant in the making of musical judgments, then Rubinstein’s career could now be seen as having reached its greatest triumph at its close.
But more is involved than an audience memory of Arthur Rubinstein, even though that memory is one of pleasure. Although there once was a time when all that was left of an artist’s reputation after his death lay secreted in the fading and inaccurate memories of concert-goers, now everything is different. Proof of that difference has been the very fact that I have been able to examine Rubinstein’s playing not just memory by memory, but by listening to note after note. It is the phenomenon of sound recording which has made this difference; and it is the enormously prestigious Rubinstein recorded archive which requires us to take his playing seriously.
For these recordings are now widely taken as imperishable documents of an authentic tradition; as such, they will continue to shape the expectations and perceptions of audiences. Because this is the way such music is supposed to sound, this is the way audiences will want it to sound. As far as young performers (and their teachers) are concerned, it can be put crassly: here is what succeeded. With this model as a guide, others too may find fame and fortune at the keyboard.
Lest this seem cynical, consider the extent to which today’s pianists, regardless of age, sound old; indeed, the present generation of musicians has turned the adage “wise beyond one’s years” into anything but a compliment. Among pianists, outbursts of brilliance are too often seen as proof of immaturity and unmusicality; every fast tempo and forceful dynamic scheme is taken as a sign of insensitivity. What is the antidote to these artistic shortcomings? Listen to the great, students are told.
Given the synthetic character of creative musical life today, it would be quixotic to ask that audiences and musicians cease listening to recordings. And as for music criticism, whatever else it might or might not be able to do, it can hardly be expected to inspire originality. Critics have little choice but to make distinctions, to point out better and worse. Fortunately, such an act of discrimination is possible in the case of Arthur Rubinstein, for in addition to the records I have been discussing, there is another kind of playing to be heard from this pianist.
I refer to Rubinstein’s earlier recordings. By earlier I do not, for the most part, mean his mono LP discs, or even the many 78’s he made in this country during the 1940’s. Despite the presence among those recordings of several excellent performances—in particular chamber music with Heifetz and Feuermann (later Piatigorsky) and both the Symphonie Concertante and the first four Mazurkas of Szymanowski—Rubinstein’s playing on them often sounds hard and brittle, as if he were attempting to give a perfect performance for the microphone.
Such, indeed, may well have been the case. Much has been made, correctly, of the chilling effect that the meteoric rise of Vladimir Horowitz had on Rubinstein’s perception of his own technical abilities; too little has been said in this connection of the pianist’s own thoughts on the impact of the phenomenon of recording. In the second volume of his memoirs, describing his life in Paris just before the outbreak of World War II, Rubinstein writes:
My readers will certainly be astonished that now I seem to barely mention my music making and my concerts, but to describe long tours, concert by concert with detailed programs, is utterly impossible. All I can say now is that my playing improved considerably, mainly due to the fact that the American public was more demanding than any other, and also to my recordings, which had to be note-perfect and inspired. The result was that I learned to love practicing and to discover new meanings in the works I performed.
But did Rubinstein’s playing improve? The answer—coming, as it can only come, from recordings of this period—must be no. For a reasonably large body exists of Rubinstein recordings made at a still earlier moment, on European if not on American labels; and this body of recordings from before the late 1930’s provides eloquent testimony that Arthur Rubinstein was once a supremely great pianist, with a supremely interesting and exciting personal approach to music.
_____________
Perhaps the earliest of Rubinstein’s HMV records was a disc he made in 1928 of the Schubert Impromptu in A flat major (D. 899, 4) and the Chopin Waltz, again in A flat major, opus 34, no. 1.3 The Schubert is searchingly musical and pianistically magnificent; comparison with the later and now standard recordings of Fischer and Schnabel suggests that only Schnabel was in Rubinstein’s league as a Schubert player. The Chopin is by turns tender, gay, and brilliant; the technical mastery Rubinstein possessed at this time is made startlingly clear in his ability to play difficult decorative figures at extreme speed and with exemplary clarity.
The next record in the HMV numbering series is the Chopin Barcarolle.4 Not only is this performance distinguished by a piano tone beautiful even for Rubinstein (in his memoirs, we are told that it was done on a Blüthner rather than the more likely Steinway or Bechstein); it also brings together, on a large scale, the same combination of insight and virtuosity the pianist shows in the Schubert and Chopin “miniatures.” Here, in the grandest romantic music, is freedom elevated to the rank of order. And on a purely mechanical level, informed ears will hear on this disc remarkable trills, octaves, and runs.
The next year Rubinstein made a record of his—and his audience’s—beloved Spanish music.5 Navarra and Sevilla of Albeniz are authentic crowd pleasers, and they are also tests both of a pianist’s rhythmic sense and of his ability to play dense chordal masses at great speed without heaviness. Rubinstein succeeds magnificently, and one’s pleasure in the gorgeous color and ease he brings to this music is hardly diminished by the liberties he takes in the Navarra with the exact text Albeniz prescribed.
On the next record of Rubinstein issued by HMV at this time,6 we find an unlikely combination: the Brahms Capriccio in B minor, opus 76, no. 2, and the Debussy Prelude from Book I, La Cathédrale engloutie. Each performance is completely in the character of the music it presents, and one scarcely knows which to admire more, the yoking of a serious approach with a light piano tone in the Brahms or the bell-like clarity of the sonorities in the Debussy.
_____________
Because Rubinstein had made no prior recordings with orchestra, perhaps the greatest interest attaches to his discs of the Brahms B flat Concerto made in 1929 (or 1930: the memoirs are unclear on the matter).7 He seems to have been uncomfortable during the recording sessions, because he was physically separated from the conductor, Albert Coates, with whom in any case he had had no chance to rehearse. Rubinstein wanted the takes destroyed, and one can understand his reasons: he plays wrong notes galore, and the orchestra (the London Symphony) is hardly first-class. But the performance is still extraordinary; Rubinstein plays without caution, as if in full confidence of his ability to get the keys down properly without taking individual aim at each one. There is no point in looking to this recording for the ultimate in realized perfection; there is every reason to cite it as an example of that pianistic attitude of risk and force which must underlie concerto playing.
Much has been written about Rubinstein’s 1931 recording of the Chopin F minor Concerto with John Barbirolli and the London Symphony Orchestra;8 it is enough here to remark that, for those fortunate enough to own the original 78’s or to have access to the LP reissue, it still sets the standard for richness of tone and intimate force of conception. The recording, probably made the next year, of the Brahms Sonata in D minor for violin and piano, in which Rubinstein appears with his Polish compatriot Paul Kochanski,9 is an extraordinary example of chamber-music playing. Kochanski, sadly an under-recorded violinist, plays beautifully; Rubinstein is able to make soft piano phrases clear without drowning his partner out.
Rubinstein’s recording of the Triana of Albeniz (again, as with the Navarra, in his own version)10 maintains the caliber of his achievement in Spanish music; three Villa-Lobos pieces on the other side of this disc document not just wonderfully attractive music, but also the incredible hand coordination Rubinstein deployed at this time. It is difficult to praise too highly his 1932 recording of all the Chopin Scherzos11 and the 1934-35 discs of the complete Polonaises.12 In their combination of power and beauty they are unrivalled. Exceptional among these performances are those of the B minor Scherzo and the two famous Polonaises, the so-called “Military” in A major and the “Heroic” in A flat major. Whether one fastens upon the passage work, the repeated chords, the rapid left-hand octaves, or just the sustained cantilena, here is a summit of Chopin playing and of piano playing altogether.
One recording from this period remains to be mentioned. As I suggested earlier, we have grown to associate the Tchaikowsky Concerto in B flat minor with the name of Horowitz; his supercharged performances with Toscanini seem about as far as human capacities can go in the direction of icy brilliance, breakneck excitement, and the extremes of strength and speed. Rubinstein, however, made a recording of this work almost a decade before Horowitz;13 done with Barbirolli and the London Symphony, it was a great seller before (though not after) the first Horowitz album appeared in 1941.
Rubinstein’s performance of the Tchaikowsky on this early recording is lighter than Horowitz’s. Only a little, if at all, slower, it is not so relentlessly driven, and it is a good deal more “romantic.” Indeed, instead of the Horowitz excitement, Rubinstein supplies sentiment. Today, after a generation of pianistic attempts to imitate Horowitz’s daggerlike fingers, it would seem that Rubinstein’s more luxuriant approach wears rather better.
_____________
These early records, taken together, go a long way toward explaining Rubinstein’s success in the concert hall. He provided technique, daring, emotion, tenderness, power, all in about equal measure. Fortunately, evidence of just what he did supply in concert (rather than in the studio) can be found on recordings made live without subsequent editing. In this regard, two performances from the 1940’s stand out. They are both of concertos: one, from 1944, of the Beethoven C minor with Toscanini and the NBC Symphony,14 and the other, from 1947, of the Chopin E minor with Bruno Walter and the New York Philharmonic.15
Here, collaborating with great conductors and orchestras, Rubinstein does indeed prove himself the supreme entertainer among pianists—not because he was a show-off in the manner of a Pavarotti, but because he brought the culture of a great musician to the pleasurable re-creation of the greatest art. Though he gladly accepted the love and homage of the audience, he gave in return an authentic experience of the highest culture of the 19th century. That he did this for so many, and for so many years, is proof enough that in calling him an entertainer one is not denigrating him, but rather raising him far above the pack of applause-mongers whom music-lovers know today as “stars.”
1 My Young Years (1973) and My Many Years (1980).
2 Material has recently come to light, through a letter by James Methuen-Campbell in the April 1983 Gramophone, suggesting that Rubinstein made at least one disc for a Polish company around 1910. This record, of the Liszt Twelfth Rhapsody and the Strauss Blue Danube Waltz, is in the possession of the Polish Radio, whence it will doubtless emerge on some appropriate state occasion.
3 DB 1160; currently available on LP as EMI Electrola 1C 151-03 244/5.
4 DB 1161; available during the 1960's as EMI Odeon QALP 10363 (Italy).
5 DB 1257; EMI Electrola 1C 151-03 244/5.
6 DB 1258; available on EMI Electrola 1C 151-03 244/5.
7 D 1746/60; available on Supraphon 1010 2856.
8 DB 1494/7; available on EMI Dacapo 1C 053-10172.
9 DB 1728/30.
10 DB 1762; available on EMI Odeon QALP 10363 (Italy).
11 DB 1915/8; available on EMI Electrola 1C 187-50 357/8.
12 DB 2493/500; available on EMI Electrola 1C 187-50 357/8.
13 DB 1731/4.
14 RCA DM 1016 (78 RPM).
15 Bruno Walter Society BWS 740 (private recording).
Rubinstein the Great Entertainer
Must-Reads from Magazine
On His Watch
The meltdown of Syria. The rise of ISIS. The worst refugee crisis of our time. Homegrown terror in the United States.
hree days after ISIS’s mass-casualty assault on Paris, Barack Obama proclaimed that the U.S. policy he had authorized to defeat the terrorist organization was nonetheless working. “We have the right strategy,” he told reporters who had come with him to Turkey for the G-20 Summit, “and we’re gonna see it through.” The international press was incredulous. The president seemed to be standing behind his claim, made the day before the attacks, that ISIS was “contained.” How could Obama still say that the fight was succeeding? Reporters fired back with a series of questions. An AFP correspondent set the tone: “One hundred and twenty-nine people were killed in Paris on Friday night,” he said. “ISIL claimed responsibility for the massacre, sending the message that they could now target civilians all over the world. The equation has clearly changed. Isn’t it time for your strategy to change?” It was the thought on everyone’s mind—and it seemed to offend the leader of the free world. He became impatient, and assured one journalist after another he was correct. By the time CNN’s Jim Acosta asked bluntly, “Why can’t we take out these bastards?” Obama was in high dudgeon. “If folks want to pop off and have opinions about what they think they would do, present a specific plan,” he said. “If they think that somehow their advisers are better than the chairman of my joint chiefs of staff and the folks who are actually on the ground, I want to meet them. And we can have that debate.” Eighteen days later, on December 2, U.S. citizen Syed Farook and his Pakistani wife, Tashfeen Malik, shot up a party at the Inland Regional Center in San Bernardino, California. They killed 14 people, wounded 21 others, and were discovered to have built an arsenal of pipe bombs in their apartment. As information on the couple trickled in that Wednesday afternoon, Obama was giving an interview to CBS News about national security. “ISIL will not pose an existential threat to us. They are a dangerous organization like al-Qaeda was, but we have hardened our defenses,” he said. “The American people should feel confident that, you know, we are going to be able to defend ourselves and make sure that, you know, we have a good holiday and go about our lives.” Two days later, authorities discovered that Malik had pledged fealty to ISIS leader Abu Bakr al-Baghdadi.
It is no longer in dispute that the president has been overtaken by events. While he alternately scolds and reassures, ISIS fights on, gaining power and claiming lives.
But Obama has not been blindsided; he has chosen policies that have emboldened ISIS and has rejected other options at every turn. In fact, his words in Turkey were patently false. Obama doesn’t need an introduction to those who would have done things differently; he knows them well. They include two of his secretaries of defense, his former under secretary of defense, his former secretary of state, his former head of the CIA, his former Army chief of staff, the last commanding general of forces in Iraq, his former ambassador to Syria, his former deputy national-security adviser, and, yes, even his former joint chiefs chairman—among others.
To the many officials, civilian and military, who have opposed Obama on strategy pertaining to Iraq, Syria, and ISIS, his remonstrance in Turkey was surely surreal. Posturing aside, Obama has rejected or marginalized virtually all dissent on these issues. And as a result of his persistent obstinacy, he has chosen poorly again and again, creating a linked set of escalating crises. They began with the misguided U.S. departure from Iraq. They continued with the meltdown of Syria and Obama’s persistently botched responses to it. And they have reached their apogee (so far) with the creation of more than 4 million refugees—the worst humanitarian catastrophe of our age—and ISIS’s establishment of an Islamic caliphate of increasing global reach.
Despite the president’s effort to frame his policies as coolly pragmatic, his decisions on Iraq, Syria, and ISIS fit a strict, even unbending, ideological pattern. His animating motivation has been to retract American power from the region and establish a new national consensus to ensure that the United States pursues a more humble foreign policy in the future.
This is a principled position, of a kind. It reflects a long-held belief in certain quarters that American military action in far-off lands and American meddling in those lands tend to do more harm than good, sowing dangerous resentment abroad.
But when a leader fails to balance this (or any) outlook against facts on the ground, principle becomes theology. And that is the situation in which the president now finds himself.
Obama’s inconsistencies have helped him evade traditional ideological labels. So perhaps it suffices to say he is foremost an anti-Bushist. His conception of America’s role in the world is most easily discerned in its opposition to that of his predecessor. He ran for president on a promise to end the war in Iraq—and when, as president, he told a Saudi Arabian news station, “all too often the United States starts by dictating,” he was talking about George W. Bush’s perceived “cowboy diplomacy.” When he told an audience in France that “America has shown arrogance and been dismissive, even derisive,” he was referring to Bush’s willingness to wage war without the support of the United Nations. And when in London he said, “With my election and the early decisions that we’ve made…you’re starting to see some restoration of America’s standing in the world,” he was touting his departure from Bush-era policy.
What Bush wrought he would undo. And he has undone much.
As conditions in the Middle East have deteriorated, the United States has progressively lost opportunities to act. The rush of events has now mooted many of the ideas Obama rejected. The actions that could have been taken to ensure that a functioning Iraq didn’t fall back into the hands of terrorists no longer apply now that ISIS controls massive sections of the country. The actions that could have contained the damage from a secular Syrian rebellion no longer have bearing on what has become an international war zone. And the actions that could have stopped a few hundred jihadists who crossed Iraq’s western border into Syria no longer matter, now that their number has grown to a few hundred thousand who have founded a state. Our viable options for defeating ISIS today are far more hazardous than the options we had only a few years ago, when we could have preempted its ascendance. But Obama has held fast—and in his effort to keep America out of the Middle East muck, he may well be ensuring an American reentry into a Middle East inferno.

resident Obama’s first order of business was bringing the Iraq War to a close. That was his signature campaign promise, and one cannot fault him for trying to fulfill it. But ending the war in the way he did would prove to be a serious mistake. Whatever one thinks about the invasion of Iraq in 2003, the Iraq that Obama chose to abandon had been all but pacified. In 2011, the final year U.S. troops were on the ground, there were 54 American deaths in Iraq, a wartime low. The country suffered sectarian tensions, but nothing like those that had led to civil war in 2006. Most crucial was this: Coalition actions had defeated ISIS’s brutal predecessor, Al-Qaeda in Iraq. The Iraqi jihad had become a bad memory.
But few of those close to the fight thought these achievements would be self-sustaining. Top Defense Department officials and military brass spent two years arguing for a continued U.S. presence in Iraq to ensure that the country didn’t relapse. Obama’s first secretary of defense, Robert Gates, was one such official. He hoped to leave 16,000 troops behind to consolidate American gains. Gates’s successor, Leon Panetta, had the same concerns about abandoning Iraq and tried to make his case to Obama. As he later wrote:
My fear, as I voiced to the president and others, was that if the country split apart or slid back into the violence that we’d seen in the years immediately following the U.S. invasion, it could become a new haven for terrorists to plot attacks against the U.S. Iraq’s stability was not only in Iraq’s interest but also in ours. I privately and publicly advocated for a residual force that could provide training and security for Iraq’s military.
So had others. Lloyd Austin, the last commanding general of forces in Iraq (and future commander of United States Central Command) recommended a residual American force of 23,000. Army Chief of Staff General Ray Odierno had made similar arguments in 2009, suggesting the U.S. keep 30,000–35,000 troops in Iraq after 2011. These were hardly minority opinions. At a 2011 Senate Armed Service Committee hearing, Senator John McCain asked Joint Chiefs of Staff Chairman General Martin Dempsey whether any military commanders supported a complete withdrawal of U.S. troops. “No, Senator,” Dempsey responded. “None of us recommended that we completely withdraw from Iraq.” Their objections were to no avail.
Obama, certain in his purpose, would take his first step toward inadvertently facilitating a jihadist renaissance.
When it came time to negotiate an extension on the U.S. Status of Forces agreement with Iraq, Obama didn’t secure a deal to keep American troops in the country. The president has claimed that he simply came up against Iraqi intransigence. But as Panetta explains, “Privately, the various leadership factions in Iraq all confided that they wanted some U.S. forces to remain as a bulwark against sectarian violence.” In fact, they wanted it more than Obama. Panetta writes that “Under Secretary of Defense Michèle Flournoy did her best to press [our] position, which reflected not just my views but also those of the military commanders in the region and the joint chiefs. But the president’s team at the White House pushed back, and the differences occasionally became heated.”
In the end, continues Panetta, “those on our side viewed the White House as so eager to rid itself of Iraq that it was willing to withdraw rather than lock in arrangements that would preserve our influence and interests.”
Theology prevailed. In December 2010, Obama declared the war over. “We’re leaving behind a sovereign, stable, and self-reliant Iraq,” he said. But without the United States present to exercise its leverage over then–Prime Minister Nouri al-Maliki, things immediately deteriorated. Maliki, a Shiite, began systematically cracking down on the country’s Sunnis. The Sunnis in turn were thrown into the arms of a revitalized Al-Qaeda in Iraq, which was fast exploiting the absence of American security. During this jihadist revival, militants freed one Abu Bakr al-Baghdadi from a Mosul jail. He would go on to become the leader of ISIS. By 2011, Iraq’s radicals were already spreading into Syria and capitalizing on a civil war that had begun months earlier. All the warnings that had gone unheeded were proving correct. But even then, no one envisioned just how massive the new jihadist threat would become.

nlike the Iraq War, the Syrian horror is entirely a creature of the Obama years. And here we have a much longer record of the ideas Obama rejected, the policies he chose, and the increasingly malignant repercussions of those choices.
One year into the Syrian civil war, dictator Bashar al-Assad had killed roughly 7,800 Syrians and the fighting had produced an estimated 35,000 refugees. The Obama administration had already called for Assad to step down, but had done nothing to make that happen. At the time, the central U.S. concerns were protecting Syrians from Assad’s onslaught and preventing the outbreak of a larger, destabilizing conflict. In March 2012, John McCain took to the Senate floor and made a half-hour speech calling for U.S.-led air strikes on Assad’s forces and the establishment of safe havens for Syrians under attack. McCain also appealed personally to Obama. “I told the president. I said, Bashar Assad is slaughtering people,” he later told PBS. “We are watching genocide take place, and it is eventually going to destabilize the entire region.”
At the time, McCain didn’t have much support. He was the first senator to call for U.S. force against Assad. And given his own defeat at Obama’s hands in the 2008 election and his growing unpopularity with the Republican base, he stood his ground alone. It is unquestionably true that American military action in the Middle East is and will always be risky and problematic. The region’s pathologies ensure a deluge of recriminations against the United States, even from those asking for our help. The pandemic combination of poor governance and sectarian tension increases the chance of clashes following a decisive American strike. And we rarely have a clear sense of friend and foe in lands where parties switch allegiances based on who seems most likely to outlast the latest calamity.
But if statecraft were informed solely by caution, the United States wouldn’t be standing today. There are always compelling reasons to steer clear of combat. A successful foreign policy means accounting for risk in determining what will secure the nation’s interests, not evading risk altogether. McCain’s warning about the coming destabilization was prescient; in any case, it’s hard to imagine that American action would have been worse than the path Obama chose.
After rejecting the first call to intervene in Syria, Obama stuck to inaction (or minimal action), no matter how bad the war got and no matter the nature of the threat it posed.He chose inaction. The president who said he was “elected to end wars, not start them” wasn’t about to go into Syria after pulling out of Iraq. What’s more, European leaders had already dragged Obama against his will into an air campaign against Libyan dictator Muammar Qaddafi a year earlier. Post-Qaddafi Libya was now giving way to chaos, partly because of Obama’s refusal to follow through with further American action. Obama’s anti-Bushism had been compromised by providing air support to Libyan rebels. He wouldn’t see it nullified entirely by going into Syria as well.
But the president also had other reasons for not acting in Syria. He was already working toward détente with Iran. Obama knew that the Iranian leaders were Assad’s closest allies, and he feared American action against Syria would jeopardize his chance for achieving a nuclear deal with Tehran. This too fit his anti-Bushism. Bush had labeled Iran a member of the “Axis of Evil,” a trio of dangerous rogue states that also included North Korea and Saddam Hussein’s Iraq. Obama’s predecessor saw the leaders in Tehran as inflexible theocrats bent on the destruction of Israel and the West. For Bush, the only real solution to the Iran problem was eventual regime change, a toppling of the mullahs, and the establishment of Iranian democracy. Obama, by contrast, sought to treat the Iranians as reasonable actors capable of good-faith negotiations with the United States. With the Iraq War over, diplomacy with Iran became his foreign-policy priority, and his fear of displeasing the mullahs would continue to hamper his Syria policy. Assad and his allies in Tehran took the president’s measure early and, assured of the new American constraint, would escalate the civil war with impunity.
After rejecting the first call to intervene in Syria, Obama stuck to inaction (or minimal action), no matter how bad the war got and no matter the nature of the threat it posed. As he stood pat, that threat changed. When McCain had called for helping the rebels, they were mostly secular Syrians trying to unseat a merciless dictator. The best hope among them was the Free Syrian Army, a non-radical group founded by military defectors seeking to oust Assad and replace his regime with a democratic one. They openly beseeched Washington for help, but Obama’s anti-Bush doctrine left them to fend for themselves.
Around the same time as McCain’s Senate speech, White House Deputy National Security Adviser Ben Rhodes told the New York Times that the U.S. would begin providing “nonlethal assistance, like communications equipment and medical supplies, directly to opposition groups inside Syria.” Another administration official claimed that the U.S. had already begun sending supplies to the Free Syrian Army. But “nonlethal” ultimately meant ineffective. Supplies were meager, slow in coming, and would occasionally be seized by radical groups. Yet the administration would continue to tout such assistance, announcing new “boosts” in aid every year, even as the policy continued to fail. So while the United States stuck to fruitless gestures, the rebels increasingly looked to others who were providing them with tangible support. Those others turned out to be radical Sunni groups, such as al-Qaeda, the al-Nusra front, and ISIS. These trained jihadists were better organized than their non-radical counterparts and some enjoyed lavish funding from Gulf Arab states. The more Obama refused aggressive action, the greater the Islamist hold on the rebels.
As the anti-Assad rebellion morphed into a jihadist call to arms, Washington’s array of policy options narrowed, but they didn’t disappear. A new plan of action came from within the Obama administration in the summer of 2012. Then-director of the CIA, David Petraeus, proposed vetting and arming Syrian rebels covertly from bases inside Jordan. The covert element, he hoped, would allay White House concerns about being seen to meddle in Syrian affairs. Unlike McCain’s early proposition, this plan enjoyed significant support in the administration, from Leon Panetta, Secretary of State Hillary Clinton, Deputy National Security Adviser Denis McDonough, and Samantha Power, who had been handpicked by Obama to head up a new “Atrocities Prevention Board.”
But the president vetoed the Petraeus plan, saying it would draw the United States into the conflict without decisively tipping the scales in favor of the rebels. His concerns here were not unwarranted, but they shouldn’t have been dispositive. A year and a half into the Syrian civil war, Obama didn’t accept that American inaction was itself a meaningful choice. Like action, inaction has real consequences. It gives both our allies and enemies a sense of our priorities, enabling them to recalibrate their plans accordingly. American inaction on Syria ensured that the country’s toxic trends would continue to gain momentum. For Assad, it meant he could wage war with impunity; for the rebels, it meant American help wasn’t coming; and for the jihadists among them, it meant an opportunity to recruit more of their dejected fellow Sunnis.
As things stood in the summer of 2012, the civil-war death toll was around 17,000 and there were more than 150,000 Syrian refugees.

n August 20, Obama held a press conference in the White House that was supposed to center on health care. Asked about Syria, the president gave an ad-libbed answer that would alter the course of history and take the administration on a bizarre foreign-policy detour. “We have been very clear to the Assad regime, but also to other players on the ground, that a red line for us is, we start seeing a whole bunch of chemical weapons moving around or being utilized,” he said. “That would change my calculus. That would change my equation.” Presidential aides were reportedly baffled by Obama’s response, as it didn’t resemble anything they’d heard him say in private. But for all his practiced reticence, Obama had now accidentally warned Assad, on record, that America might intervene if chemical weapons came into play. He had also given the rebels hope. Unplanned or not, this became an opportunity for the United States to get on the right side of the war and thus deprive jihadists of the power they wielded in Syria as the best bet for toppling Assad.
A year later, on August 21, 2013, Assad called Obama’s bluff. The dictator launched a sarin nerve-gas attack in the suburbs of Damascus, killing 1,429 civilians—426 of whom were children. The Obama administration, on the hook to act, announced reprisals. At a press conference in London, Secretary of State John Kerry tried to keep the anti-Bush doctrine together. He described the “unbelievably small, limited kind of effort” the administration had in mind. But, in the end, “unbelievably small” wasn’t small enough for the president. Just days before the planned strike on Syria, Obama found himself too uncertain to give the order. After beginning a speech by saying he had the right as president to act against Syria on his own orders, he declared he was putting it up for a vote in Congress (then in recess).
This decision, it should be noted, went against the majority of Obama’s advisers, who feared the president would be severely weakened by a “no” vote. On the day Congress returned, Kerry gave a press conference and managed to extricate the administration from its dilemma just as accidentally as it had stumbled into it. Kerry said, rhetorically, that Assad could avoid a U.S. strike if he gave up “every bit of his weapons to the international community within the next week, without delay. But he isn’t about to [do that].” That afternoon, Russian Foreign Minister Sergei Lavrov, picking up on Kerry’s comment, announced that Assad had accepted a Russian offer to hand over his chemical stockpile. Thereupon, the administration killed its plans for a strike on Syria.
Under the Russian arrangement, some but not all of Assad’s chemical weapons were shipped out of the country. He has since gone on to use chlorine gas. The plan, however, was a thorough success for the Kremlin, establishing Russia as a massive player in the conflict. At the time, the administration bragged that it had successfully made Syria Moscow’s problem. But Russian President Vladimir Putin would use his new leverage to expand his influence in Syria, eventually bringing Russia fully into the war on Assad’s side, prolonging the dictator’s reign, and further precluding American policy options. For Assad’s part, he was now legitimized as a cooperative partner in disarmament.
As ISIS began redrawing the map of the Middle East, Obama still saw no compelling case for U.S. action and fell back on anti-Bush insinuations to defend his policy.Finally, jihadists inside Syria used the American retreat as a recruiting tool among Sunnis who needed little more convincing that Washington would do nothing to help them. The radicals went into overdrive. And although the Obama administration began arming rebels in lieu of striking Assad, it was much too little and far too late. On the 12th anniversary of 9/11, al-Qaeda leader Ayman al-Zawahiri released a communiqué denouncing the American-affiliated Free Syrian Army. ISIS, by now the strongest jihadist group in Syria, then declared war on what was left of the FSA, fighting it into irrelevance. Once again, Obama’s inaction had become a boon to America’s enemies.
At this point, more than 100,000 had been killed in the civil war and almost 2 million Syrians had been made refugees.

y the start of 2014, ISIS wasn’t merely the strongest of Syria’s jihadist groups; it had become the strongest party among all the country’s rebels. The organization had recently taken control of the city of Raqqa, which became a beacon for foreign fighters pouring into Syria to join ISIS. Yet the president showed little concern, remarking to the New Yorker’s David Remnick in January that “the analogy we use around here sometimes, and I think is accurate, is if a J.V. team puts on Lakers uniforms, that doesn’t make them Kobe Bryant.” That same month, the J.V. jihadists crossed back into Iraq and, with American troops withdrawn on Obama’s promise, seized Fallujah.
As ISIS began redrawing the map of the Middle East, Obama still saw no compelling case for U.S. action and fell back on anti-Bush insinuations to defend his policy. “A strategy that involves invading every country that harbors terrorist networks is naive and unsustainable,” he told a West Point audience in May. A month later, ISIS captured Mosul, the second-largest city in Iraq.
That August, the world was gripped by televised images of desperate men, women, and children trapped on Sinjar mountain in northwestern Iraq. Advancing ISIS forces had surrounded tens of thousands of Yazidis, a Kurdish minority, and were waiting for them below. If the prey came down the mountain they would be slaughtered; if they didn’t, they would die of dehydration. Finally, the United States stepped up. With the world watching, Obama called for air strikes on the ISIS militants and saved the Yazidis from certain death. It was his first bold move against ISIS, and it was a success. Yet he was quick to follow up this show of strength with a disclaimer, saying the United States had no intention of “being the Iraqi air force.” His heroic act was a one-off.
Even as Iraq succumbed to carnage, things in Syria got worse. In June, ISIS declared a new Islamist caliphate and made Raqqa its capital. The organization had also become a rolling wave of sadism, enslaving and killing (sometimes by crucifixion) all who dared stand in its path. In July, the group took over a Syrian army base, beheaded 75 Syrian soldiers, and displayed their heads and bodies in the street. This was merely one of a string of ISIS beheadings that year. In August, ISIS released a video depicting the beheading of the journalist James Foley, the organization’s first American victim.
At this point, the Syrian death toll had risen to 191,000. Refugees numbered 3 million.
One American official could take no more. In May, Robert Ford, the U.S. Ambassador to Syria, stepped down from his post, disgusted with the failure to stop either ISIS or Assad. “I was no longer in a position where I felt I could defend the American policy,” he later said. “We have been unable to address either the root causes of the conflict in terms of the fighting on the ground and the balance on the ground, and we have a growing extremism threat.” Ford had long pushed for giving greater support to the moderate rebels. His was just another dismissed voice of dissent.
In September, with the parade of horrors too great to ignore, Obama expanded the effort to fight ISIS. He called for American air strikes in Syria and announced that the U.S. would begin training and arming moderate Syrian rebels—two years after dismissing David Petraeus’s plan to do so and one year after the Free Syrian Army had ceased to be a viable fighting force. Additionally, Obama would deploy 475 military advisers to Iraq, now that the country was overrun with ISIS militants.

In the 15 months since Obama called for greater action, it has become clear that the United States has still failed to adopt a winning strategy. ISIS has continued to make gains and export terror. Last May, it seized the Iraqi city of Ramadi. The same month, the group took over the ancient Syrian city of Palmyra, killing locals door-to-door and destroying some of the most precious artifacts of multiple civilizations. Obama now says that ISIS is losing territory, but while updated color-coded maps tell different stories on different days, the general trend has been toward expansion. ISIS has also gained significant territory in Libya, Yemen, and South Asia.
Beyond its land claims, ISIS can now boast of a series of successful terrorist attacks. In October, the group killed 102 people in a suicide bombing at the Ankara central train station. That same month, ISIS blew up a Russian passenger plane, Metrojet flight 9268, killing 224 people over the Sinai. On November 12, two ISIS operatives blew themselves up in a Shia suburb of Beirut, killing about 40 Lebanese. Then came the coordinated attacks in Paris and the San Bernardino shooting.
The strategy that Obama calls a success is, in reality, a combination of half measures and outdated ideas. Our air campaign in Syria has averaged a mere seven strikes a day. Almost 75 percent of planned U.S. bombing runs on ISIS never drop their payloads owing either to insufficient ground intelligence or overly strict rules of engagement. And Obama’s plan to train moderate Syrian rebels has already been retired because there were so few left willing to work with the United States that the program produced only four or five fighters (at a cost of $42 million).
Consider those sad facts in light of the enemy. Whatever language one wishes to use, ISIS now bears an inescapable resemblance to a state. It has established a set of laws and a means of enforcing them on a population of millions. It boasts a capital, designated provinces, and outlying governorates. Between collecting taxes, extorting money, seizing banks, ransoming kidnap victims, and selling oil, ISIS takes in billions of dollars annually. It has training outposts throughout the Middle East and, as we found out on November 13, organized operatives in the West. None of these achievements have taken a serious hit since the president claimed in 2014 he was stepping up the fight.
Last September, Gen. John Allen, the man Obama had picked to lead the coalition fight against ISIS, stepped down from his post. In announcing his exit, Allen cited his wife’s health problems. But it did not go unnoticed that his repeated calls for increased U.S. action had also long been ignored by the White House. Allen wanted to deploy tactical air-control teams in Iraq and establish a safe zone in Syria. Even Obama’s so-called ISIS czar, however, had been unable to persuade the president.
After the Paris attacks, the U.S. increased air strikes, instituted a more permissive targeting policy, and announced that “a specialized expeditionary targeting force” would help Iraqis and Kurds in raids against ISIS. But such measures are mostly cosmetic attempts to dress up a stale policy. They aren’t turning the tide, and they won’t do so any time soon.

Obama’s repeated delays have precluded many formerly viable policy options. The rebel-training program is one example. No-fly zones over Syria are another. This plan, rebuffed years ago by the president, is no longer a possibility because of Russia’s new air campaign over the country.
Another concern is that we may have been working from faulty intelligence. The Pentagon’s inspector general is now examining the claims of more than 50 intelligence analysts who came forward in September, charging that their superiors had forced them to alter reports that didn’t portray ISIS as definitively losing. While we await the results of the investigation, we can only wonder who in the chain of command may have been responsible for vetting intelligence for good news. But if the claim is true, it certainly fits in with the culture of the Obama administration.
The White House has refused to see the problem for what it is. It has become clear that Assad and ISIS are complementary parts of the same nightmare. They are perversely dependent on each other for survival: While ISIS thrives, Assad can play the role of Syria’s “good cop,” effectively offering a choice to those looking on: Do you want me or the apocalyptic army of decapitating slave traders? It’s a role he has exploited to great advantage, and it’s in his interest to keep ISIS in play so long as the world falls for the ploy. At the same time, ISIS can be destroyed only if Assad is taken out of power. So long as Assad is killing Syrians—and he’s killed far more than ISIS has—Sunnis won’t make ISIS their number-one target. The truth is that the United States needs to destroy ISIS and push to depose Assad simultaneously. But with John Kerry attempting to bring Assad and Syrian opposition parties into more talks about “power sharing,” we’re a long way off from getting the policy right. Obama, for his part, has contented himself with berating Americans who are wary of taking in an infinitesimal fraction of the refugees his own policies helped displace. “They are scared of three-year-old orphans,” Obama chided. “That doesn’t seem so tough to me.”
All these issues, however, are but manifestations of the larger encumbering reality: Barack Obama’s theological opposition to exercising effective American power abroad. The president’s inflexibility on that point has nurtured the rise of ISIS and tied our hands in the fight against it. But, with so few prudent options left, his stubbornness may have made a larger conflict with ISIS inevitable, either during the remainder of his term or after it. If so, Obama will have worked for eight years to avert a fate his very actions have summoned.
Today, the president still dismisses significant “boots on the ground” in Iraq and Syria as a nonstarter. On December 6, Obama spoke from the Oval Office, saying, “We should not be drawn once more into a long and costly ground war in Iraq or Syria.” He then added this bizarre coda: “That’s what groups like ISIL want. They know they can’t defeat us on the battlefield.” ISIS wants to engage the United States in a war in order to lose? And we should therefore resist the fight? This is theology outweighing logic.
Perhaps in this period of post-Bush America, however, a ground war against ISIS really is out of the question. But we should be clear about something. ISIS controls vast swaths of land, out in the open. In adopting the structure of a state, the group has given up some measure of the asymmetrical advantage enjoyed by terrorists who traditionally “melt away” into the shadows after an attack; ISIS, in short, can be targeted and defeated like a state. If an American commander in chief cannot even countenance deploying ground soldiers and Marines to defeat a state constituting the worst terrorist threat we’ve ever faced, then we might have finally forfeited our last defense against evil. We are in the final year of a presidency that unwittingly midwifed a monster.
Jeremy Corbyn and the End of the West
The grievous portents of Labour’s extreme new leader
In October 2015, the American novelist Jonathan Franzen gave a talk in London in which he expressed pleasure that Jeremy Corbyn had just been elected leader of Britain’s opposition Labour Party. To his evident surprise, Franzen’s endorsement was met with only scattered applause and then an embarrassed silence. Most of Franzen’s audience were the same sort of people likely to attend a Franzen talk in New York: Upper-middle-class bien pensant Guardian readers who revile the name Thatcher the way a New York Times home-delivery subscriber reviles the name Reagan. For them, as for most Labour members of Parliament, the elevation of Jeremy Corbyn offers little to celebrate. Indeed, it looks a lot like a disaster—a bizarre and potentially devastating epilogue to the shocking rout of the Labour Party at the May 2015 general election.
Franzen probably imagined Corbyn to be a kind of British Bernie Sanders, a supposedly lovable old coot-crank leftie willing to speak truth to power—and so assumed that any British metropolitan liberal audience would be packed with his fans. In fact, for all the obvious parallels between the two men, Corbyn is a very different kind of politician working in a very different system and for very different goals. Sanders may call himself a socialist, but he is relatively mainstream next to Corbyn, an oddball and an extremist even in the eyes of many British socialists.
It may seem extraordinary that a party most observers and pollsters were sure would be brought back to power in 2015—and that has long enjoyed the unofficial support of the UK’s media, marketing, and arts establishments—now looks to be on the verge of disintegration. But even if no one a year ago could have predicted the takeover of the party by an uncharismatic extreme-left backbencher with a fondness for terrorists and anti-Semites, the Labour Party might well be collapsing due to economic and social changes that have exposed its own glaring internal contradictions.
The first stage of Labour’s meltdown was its unexpected defeat at the general election in May 2015. The experts and the polls had all predicted a hung Parliament and the formation of a coalition government led by Labour’s then-leader, Ed Miliband. But Labour lost 26 seats, was wiped out by nationalists in its former heartland of Scotland, and won less than 30 percent of the popular vote. The Liberal Democrats, the third party with whom Miliband had hoped to form a coalition, did far worse. Meanwhile the populist, anti-EU, anti-mass-immigration UK Independence Party (UKIP) won only one seat in the House of Commons but scored votes from some 3 million people—and took many more voters from Labour than from the Tories.
Miliband’s complacency about and ignorance of the concerns of ordinary working-class people played a major role in the defeat. So did his failure to contest the charge that Labour’s spendthrift ways under Tony Blair had made the 2008 financial crisis and recession much worse. Perhaps even more devastating was the widespread fear in England that Miliband would make a deal with Scottish nationalists that would require concessions such as getting rid of Britain’s nuclear deterrent. He had promised that he would never do this, but much of the public seemed to doubt the word of a man so ambitious to be prime minister that he had stabbed his own brother in the back. (David Miliband was set to take over the leadership of the party in 2010 when his younger brother, Ed, decided to challenge him from the left with the help of the party’s trade unionists.)
In the old industrial heartlands of the North and Midlands, Labour seemed at last to be paying a price for policies on immigration and social issues that were anathema to many in the old British working class. As a workers’ party as well as a socialist party, and one that draws on a Methodist as well as a Marxist tradition, Labour has always had to accommodate some relatively conservative, traditional, and even reactionary social and political attitudes prevalent among the working classes (among them affection for the monarchy). Today the cultural divisions within the party between middle-class activists, chattering-class liberals, ethnic minority leaders, and the old working class can no longer be papered over.
With the ascension of Tony Blair to the leadership of the party in 1994, Labour began to pursue certain policies practically designed to alienate and drive out traditional working-class Labour voters and replace them not only with ordinary Britons who had grown tired of the nearly two-decade rule of the Tories but also with upper-middle-class opinion leaders attracted to multiculturalism and other fashionable enthusiasms.
One can even make a kind of quasi-Marxian argument that as the Labour Party has become more bourgeois over the decades, it has engaged increasingly in what amounts to conscious or unconscious class warfare against the working class it is supposed to represent. One of the first blows it struck was the abolition of the “grammar schools” (selective high schools similar to those of New York City) on the grounds that they were a manifestation of “elitism,” even though these schools gave millions of bright working-class children a chance to go to top universities. Then there was “slum clearance,” which resulted in the breakup and dispersal of strong working-class communities as residents were rehoused in high-rise tower blocks that might have been designed to encourage social breakdown and predation by teenage criminals. But the ultimate act of Labour anti-proletarianism came after the Party had recovered from the defection of working-class voters to Thatcherism and its gospel of opportunity and aspiration. This was the opening of the UK’s borders to mass immigration on an unprecedented scale by Tony Blair’s New Labour. Arguably this represented an attempt to break the indigenous working class both economically and culturally; inevitably, it was accompanied by a demonization of the unhappy indigenous working class as xenophobic and racist.
In the 2015 general election, many classic working-class Labour voters apparently couldn’t bring themselves to betray their tribe and vote Tory—but were comfortable voting for UKIP. This proved disastrous for Labour, which had once been able to count on the support of some two-thirds of working-class voters. But these cultural changes made it impossible for Labour to hold on to its old base in the same numbers. And its new base—the “ethnic” (read: Muslim) vote, a unionized public sector that is no longer expanding, and the middle-class liberals and leftists who populate the creative industries and the universities—is simply not large enough.
Labour should have won the election in 2015; it lost because of its own internal contradictions. Out of the recriminations and chaos that followed the defeat, there emerged Jeremy Corbyn.

To understand who Corbyn is and what he stands for, it helps to be familiar with the fictional character Dave Spart, a signature creation of the satirical magazine Private Eye. Spart is a parody of a left-wing activist with a beard and staring eyes and a predilection for hyperbole, clueless self-pity, and Marxist jargon, which spews forth from his column, “The Alternative Eye.” (He’s like a far-left version of Ed Anger, the fictional right-wing lunatic whose column graced the pages of the Weekly World News supermarket tabloid for decades.) A typical Spart column starts with a line like “The right-wing press have utterly, totally, and predictably unleashed a barrage of sickening hypocrisy and deliberate smears against the activities of a totally peaceful group of anarchists, i.e., myself and my colleagues.”
The column has given birth to the term spartist—which is used in the UK to refer to a type of humorless person or argument from the extreme left. There are thousands of real-life spartists to be found in the lesser reaches of academia, in Britain’s much-reduced trade-union movement, and in the public sector. For such activists, demonstrations and protests are a kind of super hobby, almost a way of life.
The 66-year-old Corbyn is the Ur-spartist. He has always preferred marches and protests and speeches to more practical forms of politics. He has been a member of Parliament for 32 years without ever holding any sort of post that would have moved him from the backbenches of the House of Commons to the front. During those three-plus decades, he has voted against his own party more than 500 times. Corbyn escaped being “deselected” by Tony Blair—the process by which a person in Parliament can be removed from standing for his seat by his own party—only because he was deemed harmless.
Many of Corbyn’s obsessions concern foreign policy. He is a bitter enemy of U.S. “imperialism,” a longtime champion of Third World revolutionary movements, and a sympathizer with any regime or organization, no matter how brutal or tyrannical, that claims to be battling American and Western hegemony. Corbyn was first elected to Parliament in 1983, and many of his critics in the Labour Party say he has never modified the views he picked up from his friends in the Trotskyite left as a young activist.
This is not entirely true, because Corbyn, like so much of the British left, has adapted to the post–Cold War world by embracing new enemies of the West and its values—in particular, those whom Christopher Hitchens labeled “Islamofascists.”
One of the qualities that sets spartists like Corbyn apart from their American counterparts is an almost erotic attraction to Islamism. They are fascinated rather than repelled by its call to violent jihad against the West. This is more than anti-Americanism or a desire to win support in Britain’s ghettoized Muslim communities. It is the newest expression of the cultural and national self-loathing that is such a strong characteristic of much progressive opinion in Anglo-Saxon countries—and which underlies much of the multiculturalist ideology that governs this body of opinion.
Many on the British left today have an astonishing ability to overlook, excuse, or even celebrate reactionary and atavistic beliefs and practices ranging from the murder of blaspheming authors to female genital mutilation. Corbyn has long been at the forefront of this tendency, not least in his capacity as longtime chair of Britain’s Stop the War Coalition. STWC is a pressure group that was founded to oppose not the war in Iraq but the war in Afghanistan. It was set up on September 21, 2001, by the Socialist Workers’ Party, with the Communist Party of Great Britain and the Muslim Association of Britain as junior partners. STWC supported the “legitimate struggle” of the Iraqi resistance to the U.S.-led coalition; declines to condemn Russian intervention in Syria and Ukraine; actively opposed the efforts of democrats, liberals, and civil-society activists against the Hussein, Assad, Gaddafi, and Iranian regimes; and has a soft spot for the Taliban.
Corbyn’s career-long anti-militarism goes well beyond the enthusiasm for unilateral nuclear disarmament that was widespread in and so damaging to the Labour Party in the 1980s, and which he still advocates today. He has called for the United Kingdom to leave NATO, argued against the admission to the alliance of Poland and the former Czechoslovakia, and more recently blamed the Ukrainian crisis on NATO provocation. In 2012, he apparently endorsed the scrapping of Britain’s armed forces in the manner of Costa Rica (which has a police force but no military).
As so often with the anti-Western left, however, Corbyn’s dislike of violence and military solutions mostly applies only to America and its allies. His pacifism—and his progressive beliefs in general—tend to evaporate when he considers a particular corner of the Middle East.
Indeed, Corbyn is an enthusiastic backer of some of the most violent, oppressive, and bigoted regimes and movements in the world. Only three weeks after an IRA bombing at the Conservative Party conference in Brighton in 1984 came close to killing Prime Minister Thatcher and wiping out her entire cabinet, Corbyn invited IRA leader Gerry Adams and two convicted terrorist bombers to the House of Commons. Neil Kinnock, then the leader of Labour and himself very much a man of the left, was appalled.
Corbyn is also an ardent supporter of the Chavistas who have wrecked Venezuela and thrown dissidents in prison. It goes almost without saying that he sees no evil in the Castro-family dictatorship in Cuba, and for a progressive he seems oddly untroubled by the reactionary attitudes of Vladimir Putin’s repressive, militarist kleptocracy in Russia.
Then we come to his relationship with Palestinian extremists and terrorists. A longtime patron of Britain’s Palestine Solidarity Campaign, Corbyn described it as his “honor and pleasure” to host “our friends” from Hamas and Hezbollah in the House of Commons. If that weren’t enough, he also invited Raed Salah to tea at the House of Commons, even though the Palestinian activist whom Corbyn called “an honored citizen…who represents his people very well” has promoted the blood libel that Jews drink the blood of non-Jewish children. These events prompted a condemnation by Sadiq Khan MP, the Labour candidate for London’s mayoralty and a Muslim of Pakistani origin, who said that Corbyn’s support for Arab extremists could fuel anti-Semitic attacks in the UK.
That was no unrepresentative error. As Britain’s Jewish Chronicle also pointed out this year, Corbyn attended meetings of a pro-Palestinian organization called Deir Yassin Remembered, a group run by the notorious Holocaust denier Paul Eisen. Corbyn is also a public supporter of the Reverend Stephen Sizer, a Church of England vicar notorious for promoting material on social media suggesting 9/11 was a Jewish plot.
Corbyn’s defense has been to say that he meets a lot of people who are concerned about the Middle East, but that doesn’t mean he agrees with their views. The obvious flaw of this dishonest argument is that Corbyn doesn’t make a habit of meeting either pro-Zionists or the Arab dissidents or Muslim liberals who are fighting against tyranny, terrorism, misogyny, and cruelty. And it was all too telling when, in an effort to clear the air, Corbyn addressed the Labour Friends of Israel without ever using the word Israel. It may not be the case that Corbyn himself is an anti-Semite—of course he denies being one—but he is certainly comfortable spending lots of quality time with those who are.
How could such a person become the leader of one of the world’s most august political parties? It took a set of peculiar circumstances. In the first place, he received the requisite number of nominations from his fellow MPs to make it possible for him to stand for leader after the resignation of Ed Miliband only because some foolish centrists thought his inclusion in the contest would “broaden the debate” and make it more interesting. They had not thought through the implications of a new election system that Miliband had put in place. An experiment in direct democracy, the new system shifted power from the MPs to the members in the country.
The party’s membership had shrunk over the years (as has that of the Tory Party), and so to boost its numbers, Miliband and his people decided to shift to a system in which new members could obtain a temporary membership in the party and take part in the vote for only £3 ($5). More than 100,000 did so. They included thousands of hard-left radicals who regard the Labour Party as a pro-capitalist sell-out. (They also included some Tories, encouraged by columnists like the Telegraph’s Toby Young, who urged his readers to vote for Corbyn in order to make Labour unelectable.) The result was a landslide for Corbyn.
Labour’s leadership was outplayed. The failure was in part generational. There is hardly anyone left in Labour who took part in or even remembers the bitter internal struggle in the late ’40s to find and exclude Communist and pro-Soviet infiltrators—one of the last great Labour anti-Communists, Denis Healey, died this October. (That purge was so successful that the British Trotskyite movement largely abandoned any attempt to gain power in Westminster, choosing instead to focus on infiltrating the education system in order to change the entire culture.) By the time Corbyn took over, most of Labour’s “modernizers”—those who had participated in the takeover of the party leadership by Tony Blair and his rival and successor Gordon Brown—had never encountered real Stalinists or Trotskyists and lacked the fortitude and ruthless skill to do battle with them.
Unfortunately for the centrists and modernizers, many of Corbyn’s people received their political education in extreme-left political circles, so brutal internal politics and fondness for purges and excommunications are (as Eliza Doolittle said) “mother’s milk” to them. For example: Corbyn’s right-hand men, John McDonnell and Ken Livingstone, were closely linked to a Trotskyite group called the Workers Revolutionary Party. The WRP was a deeply sinister political cult that included among its promoters not only the radical actors Vanessa and Corin Redgrave but also the directors of Britain’s National Theatre. Its creepy leader Gerry Healy was notorious for beating and raping female members of his party and took money from Muammar Gaddafi and Saddam Hussein.
Most people in British politics, and especially most British liberals, had fallen prey to the comforting delusion that the far left had disappeared—or that what remained of it was simply a grumpy element of Labour’s base rather than a devoted and deadly enemy of the center-left looking for an opportunity to go to war. As Nick Cohen, the author of What’s Left: How the Left Lost Its Way, has pointed out, this complacent assumption enabled the centrists to act as if they had no enemies to the left. Now they know otherwise.
Another reason for the seemingly irresistible rise of Corbyn and his comrades is what you might call Blair Derangement Syndrome. It is hard for Americans and other foreigners to understand what a toxic figure the former prime minister has become in his own country. Not only is Blair execrated in the UK more than George W. Bush is in the U.S.; he is especially hated by his own party and on the left generally. It is a hatred that is unreasoning and fervid in almost exact proportion to the adoration he once enjoyed, and it feels like the kind of loathing that grows out of betrayed love. Those in the Labour Party who can’t stand Blair have accordingly rejected many if not all of the changes he wrought and the positions he took. And so, having eschewed Blairism, they were surprised when they lost two elections in a row to David Cameron—who, though a Tory, is basically Blair’s heir.
Blair is detested not because he has used his time after leaving office to pursue wealth and glamour and has become a kind of fixer for corrupt Central Asian tyrants and other unsavory characters. Rather, it is because he managed to win three general elections in a row by moving his party to the center. Those victories and 12 years in office forced the left to embrace the compromises of governance without having much to show for it. This, more than Blair’s enthusiasm for liberal interventionism or his role in the Iraq war or even his unwavering support of Israel during the 2008 Gaza war, drove the party first to select the more leftist of the two Miliband brothers and now to hand the reins to Corbyn.

As I write, Corbyn has been Leader of Her Majesty’s loyal opposition (a position with no equivalent in the United States) for a mere 10 weeks—and those 10 weeks have been disastrous both in terms of the polls and party unity. Corbyn’s own front bench has been on the verge of rebellion. Before the vote on the UK’s joining the air campaign in Syria, some senior members apparently threatened to resign from their shadow cabinet positions unless Corbyn moderated his staunch opposition to any British military action against ISIS in Syria. (It worked: Rather than face open revolt, Corbyn allowed a free vote instead of a “whipped” one, and 66 Labour MPs proceeded to vote for air strikes.) Any notion that Corbyn’s elevation would prompt him to moderate his views quickly dissipated once he began recruiting his team. His shadow chancellor, John McDonnell, is one of the only people in Parliament as extreme as he. While serving as a London councillor in the 1980s, McDonnell lambasted Neil Kinnock, the relatively hard-left Labour leader defeated by Margaret Thatcher, as a “scab.” A fervent supporter of the IRA during the Northern Ireland troubles, McDonnell endorsed “the ballot, the bullet, and the bomb” and once half-joked that any MP who refused to meet with the “provisionals” running the terror war against Great Britain should be “kneecapped” (the traditional provo punishment involving the shattering of someone’s knee with a shotgun blast). Recently he made the headlines by waving a copy of Mao’s Little Red Book at George Osborne, the Chancellor of the Exchequer. As Nick Cohen has written of Corbyn and his circle: “These are not decent, well-meaning men who want to take Labour back to its roots…they are genuine extremists from a foul tradition, which has never before played a significant role in Labour Party history.”
During Corbyn’s first week as leader, he refused to sing the national anthem at a service commemorating the Battle of Britain, presumably because, as a diehard anti-monarchist, he disagrees with the lyric “God save our Queen.” Soon after, he declared that, as a staunch opponent of Britain’s nuclear arsenal, he would not push the button even if the country were attacked.
He expressed unease at the assassination by drone strike of the infamous British ISIS terrorist “Jihadi John.” Corbyn said it would have been “far better” had the beheader been arrested and tried in court. (He did not say how he envisaged Jihadi John ever being subject to arrest, let alone concede that such a thing could happen only due to military action against ISIS, which he opposes).
Corbyn’s reaction to the Paris attacks prompted fury from the right and despair in his own party. He seemed oddly unmoved and certainly not provoked to any sort of anger by the horror. Indeed, he lost his chance to score some easy points against Prime Minister Cameron’s posturing. Cameron, trying to play tough in the wake of military and policing cuts, announced that British security forces would now “shoot to kill” in the event of a terrorist attack in the UK—as if the normal procedure would be to shoot to wound. Any normal Labour leader of the last seven decades would have taken the prime minister to task for empty rhetoric while reminding the public of Labour’s traditional hard stance against terrorism in Northern Ireland and elsewhere. Instead, Corbyn bleated that he was “not happy” with a shoot-to-kill policy. It was “quite dangerous,” he declared. “And I think can often be counterproductive.”

While there is no question that Labour has suffered a titanic meltdown, and that Corbyn’s triumph may mean the end of Labour as we know it, it’s not yet clear whether Corbyn is truly as electorally toxic as the mainstream media and political class believe him to be. What some observers within Labour fear is that Corbyn could indeed become prime minister after having transformed the party into a very different organization and having shifted the balance of British politics far to the left.
They concede that there is little chance of Corbyn’s ever winning over the 2–3 million swing voters of “middle England” who have decided recent elections. But they worry that in a rerun of the leadership election, Corbyn might be able to recruit a million or more new, young voters who have no memory of the Cold War, let alone Labour’s failures in the 1970s, and who think that he is offering something fresh and new.
It might not only be naive young people who would vote for Corbyn despite his apparent lack of parliamentary or leadership skills. In Britain, there is a growing disdain for, and distrust of, slick professional politicians—and for good reason. It’s not hard to seem sincere or refreshingly possessed of genuine political convictions if you’re going up against someone like David Cameron, who even more than Tony Blair can exude cynicism, smugness, and a branding executive’s patronizing contempt for the public. The fact that Corbyn is relatively old and unglamorous might also play in his favor; the British public is tired of glib, photogenic, boyish men. Corbyn and McDonnell are “an authentic alternative to the focus-group-obsessed poll-driven policies of the Blair days,” Cohen writes—but it is an authenticity based in “authentic far-left prejudices and hypocrisies.” Those prejudices and hypocrisies could sound a death knell for Britain’s historic role in advancing the Western idea—an idea that is, in large measure, the most glorious handiwork of its sceptered isles.
Bridge of Lies
How and why Hollywood distorts history by filming it with a leftist lens
Midway through Steven Spielberg’s Cold War picture Bridge of Spies, the upstanding lawyer Jim Donovan (Tom Hanks) suffers a shocking attack when his Brooklyn house is raked with gunfire. Donovan has been working selflessly and, according to the movie, patriotically, as the legal counsel to the Soviet spy Rudolf Abel (Mark Rylance). Then nothing happens. After the gunfire attack, neither Donovan nor anyone else seems particularly interested in finding the culprits and bringing them to justice. Don’t they fear they’re about to be murdered? Wouldn’t they move out of the house in a panic? Why doesn’t anyone in this movie act at all as real people would—shocked and angry and appalled and maybe more than a little irrational?
Those bullets never flew. There is no mention of any such attack in Donovan’s 1964 book Strangers on a Bridge, the primary source material for the movie. It contains a couple of paragraphs of mild irritation describing, for instance, how Donovan felt obliged to obtain an unlisted phone number because drunks were calling to harass him in the middle of the night. Bridge of Spies is a film about the Communist infiltration of the United States, but in Steven Spielberg’s telling, it’s ordinary New Yorkers who seem more repellent than Rudolf Abel as they shoot dirty looks at Donovan when he rides to work on the subway—and then shoot up his home.
Bridge of Spies arrived in the fall at the annual moment when Hollywood begins leveraging history in pursuit of awards glory. Three of the past five Oscar winners for Best Picture were based on true stories, and last year four of the eight Best Picture nominees were fictionalized biographies. A foundation of reality serves to elevate a film’s importance, to reassure the filmmaking community that “the Industry,” as it calls itself, at its best produces more than just meretricious assemblages of gross-out gags and superhero exploits. These films are supposedly driven by a didactic purpose that is meant to inform our lives as citizens and moral thinkers. Revisiting historical dilemmas delivers an imprimatur of seriousness to artists who are keenly sensitive to charges that they are in a frivolous business.
And yet anyone who has made a habit of comparing fact-based films with their real-life antecedents can hardly avoid noticing the shamelessness with which Hollywood alters history both for the sake of a better yarn and to suit its political, indeed polemical, purposes. Last year The Imitation Game portrayed the ingratitude and homophobia of the British state as being so extreme that it investigated code-breaking war hero Alan Turing for being a spy and in so doing exposed him as a homosexual. That didn’t happen; in reality, Turing’s sexuality was revealed when he reported a petty theft and lied about the details. At around the same time came the release of Kill the Messenger, a movie about the disgraced San Jose Mercury News reporter Gary Webb. Webb had purported to show that the CIA was behind the 1980s crack epidemic. The film portrayed Webb, who eventually committed suicide, as a martyr to the truth undone by jealous rivals rather than his own egregiously flawed work. The financial-crisis comedy-drama The Big Short, which is based on well-documented history that happened only seven or so years ago, even makes a joke out of the fictional distortion of the record: A character who hits pay dirt when a fellow financier accidentally leaves a sheaf of tantalizing documents lying around turns to the camera and explains that things didn’t really happen this way but it makes for a better story.
This fall, top-tier talent starred in three major prestige projects—Bridge of Spies, Truth, and Trumbo—that present themselves as needful, even urgent, lessons. Each is built on misleading implications, half-truths, and plain old lies. The purpose is in large part to advance a leftist narrative likely to please the nearly unanimously hard-left blocs of voters who bestow the various critical and trade-group awards. And, in part, to make the filmmakers feel as though they are bravely speaking truth to the unenlightened masses, facts be damned. In this way they are analogues to their own subjects as they see them—courageous men and women who stick to their principles no matter how costly that might be and how ugly the forces arrayed against them are.

Bridge of Spies is typical Hollywood myth-making in that it is false on two levels. The lesser level is that of incident, of juicing the details to make a more riveting tale and to create a role more attractive for Hanks, who is so wary of playing anything other than likeable, principled, and trustworthy that he is gradually becoming a sort of Madame Tussaud’s wax figure of himself. So: Donovan’s house wasn’t attacked by gunfire, he didn’t witness East Germans getting gunned down at the Berlin Wall, didn’t get mugged for his overcoat by a gang of East German youths, wasn’t harassed by the East German police, and didn’t have to overcome the hostility of the CIA up to and including the moments at the Glienicke Bridge where Donovan secured the release of both the downed U-2 pilot Francis Gary Powers and a young American economics graduate student named Frederic Pryor, who was being held by East Berlin police. In the film, the CIA is so uninterested in Pryor’s release that the agency effectively works at cross-purposes to Donovan, who insists that both men must be freed. “That was the biggest error,” Pryor said this fall. “It didn’t happen like it did in the movie at all.”
Nor did Pryor dramatically get caught in East Berlin while momentarily venturing from West to East to help a woman at the exact moment when the cement and barbed wire of the Wall were hastily being thrown across that section of Berlin. Pryor didn’t even know until last summer that a movie that dramatized events in his life was in the works (Bridge of Spies had already been filmed by then). He hadn’t been allowed to see, much less comment on, the script.
The more crucial failing of the film is that it is false in its moral framework. Take the dishonesty implicit in Rylance’s portrayal of Rudolf Abel: The film seems incredulous that the life of this sniffling, stoic, little man hangs in the balance because of abstract world-historical spats. Spielberg can’t make the case that Abel was wrongfully accused (overwhelming is too mild a word to characterize the evidence against him), but Abel comes across as a lamb in the whirlwind. Spielberg simply has no interest either in what role Abel was attempting to play in committing espionage against the United States or in the broader question of what havoc was wreaked by the clandestine activities of Soviet plants and the American traitors who worked alongside them. We learn from Donovan’s memoir that the government believed Abel was not just a spy but the spy—the man who “for nine years directed the entire Soviet espionage network in North America.”
Spielberg devotes several scenes to the skullduggery of the U-2 spy-plane operations that resulted in the shooting down and capture of pilot Francis Gary Powers in Russia in 1960. The implication is: We spied on them, they spied on us, what’s the difference? As William F. Buckley Jr. used to point out, if one fellow pushes an old lady into the path of an oncoming bus and another pushes her out of the way of same, it won’t do to describe both men as the kind who push old ladies around.
Spielberg and Hanks’s Donovan gives several statements, or sermons, about how the Constitution guarantees the right to legal counsel even for illegal aliens trying to destroy the United States. The Constitution is “what makes us Americans. It’s all that makes us Americans,” Donovan declares. A nice thought, but that still doesn’t obligate Donovan to work for a Soviet agent any more than it obligates any individual lawyer to defend, say, Dylann Roof. If anything, the question of which clients to accept is an issue for ethicists of the Bar, but “I’m defending a spy because the Bar Association asked me to” isn’t quite so resonant a declaration as one that invokes the Constitution. Does Spielberg’s fondness for that document extend to, say, the 10th Amendment? The Second?
Spielberg, like virtually all of Hollywood, thought either that Communists in America were a phantom threat (they weren’t), that they were idealists (not true), or, at worst, that there was little to no moral difference between the Soviet Union and the United States in the Cold War. These are lies on a scale with excusing slavery. They are lies, indeed, comparable to Spielberg’s finding, in Munich, moral equivalence between Mossad and the murderers of the Israeli athletes at the 1972 Summer Olympics. Murderers and their executioners are both guilty of killing, after all. It’s such a jejune view that one could make the case that nearly all of Spielberg’s work falls into two categories: children’s films and disguised children’s films.
More childish than anything in Bridge of Spies, though, is the guffaw-worthy moment in Trumbo, another finger-wagging film, when the Stalinist screenwriter Dalton Trumbo is asked by his daughter what Communism is all about—and he replies that if you were having lunch and noticed someone nearby didn’t have anything to eat, you would of course share your sandwich. As Trumbo well knew, if Josef Stalin had been a lunch lady he would have been the kind who took the sandwiches away from both children and encouraged them to inform on each other before having both of them shot.
Like Bridge of Spies, Trumbo is a film about persecution of Communists in America, and its creators hope that at Oscar time its mediocre quality and mangling of history will be forgiven because of its liberal posturing. Directed by Jay Roach (best known for comedies such as Meet the Parents) and starring Bryan Cranston in an acting performance that may politely be called overemphatic, the film presents its subject, the author of screenplays for films such as Roman Holiday and Spartacus, as a First Amendment hero while his friend Edward G. Robinson (Michael Stuhlbarg) comes across as a craven sycophant for telling the House Un-American Activities Committee things it already knew about the widespread Communist ties in Hollywood.
But Trumbo’s was not actually a First Amendment case. The event that led to Trumbo’s imprisonment and blacklisting was not his speech but his silence when questioned by HUAC. It was textbook Fifth Amendment stuff—except Trumbo refused to plead the Fifth. Doing so would have been tantamount to a public admission of harboring Communist sympathies, which he thought would cost him jobs and possibly his career as a screenwriter (even though his longtime backing of the Party was well known in Hollywood, he was an actual formal member of the Party from 1943 to 1948, and he rejoined briefly in 1956). Though you could certainly argue that HUAC’s inquiries were improper, the committee members led by New Jersey Congressman J. Parnell Thomas were, for Trumbo, merely the valets who opened the door to the genuine threat. It wasn’t HUAC that Trumbo most feared; it was the studio head Louis B. Mayer. (Trumbo’s lawyer’s petition to have his client receive relief on First Amendment grounds was denied certiorari by the Supreme Court.)
The word blacklist is itself a scary-sounding piece of propaganda popularized by the left to poison discussion of the era. Trumbo and the Hollywood Ten were simply publicly fired by their employers. There was nothing “black” or secretive about it. The Waldorf Agreement, a policy hashed out by leading executives and producers at the hotel of that name in New York City, led to a press release that announced Trumbo and the rest of the Hollywood Ten were no longer considered fit to work at the studios. The left should not still be feigning outrage at this. It’s entirely reasonable for a private company to terminate someone for holding ideas it considers antithetical to the firm’s values, and the left is usually the first to demand someone lose his job for even mild dissent from prevailing norms, much less doing Stalin’s bidding. If it became public knowledge, in 2015, that a screenwriter was once a paid-up member of the KKK, refused to distance himself from it, and indeed was successfully sneaking pro-Klan messages into screenplays, he would instantly find himself unemployable in Hollywood. No one would rush to his defense. The First Amendment would not be at issue; moreover, the cultural pooh-bahs would likely welcome congressional inquiries to investigate the activities of such a profoundly un-American group, even while granting that it’s no crime to hold any particular ideology. Yet Communism in practice did far more damage, cost more lives, and posed a much more serious threat to American values—indeed, to America’s continued existence—than the KKK ever did.
Trumbo’s notion of being honest about its subject extends as far as showing the writer drinking too much and being rude to his children (though both habits, we are made to understand, are due to job pressures), but it is silent on the matter of Trumbo’s principles being flexible when it came to naming names. Which he did. He wrote a letter to the FBI around 1944 identifying anti-war citizens who had written angrily to him when, after Hitler’s invasion of the Soviet Union made the Soviets and the U.S. allies, he instantly switched from pacifism to ardent encouragement of U.S. entry into the war. Trumbo published his pacifist World War I novel Johnny Got His Gun in 1939 for the propaganda purposes of scaring Americans off going to war with Germany again. The book was so beloved by Hitler’s Soviet allies that it was serialized in the Daily Worker in 1940. Then, after the party line changed to support war with Germany, Trumbo suspended the book from being reprinted, effectively burying it for the duration. Nor did Trumbo oppose blacklisting per se; as a powerful Hollywood presence, he bragged (in the Daily Worker) that he and other Communists used their gatekeeping privileges to help quash a film version of Arthur Koestler’s Darkness at Noon and other rebukes to Communism.
Trumbo is also blithe about inventing details or making misleading use of them. A feisty, noble, far-left screenwriter dying of lung cancer played by the comedian Louis C.K. is a fictional character. Trumbo did not encounter his House tormentor J. Parnell Thomas in federal prison (though Thomas, who was convicted of fraud, did do time at the same penitentiary where Trumbo’s fellow Hollywood Ten member Ring Lardner Jr. was sent). Trumbo did use Benzedrine and work at a ferocious pace while he was out of favor in Hollywood, but that was pretty much how he worked when he was in favor, too. (“He was evidently as unable to work without the constant, nagging demands of time and money on him as are many newspapermen who can write only to deadline,” writes his biographer Bruce Cook.)
The movie’s portrayal of Trumbo as having no choice but to work for the schlocky, low-budget producer Frank King (played by John Goodman) overstates the extent of Trumbo’s struggle in purgatory. He did sell his L.A. ranch and move to Mexico City—but there he hired a house full of servants and became an avid collector of pre-Columbian art. He did all this while being hounded by the IRS to pay back income taxes on work he did before his HUAC encounter. Trumbo can be said to have been in dire financial straits only if one considers his extravagant lifestyle choices to have been nonnegotiable. For instance, shortly before going to prison for 11 months in 1950 for contempt of Congress, he wrote three treatments in three weeks. One of them sold for $40,000—roughly $400,000 in today’s money—for a week’s work. Trumbo also earned $40,000 (and a posthumous Oscar in 1993) for Roman Holiday—of which, by the way, he was not the sole author, pace the film. His friend Ian McLellan Hunter, who agreed to serve as the credited writer in order to sell the project to Paramount, “greatly improved the script” with a rewrite, Trumbo told Cook.
Even B-movies like the 1956 picture The Boss brought in real money: $7,500. A four-day job rewriting Terror in a Texas Town brought in $1,000 in an era when $25,000 a year was a princely sum. Though Trumbo worked under pseudonyms and used agents to sell his scripts, the network of independent producers knew exactly what he was up to and worked with him directly. And Frank King and his brothers arranged for an investor in their company to provide Trumbo with a beautiful house on a large property in Highland Park. In other words, being blacklisted only modestly altered the career of Dalton Trumbo. He suffered from lost opportunities since he could have produced other work for the studios. But a martyr he was not.
Nor, it hardly need be said, is Dan Rather, the former anchorman of CBS News, but another fall film made that case. In the later stages of the movie that shamelessly calls itself Truth, Rather’s producer, Mary Mapes, is interviewing a Texan named Bill Burkett in 2004 as he relates a preposterous story. Mapes, played by Cate Blanchett, looks at Robert Redford’s Dan Rather and makes the ding-a-ling gesture, circling an index finger around her temple. The moment is played for laughs, but the obtuseness of writer-director James Vanderbilt is dumbfounding. Burkett was the sole source of the documents questioning George W. Bush’s 1970s military service—the very documents that Mapes, to her ruin and Rather’s, had put on the air that September. If Burkett isn’t trustworthy, Mapes’s story has no foundation. It means Mapes broadcast information whose provenance she didn’t bother to check in the first place, and when the source turned out to be a nutcase, she shrugged. In this one moment, Truth unknowingly deconstructs itself.
Indeed, the entire movie is so willfully self-deceiving that it amounts to a masterpiece of question-begging. The argument it makes via Mapes’s and Rather’s point of view is essentially this: We know George W. Bush shirked his duties in the Texas Air National Guard. We have the documents to prove it. Oh, the documents are fake? That doesn’t matter—you’re missing the point, which is that we know George W. Bush shirked his duties in the Texas Air National Guard. Why are you bugging us with all of this kibitzing about how the documents were typed in a computer font that didn’t exist in 1972? Besides (as Rather keeps saying to this day) no one “ever established that the documents were forged.”
Actually, that was established beyond a reasonable doubt. The CBS-commissioned review of the fiasco led by former Attorney General Dick Thornburgh and Associated Press chief Louis Boccardi concluded that it could not say with “absolute certainty” that the documents—which were obviously produced on a 21st-century word processor, not a 1970s typewriter—were forgeries. But that was a mere kindness, like giving a blindfold to a man who is about to be shot. The Thornburgh-Boccardi report made it clear that the documents were overwhelmingly likely to be fakes. Anyway, “there’s a slight chance our story might be true” is not ordinarily regarded as the standard of a professional journalist. The burden of proof was on Rather and Mapes to authenticate the documents before they put them on the air. They not only failed to do this—two of the document experts they spoke to raised red flags—but made only meaningless gestures in the direction of authentication. For instance, in an important and spectacularly misleading scene in Truth, Vanderbilt shows Mapes calling General Bobby Hodges, who held high rank in the Texas Air National Guard when Bush was in it. Mapes runs the documents by him, and he says they accurately reflected the state of mind of Bush’s commander, Lieut. Colonel Jerry Killian. In reality, according to what Hodges told the Thornburgh-Boccardi commission, Mapes did not call him to get his opinion on whether the documents were authentic. She simply read him their contents. He replied, in effect, that if that’s what Killian said, then that must be what he thought. Hodges wasn’t asked to check facts. Mapes was simply tricking him into service as her prop and hoping no one would find out.
Deceiving an audience and hoping it never does any homework is what filmmakers shooting for Oscar glory do all the time. Unfortunately for today’s directors, the historical importance they implicitly claim when campaigning for awards occasionally attracts scrutiny from outside the Hollywood bubble, people who live lives outside of the movies. Invariably this catches the dream merchants off-guard. You mean I’m not allowed to restructure reality to fit my message? Last year a former aide to Lyndon B. Johnson, Joseph Califano, single-handedly destroyed the Oscar chances of Selma when he pointed out, in a Washington Post op-ed, that President Johnson was an ally, not an outfoxed opponent, of Martin Luther King Jr. in the struggle for civil rights. The movie’s director, Ava DuVernay, responded that she didn’t want to muddle her story of black victimization and courage by showing white people in a good light (“I didn’t want to make another white-savior movie”).
Filmmakers turn to history and find it too complicated, or its morals too messy, or even its facts uncongenial, so they alter whatever they wish to alter and hope people don’t notice. This matters, for several reasons: Films have long lifespans, they often create permanent misapprehension in the minds of the young and lazy, and they are at the point of the spear that is the left’s effort to discredit the idea of truth itself, facts proving so vexingly inconvenient to so many of its narratives. Today on campus epithets like “mansplaining” or “whitesplaining” are becoming accepted as reasonable, indeed withering, responses to assertions of fact; it’s a sign that the left will decline to get involved in the niceties of truth and skip straight to ad hominem attacks, with sex and race used as disqualifiers. Already one sees these terms working their way into young-progressive opinion factories such as the New Republic and Think Progress, the recruiting grounds for mainstream media such as the New York Times. Every time we let a fresh instance of progressive Hollywood agitprop seep into the American consciousness unchallenged, we forgo an opportunity to remind the public how many times the left has chosen the wrong side and then lied about what the real issues were.
‘Water Engineers Will Be Its Heroes’
Israel has addressed its challenge of water scarcity. Can it be a model for others?
Half a century ago, the dream of making the deserts bloom with seemingly unlimited supplies of fresh water was promoted by President Dwight D. Eisenhower and the man he had once appointed chairman of the U.S. Atomic Energy Commission, Lewis Strauss. In 1953, Strauss had helped author Eisenhower’s “Atoms for Peace” plan. Fifteen years later, in the aftermath of the Six-Day War of 1967, they proposed another nuclear initiative. This one they called “Water for Peace.” It envisioned the construction of three large-scale nuclear-power plants to desalt seawater—one each for Egypt, Israel, and Jordan. “The sweet water produced by these huge plants would cost not more than 15 cents per 1,000 gallons,” Eisenhower wrote in Reader’s Digest. It would make “the desert lands of this earth bloom for human need” and “promote peace in a deeply troubled area of the world.” Strauss contended the proposal could solve the two main problems troubling the Middle East—a lack of water and the Palestinian refugees—and thereby provide a way out of the “morass in which the powers are floundering.”
Despite gaining political support in some important quarters, not least from then-President Lyndon B. Johnson, the proposal went nowhere. That was partly because “the reasoning was naive, to put it mildly,” as the late Malcolm Kerr, one of the country’s leading experts on the Arab world, put it. There was “nothing,” he wrote, “in the atmosphere of the Arab world that was receptive to another grandiose American scheme.” But it also foundered because the economics made no sense—a point that was argued in some detail both in a 1969 study of my own and, two years earlier, in a 1967 study by William E. Hoehn, an economist with whom I worked at the RAND Corporation. Even with extremely optimistic assumptions about critical variables such as plant-utilization rates and interest rates, just the cost of water alone to produce a crop of cotton would have exceeded the gross value of the entire crop. At the time, Israel had more than 30,000 hectares of land producing irrigated cotton. It would have made more sense for Israel to shift to higher-value uses or to stop growing cotton altogether rather than produce expensive desalted seawater at such prices.
Our larger concern, however, was less about the economics than about the consequences of encouraging the development of “peaceful” nuclear energy—with all of its potential military dimensions—in an energy-rich region like the Middle East. That would have raised the very risks of nuclear proliferation that we are seeing today, with countries such as Saudi Arabia starting to copy the example of Iran by undertaking ambitious nuclear-energy programs of their own.
True, none of the three recipients of the water envisioned in the Eisenhower-Strauss plan was energy-rich at the time, but the subsidies needed for nuclear desalting could have been more rationally applied to transporting oil and gas from places where they were plentiful. Even today, there is little economic rationale for nuclear power for the countries of the Persian Gulf, despite Iran’s claimed need for “peaceful” nuclear energy. (Iran alone burned off or “flared” some 10 billion cubic meters of waste natural gas in 2012, making it third in the world behind Russia and Nigeria.)
In his recent book, Let There Be Water: Israel’s Solution for a Water-Starved World (Thomas Dunne Books, 352 pages), the New York businessman Seth M. Siegel contends not only that Israel has solved its water problems but that it can even become “a model for a world in crisis,” a world in which there is increasing pressure on global water supplies. “Israel,” he says, “not only doesn’t have a water crisis, it has a water surplus. It even exports water to some of its neighbors.”
Israel has indeed made enormous progress in two of the three ways that Siegel highlights. It has substantially reduced consumption of scarce water resources, partly through technical innovations such as drip irrigation and improved metering and—probably more important—through the introduction of realistic pricing. Israel has also increased its supply of usable water through recycling or, to state it more plainly, by processing and reusing sewage. These two measures—conservation and recycling—are indeed measures that can provide “solutions for a water-starved world,” as Siegel’s subtitle suggests.
The question is how definitive these solutions might be. Let There Be Water is the work of an enthusiast, and Siegel’s enthusiasm leads him to overstate his case in several respects. First, restricting water use, particularly by repricing water and recycling sewage for agricultural use, is difficult to implement unless water shortages become acute. And even under such conditions, there’s a problem of scale. For example, even at a time of severe shortage, California agriculture consumes almost four times as much water as the state’s urban users do. That’s a long way from the Israeli model, where agriculture now consumes only one-third of the country’s supply.
Moreover, it is questionable whether desalination, the third part of the Israeli formula, will ever be able to provide water for the world in large and affordable quantities. In his study, Water 4.0 (Yale University Press, 352 pages), David Sedlak of the University of California, Berkeley, describes the past 40 years of progress in desalination as the equivalent of moving from the gas-guzzling luxury cars of the 1960s to modern, well-engineered SUVs. The problem is that desalination is a relatively mature technology now and is unlikely to be the subject of breakthroughs that advance it far beyond its current standing. The “laws of physics,” Sedlak writes, “make it unlikely that we will ever fill the desalination highway with a bunch of compact hybrid vehicles.”
When Sedlak speaks about the limitations of the “laws of physics,” he is referring to the Second Law of Thermodynamics, which sets the minimum energy necessary to convert a high-entropy system—in this case, a solution of water and salt—into a lower-entropy one—in this case, separated salt and water. The “reverse osmosis” process, which is the basis of Israel’s large-scale desalination program, is more energy-efficient than the earlier processes on which the nuclear desalination proposals of 50 years ago were based, but it is still energy-intensive and hence expensive.
A number of new technologies under development could improve the energy efficiency of desalination.1 But even these new technologies will encounter the minimum energy requirement dictated by the Second Law. And even if they begin to approach the theoretical minimum, that is still unlikely to produce water that is cheap enough for agriculture, absent some breakthrough in energy production or in agricultural technology or a catastrophic increase in the cost of agricultural products.
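To put rough numbers on that thermodynamic floor (these are standard textbook figures, not values drawn from Sedlak’s book): the minimum work needed to extract a small quantity of fresh water from seawater equals the seawater’s osmotic pressure, about 27 bar, which works out to

\[
W_{\min} \approx 2.7\ \mathrm{MJ/m^3} \approx 0.75\ \mathrm{kWh}\ \text{per cubic meter},
\]

and closer to 1 kWh per cubic meter at the recovery rates that working plants must sustain. Modern reverse-osmosis plants consume on the order of 3 to 4 kWh per cubic meter all told, meaning they already operate within a factor of four or five of the physical limit. That is why order-of-magnitude improvements are not in the cards.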
Sedlak does praise Israel for the advances it has made in reducing the cost of desalinated water through economies of scale, incremental design changes, and utilization of existing infrastructure. But these are the kinds of improvements that can be squeezed out of a mature technology, not order-of-magnitude breakthroughs. The problem is that irrigating a single acre of cropland can easily require as much as 600,000 gallons of water. Considering that amount, even Eisenhower’s projected cost of 15 cents per 1,000 gallons becomes prohibitively expensive for agriculture. Israel’s desalination plants, which now provide 17 percent of the country’s water supply, do so, according to Sedlak, at a cost of $1.90 per 1,000 gallons, or more than $600 per acre-foot. This is to say nothing of the cost of transporting water over long distances to higher elevations. At that price, desalinated water cannot feed a hungry world at affordable prices. Farmers still need inexpensive water.
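The arithmetic behind these figures is straightforward to check. An acre-foot is about 326,000 gallons, so

\[
\frac{\$1.90}{1{,}000\ \mathrm{gal}} \times 326{,}000\ \mathrm{gal} \approx \$620\ \text{per acre-foot},
\]

and a crop requiring 600,000 gallons per acre carries a water bill of roughly $1,140 per acre at Sedlak’s $1.90 rate. Even at Eisenhower’s hoped-for 15 cents per 1,000 gallons, the same acre would cost $90 in water, before a single gallon is pumped inland or uphill.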
Let There Be Water tells the story of a remarkable series of strong-willed, visionary individuals who created what Siegel calls a “water-respecting culture” and built the infrastructure and produced the innovations that have enabled Israel to make exceptionally good use of scarce water resources. Among them are well-known national leaders—including two prime ministers, David Ben-Gurion and Levi Eshkol—and two Americans who also played important roles, Walter Clay Lowdermilk and Eric Johnston.
Lowdermilk was an American soil scientist sent by the U.S. Department of Agriculture to make a comprehensive survey of the soil of Europe, North Africa, and Palestine in 1938. “Appalled” by the general condition of soil in Palestine, he was also “astonished” by the reclamation efforts of the Zionists, which he called the “most remarkable work” observed in his long journey.
Enamored with the Zionist mission and believing it to be positive for both Arabs and Jews, Lowdermilk published Palestine, Land of Promise in 1944. It appeared at a critical time, when the prevailing British analysis said that the territory of Palestine could only sustain a population of 2 million at most (as compared with the 12 million of today). In contrast, Lowdermilk wrote, “The absorptive capacity of any country…changes with the ability of the population to make maximum use of its land, and to put its economy on a scientific and productive basis.” Reportedly, his book was found open on President Roosevelt’s desk when he died.
This controversy about Palestine’s absorptive capacity explains why Kerr regarded the later Eisenhower-Strauss proposal as so naive. For the Arabs, as well as for the Jews, more water would mean more possibility for Jewish immigration. But what was sauce for the goose, in this case, was definitely not sauce for the gander.
Eric Johnston, a leading Republican and the head of the Motion Picture Association of America, was dispatched by President Eisenhower in 1953 as a special ambassador to find a diplomatic solution for the allocation of the waters of the Jordan River. Johnston concluded that water allocations should be based on the principle of using all available water resources “without undue waste, and that the volume of crops that can be grown in the region should be the paramount criterion of desirability.” Johnston, according to Siegel, was able to get “the water technocrats in each Arab country to recognize his revised plan as the basis for a fair allocation of the Jordan River for each party’s use.” This opened the way for the construction of Israel’s ambitious National Water Carrier, which transports water from the Sea of Galilee to the arid northern Negev.
Siegel traces the role of water in Israel’s history with an anecdotal account that starts not 50 years ago but more than a hundred, when Theodor Herzl engineered a meeting with Germany’s Kaiser Wilhelm II during the latter’s visit to Jerusalem in 1898. “This country needs nothing but water and shade to have a very great future,” Siegel quotes the Kaiser as saying.2 Herzl himself was an enthusiast about developing water resources. In his utopian novel Altneuland, he fantasized that “every drop of water” in his imaginary Jewish homeland would be “exploited for the public good,” and that the “water engineers will be its heroes.”
Indeed, Siegel’s principal focus is on Herzl’s water engineers, and he has a particular and understandable fascination with what he calls the “unsung heroes,” visionary designers and planners, many of whom were dismissed as dreamers early in their careers only to be vindicated late in life, if at all.
The man Siegel describes as “the central character in leading the thinking and planning about Israel’s water” was Simcha Blass, a water engineer from Poland who immigrated in the 1930s. His influence was extraordinary over a wide variety of water initiatives. Using the diversion of Colorado River water to Los Angeles as his model, he conceived and designed the “fantasy plan” that became the National Water Carrier. He discovered the water in the Negev desert that made it possible in 1946 to implement Ben-Gurion’s idea of creating 11 new settlements to establish Israel’s claim to what had been a vast and largely empty wasteland. In the 1930s, Blass formed a close working relationship with Levi Eshkol, and together they created what became the state-owned water company, Mekorot. And, in the 1950s, a second state-owned company for water planning, TAHAL, was created around him.
When the National Water Carrier became a national project, Blass’s new company was given the planning responsibility. But the task of building it was assigned to his old company, Mekorot. Unhappy that he was not in charge of the whole project, Blass quit his government positions and “went home to wait for the call telling him that he was right after all. That call never came.” When the Water Carrier was officially opened in 1964, Walter Clay Lowdermilk came from the United States as an honored guest. “There is no record,” Siegel writes, “of Simcha Blass having been invited to or attending the ceremonies.”
Blass was not just sitting home sulking. He was pursuing something much more important, an idea he had stumbled upon by chance that has come to be known as “drip irrigation.” Scorned by the experts on the agricultural faculty of the Hebrew University—except for one junior faculty member named Dan Goldberg, who was himself dismissed by his more senior colleagues—Blass went into partnership with Kibbutz Hatzerim, one of those 11 original Negev settlements. Through a series of partnerships with other highly socialist kibbutzim, they created what became Netafim, a large, privately owned company that now dominates the $2.5 billion micro-irrigation market. Netafim has formed partnerships in China, India, and Vietnam among other places, and its products are widely used in the American Southwest. According to Siegel, “Blass lived the rest of his life at a level of comfort not possible on an Israeli government pension.”
Drip irrigation has not only produced huge reductions in agricultural water use but has also vastly increased yields. In one experiment in India, reported on the Netafim website, cotton yields were almost doubled while water use was reduced by 40 percent. Israel has also been a world leader in producing new varieties of crops that require less water or less expensive water.
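Those figures imply a striking gain in water productivity. Assuming the yield and water numbers refer to the same fields (they are Netafim’s own reported results, as cited above), output per unit of water rose by a factor of

\[
\frac{2.0}{0.6} \approx 3.3,
\]

since nearly twice the cotton was grown with only 60 percent of the water.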
Israel has made great strides in reducing household water consumption as well. In 2000, the use of dual-flush toilets (an Israeli invention, Siegel was told) was made mandatory, a measure that cuts in half the 35 percent of household water consumed by flushing toilets; in effect, it saves roughly 17 percent of total household use.
When it comes to water recycling, Siegel’s hero is a chemical engineer named Eytan Levy who co-founded “two of the most talked about companies in wastewater treatment.” The first one, Aqwise, has a process that greatly increases the efficiency of the bacteria that are used in secondary sewage treatment. The second one, Emefcy, reduces the volume of sludge created in secondary treatment.
In addition to conservation and recycling, the third area that has been important for Israel’s water management has been desalination. This is not the energy-intensive steam distillation envisioned in the nuclear projects of a half-century ago. It is based instead on reverse osmosis, a process in which salty water is pushed through a membrane that allows the water molecules, but not the dissolved salt, to pass.
Although the concept had been understood for a long time, the big challenge was to produce membranes that could function efficiently. In 1963, two graduate students at UCLA, Sidney Loeb and Srinivasa Sourirajan, and their professor, Samuel Yuster, developed a more porous membrane that could produce freshwater about 10 times faster than any of its predecessors. They demonstrated the concept in a full-size plant in Coalinga, a small farming community in California where water from the local aquifer was too salty to drink.
Loeb was born in Kansas and eventually immigrated to Israel, where he continued pursuing his research in reverse osmosis. Siegel laments that Loeb never received adequate recognition and that he died in 2008 before he could see how “seawater reverse-osmosis desalination would change Israel and the world.” But despite his concern for “unsung heroes,” Siegel makes no mention of the other two Americans, or of advances in reverse osmosis derived from projects in places as diverse as Japan, the Canary Islands, and Australia. On the former point, Siegel is in good company; even the New York Times had to publish a correction this past June for an article that “referred imprecisely” to Sidney Loeb as the “sole inventor” of the reverse-osmosis method. But the impression Siegel creates that reverse osmosis is a largely Israeli development, stemming from Israel’s role as a “start-up nation,” reflects an unfortunate tendency toward boosterism.
Israel has definitely become the world leader in desalination with a network of six coastal desalination plants—the first came on line in Ashkelon in 2005—which together produce more than 500 million cubic meters per year and account for 80 percent of total domestic water use. Siegel quotes Ilan Cohen, a former top aide to two Israeli prime ministers, who describes “desalination and reusing wastewater” as a “paradigm shift.” Cohen says, “Today, we are in a period like the dawn of agriculture. Prehistoric man had to go where the food was. Now, agriculture is an industry. Until recently, we had to go where the water was. But no longer.”
This juicy quote offers a fair representation of the strengths and weaknesses of Siegel’s book. Let There Be Water is a readable account, which is quite an accomplishment for, shall we say, so dry a subject. But for a reader who knows nothing about mundane matters like the cost of water, or its value in various uses, or the allocation of water to those uses, Let There Be Water does not provide the information necessary to assess some of its claims—like, for example, the assertion that Israeli desalination methods can provide affordable water for California. In truth, those methods have yet to make sufficient water available to the West Bank and the Jordan Valley, where the cost of transporting desalinated water from the Mediterranean to higher elevations is substantial.
As we’ve seen, desalted water is still expensive water. Israel has a population of roughly 8 million, most of whom live near the coast and are connected by an already-existing national water infrastructure in an area of 8,000 square miles. California has nearly 39 million residents spread over more than 160,000 square miles. Not only would desalinated water in California cost $1.90 per thousand gallons, it would still have to be transported to faraway locations at higher elevations. It is simply untrue, in California and in many other places, that agriculture no longer “has to go where the water is.”
Even in Israel, desalinated water is affordable only after the extraordinary efficiencies in both domestic and agricultural use that Siegel describes have been achieved. Although Siegel doesn’t mention this, cotton acreage in Israel has shrunk to roughly an eighth of its 1985 peak, presumably a result of higher water prices. Overall, agriculture’s share of water use in Israel has declined dramatically, from 80 percent in the 1960s to 48 percent in 2012. Israeli agriculture today uses less than one-third of the country’s potable water supplies, an enormous change in water-consumption patterns.
It’s also important to note that the Israeli government is providing its agricultural sector with an indirect price subsidy. Siegel never says what Israel’s consumers actually pay for water, but he asserts that they all “pay the same price,” whether they live “adjacent to a well” or “on a mountain that requires expensive pumping.” Siegel acknowledges that “this nationally blended price means that not everyone pays their personal real cost for the water they use,” but he believes it results in “everyone having a common unifying stake in conservation and innovation.” What that means, essentially, is that in Israel, household water-users are subsidizing the country’s use of water for agriculture. That may work in Israel, but a similar indirect subsidy couldn’t possibly sustain an agriculture sector that consumes 80 percent of the water supply, as in California.
It would be exciting to think that desalination could provide affordable water anywhere it is needed. But that will be the case only if the meaning of what is “affordable” changes. That will come about only with a revolutionary change in the condition of agriculture—and I don’t mean a technological revolution that would make things better, but a terrifying increase in water scarcity that would make things much worse. Such scarcity would lead to a spiraling of the cost of producing agricultural goods and threaten the agricultural abundance we have come to take for granted. It is only under such conditions that the expense of desalination and the discipline imposed by water conservation and recycling would become both politically feasible and financially sound. Such a crisis may yet afflict us—but fortunately it does not afflict us now. Touting the technological fix of desalination might inadvertently provide an excuse for postponing the difficult choices needed to make better use of the resources we have now.
Making the best use of the resources it has and the technology that can be brought to bear on them is exactly what Israel has done for itself, as Siegel explains in Let There Be Water. Israel deserves to be celebrated for this singular achievement. But it is just that—a singular achievement, with limited application to the United States.

2 But the Kaiser did not, as Siegel also claims, give Herzl “reason to think that he would be an ardent supporter” of creating a Jewish state. In his diary, Herzl describes the Kaiser as non-committal on the larger Zionist project, saying neither “yes nor no.” So much so that Herzl had to buck up his downhearted companions saying, as he records, “that is why I am the leader….I am fearless, and therefore…[a]t difficult moments such as these, I do not despair.” Perhaps it was the good luck of the Zionists that the Kaiser was not more enthusiastic, or they might have leaned toward Germany rather than Britain in World War I, with very different historic consequences.
San Bernardino: The Rush to Non-Judgment
Mediacracy
Married couple slaughters 14 at a holiday party in San Bernardino, California. Assailants flee scene in black SUV. Police give chase and kill the murderers in a shootout. On December 2, America watched these events unfold in silence.

The media, however, were not so silent. They were busy trying to make the attack conform to their preferred narrative of right-wing extremism fueling gun violence, and downplaying to the greatest extent possible the role that Islamist ideology played in the killings. Minutes after breaking the news of the shooting, CNN told its viewers the killing spree was happening blocks from a Planned Parenthood facility. Implication: This incident must be related to the previous week’s murder of three people outside a clinic in Colorado. “Planned Parenthood Clinic Across Street from San Bernardino Shooting,” liberal pundit Alan Colmes wrote hurriedly on his website.

Except the clinic was actually more than a mile away. And was unaffected. And had never been a target. Which did not stop liberals who believe the worst of pro-lifers from jumping to inane conclusions about the possible identities of the culprits, or from immediately classifying the attack as another bloody episode in America’s tragic “gun culture.” A New York Times editorial sniffed, “There will be post-mortems and an official search for a ‘motive’ for this latest gun atrocity, as if something explicable had happened.” What did the Times think had happened? A spontaneous combustion?
“The one thing we do know,” President Obama told CBS News that evening, “is that we have a pattern now of mass shootings in this country that has no parallel anywhere else in the world.” He added: “We don’t yet know what the motives of the shooters are . . . but what we do know is that there are steps we can take to make Americans safer,” such as implementing gun-control proposals that have no chance in Congress and would have done nothing to prevent the married murderers from obtaining their weapons.
By the morning after the attack, we knew the names of the killers: Syed Farook and Tashfeen Malik. We knew that during the rampage the couple had worn tactical gear, had been armed with multiple weapons, and had left behind explosive devices. We knew, according to the local police chief, that “there had to be some degree of planning that went into this.” We knew Farook and Malik were Muslim, that their murder spree took place weeks after the ISIS attack in Paris, and that terrorism was not being ruled out by the authorities.
We did not need Sherlock Holmes or Hercule Poirot or Father Brown to figure out what was going on. Here was another instantiation of the growing power of Islamist ideology. Islamic terrorism had just struck its worst blow on American soil since 9/11. Yet liberals in the media and in politics did all they could to delay acknowledging exactly this fact.
On December 4, the FBI announced it was treating San Bernardino as an act of terrorism. Malik had pledged allegiance to the Islamic State. A media outlet associated with Islamic State claimed its supporters were responsible for the bloodletting. The FBI had discovered, according to the New York Times, “12 completed pipe bombs and a stockpile of thousands of rounds of ammunition” inside Farook and Malik’s apartment. The duo had “destroyed several electronic devices, including two smashed cellphones found in a trash can near their home, and erased emails.” This was not Charles Whitman in the University of Texas bell tower. This was Nicholas Brody in an episode of Homeland.
And yet the same news stories containing such damning evidence of a terrorist plot that at the very least drew inspiration from overseas also went out of their way to say the motive of the killers was unknown. “The exact motives,” reported the Times, “remain unknown, and law-enforcement officials say the couple had not been suspected of posing a danger.” The Washington Post cautioned, “The incomplete picture of the attackers and their motives reflects the difficulty of detecting and preventing attacks by individuals with few or no substantial connections to overseas terrorist investigations.”
The Daily News, whose front page hysterically likened Farook to National Rifle Association chief Wayne LaPierre, said, “Police are still searching for a motive.” In his weekly radio address, broadcast the morning of December 5, President Obama raised the possibility “that these two attackers were radicalized to commit this act of terror,” but also warned his audience that investigators were still “working to get a full picture” of the attackers’ “motives.”
What details remained to be filled in? An Islamist doesn’t require a motive to attack. The ideology of Islamism is the motive. Searching for a Law and Order–style grievance behind the activities of the Islamic State and its global network of supporters is like asking what motivated the Nazis to barbarism. We know what motivated the Nazis: The fascist belief system of Nazism. It’s the same with Islamism.
When President Obama addressed the nation on Sunday, December 6, he said, finally, that what happened in San Bernardino was “an act of terrorism, designed to kill innocent people.” But he continued in his rush to non-judgment, saying, “So far, we have no evidence that the killers were directed by a terrorist organization overseas, or that they were part of a broader conspiracy here at home.”
Why say such a thing if you’re not sure it’s true? Events had already proven wrong some liberal assumptions about San Bernardino—that it was related to the abortion debate, that we didn’t know the motive, that Farook was a “normal guy” who had been “living the American dream.” Is it so out of the realm of possibility that the Valley jihadists were part of a larger terrorist cell? Simply during the hours I’ve spent writing this piece in early December, CNN has reported that Farook is suspected of planning a 2012 attack with someone other than Malik, and Fox has reported that the FBI is looking into the possibility that Malik came to this country as “an operative.” They don’t mean an operative for a political campaign. What will have been revealed about the killers by the time you finish this sentence?
Obviously the president does not want to get ahead of his skis and say something that turns out to be incorrect. But if I had to identify the motive, so to speak, for his eagerness to associate the attack with other highly publicized examples of gun violence, and his reluctance to identify the ideological motives of the San Bernardino terrorists, and the media’s complicity in both of these tasks, I would point to something more significant than bureaucratic self-preservation.
Admitting that Farook and Malik were motivated by Islamism, and were far from alone in their sympathies, would be to acknowledge, however subtly, that President Obama’s counterterrorism strategy has failed spectacularly—and that the “grievance” theory of terrorism, in which the killers are motivated by more prosaic demands than global Islamic conquest, is seriously flawed.
Identifying the culprits and their theological-political philosophy, and recognizing that like a religious conversion “radicalization” is something done in the company of others and with the ministration of institutions, would be to recognize that the president and his successor and our society at large face some very “hard choices” indeed. And it is precisely this sort of recognition, of course, that is the one thing the left cannot seem to accept.