Commentary Magazine


Posts For: January 2007

All the News That’s Fit to Print?

Despite warnings that it is damaging national security, despite the prospect that it is inviting an unprecedented prosecution under the espionage statutes barring communication of national-defense information, the New York Times presses ahead in its campaign to place our country’s most highly classified military, counterterrorism, and diplomatic secrets on its front page. The string of extremely sensitive leaked information making it into the paper was extended recently when a memorandum by National Security Adviser Stephen Hadley, summarizing difficulties the U.S. faces with Iraq’s prime minister, appeared on page one.

But while avidly disclosing U.S. secrets, how does the Times report on intelligence operations directed against the United States by foreign powers?

Back in June, a Defense Intelligence Agency (DIA) analyst by the name of Ronald Montaperto was convicted on espionage charges. According to the assistant U.S. attorney prosecuting the case, Montaperto had held 60 meetings with Chinese military intelligence officers over two decades and provided them with information bearing “secret” and “top-secret” designations. Despite the gravity of the offense, Montaperto was sentenced to only three months in jail. This stands in striking contrast to other well-known cases. Jonathan Pollard, who passed information to Israel in the 1980’s, is serving out a life sentence. Last January, Larry Franklin, a Defense Department desk officer, was sentenced to twelve years in prison for mishandling classified documents and passing sensitive national-defense information to employees of AIPAC, the American Israel Public Affairs Committee.

There are many mysteries here. One of them is why Montaperto got only a slap on the wrist. One answer is that unlike Pollard and Franklin, he was not a nobody or an outsider but a creature of the establishment. In addition to his work for the DIA, he helped to produce a Council on Foreign Relations study on Chinese nuclear weapons and had many friends in the fraternity of China experts, both in and out of government. The federal judge in the case evidently reduced his sentence on the strength of numerous letters he received from Montaperto’s former colleagues. One of those letters came from the current deputy national-intelligence officer for East Asia, Lonnie Henley. Yesterday came word, from Bill Gertz of the Washington Times, that several months ago Henley received a formal reprimand for writing it.

Another even more intriguing mystery is why, even as the New York Times feels free to compromise one classified program after another, it has kept readers in the dark about the Montaperto matter and Henley’s intervention. The story is already beginning to age (Montaperto will be getting out of prison next month), yet his name has not been so much as mentioned in our newspaper of record. One explanation for this silence, easy to demonstrate from the editors’ own behavior, is that they do not think the loss of governmental secrets (with the single revealing exception of the leak of Valerie Plame’s CIA affiliation) is of any consequence to national security. It is thanks only to the dogged reporting of Bill Gertz, who has himself been known to publish highly sensitive governmental secrets, that the public is aware of these cases at all.

To find out about A & O (admission and orientation) programs for a federal prisoner like Ronald N. Montaperto, inmate number 71342-083, click here.

Carter’s Lies

“This is the first time I’ve ever been called a liar,” said former President Jimmy Carter during his much-ballyhooed foray into the lion’s den of Brandeis University this week.

This, of course, is a lie.

The most talked-about article to appear during the 1976 presidential campaign was “Jimmy Carter’s Pathetic Lies.” Published in Harper’s, it contained Steven Brill’s account of several days spent accompanying Carter on the campaign trail, in the course of which Brill discovered that most of what Carter told audiences about himself was simply false.

Invariably Carter introduced himself as a “nuclear physicist and a peanut farmer.” He was neither: he held only a bachelor’s degree, and he owned a peanut warehouse. He invited listeners to write to him. “Just put ‘Jimmy Carter, Plains, Georgia’ on the envelope, and I’ll get it. I open every letter myself and read them all.” But Carter’s press secretary admitted to Brill that all mail so addressed was forwarded to the campaign staff in Atlanta. Carter boasted that at the completion of his term as governor he had left Georgia with a budget surplus of $200 million, but Brill discovered that the true amount was $43 million, which was all that remained of a $91 million surplus Carter had inherited when he took office. Carter described an innovative program he had pioneered, employing welfare mothers to care for the mentally handicapped. “You should see them bathing and feeding the retarded children. They’re the best workers we have in the state government,” he enthused to audiences. But there was no way to see them–they did not, in fact, exist, as Brill learned from state officials. “I guess he was mistaken,” conceded Carter’s press secretary. Brill’s piece contained much more in this vein.

In Palestine: Peace Not Apartheid, his recent book on the Middle East, Carter repeats the egregious lie that he set out twenty-odd years ago in his previous book on the subject–namely, that in the 1967 war Israel preemptively attacked Jordan. This is no small matter: it was from Jordan that Israel took the West Bank, the focus of most of today’s controversy. But the record is abundantly clear that while preemptively attacking Egypt and Syria, Israel pleaded with Jordan (through American intermediaries) to stay out of the fray. King Hussein felt he could not do that, so he ordered his forces to attack Israel, and the Israelis fought back. Both Carter’s old book and the new one are replete with countless other outright lies as well as less outright ones (as others have pointed out).

Whatever the subject, Carter makes a specialty of exploiting grammatical ambiguities to leave listeners or readers with the impression that he has said one thing, while a precise examination of his words shows them to mean something else. In a 2003 op-ed in USA Today on the North Korean nuclear crisis, he wrote: “There must be verifiable assurances that prevent North Korea from becoming a threatening nuclear power.” The average reader might think that the word “threatening” is merely descriptive. But, in fact, Carter had fought to allow Pyongyang to have some nuclear weapons, because he believed that was the price of an agreement. The word “threatening,” as Carter used it, actually meant that North Korea could have some nuclear weapons–but not so many as to be “threatening.”

This raises an obvious question: how many nukes, exactly, would that be? Carter hasn’t told us yet, but if he ever does, make sure to read his words carefully.

Too Many Hats in the Ring

In a recent column, satirist Andy Borowitz suggests that in 2008, there will be more presidential candidates than there are voters:

With politicians throwing their hats in the ring at a torrid pace, by November 2008, one out of every two Americans is expected to be running for the nation’s highest office—an extraordinary figure by any measure.

Why so many candidates? Because the barriers to entry are so low and the psychic rewards so great. Today the presidential campaign has become a kind of Davos for the political set: a seemingly endless opportunity for opining on energy, education, and health care, pontificating about the future, rubbing elbows with high-profile journalists, and being taken very, very seriously. No other avenue of American life grants so much attention and national exposure to individuals of such modest accomplishments. How else can one explain the presidential campaigns of Congressman Duncan Hunter, Iowa Governor Tom Vilsack, or former Virginia Governor Jim Gilmore?

All this places the media in a dilemma: how can they cover so many candidates without appearing biased? Because they fear being accused of pre-emptively anointing a front-runner, the media use a spurious evenhandedness in discussing the growing roster of aspirants. As the passing weeks have launched the presidential ambitions of one mediocre pol after another, one wonders whether each will be accorded the full road-to-the-White-House treatment: extended excerpts of his speeches on the NewsHour with Jim Lehrer; a one-on-one interview with Marvin Kalb at the Kennedy School; cinéma vérité footage of his New Hampshire town meetings on C-SPAN, etc.

While editorial writers are loath to admit it, there is, in the end, only one way to separate the presidential wheat from the chaff: fundraising. Asking someone for $2,000 to support your candidacy—or, more accurately, asking someone to find 20 such donors—is still the best test of a candidate’s national viability. This point seems to be utterly lost on those public watchdogs who insist that there is too much money in our campaigns. Fred Wertheimer, the longtime president of the campaign-finance watchdog Common Cause and now the founder and president of Democracy 21, held a press conference on Wednesday to bemoan the fact that Hillary Clinton may forgo public funding of her campaign. Public funding, Wertheimer contends, gives “serious candidates” a chance to be heard.

Yet surely any “serious” candidate ought to be interesting enough to attract serious money, or at least enough to mount a competitive campaign. The alternative is to rely on public financing, the favorite hobby horse of Wertheimer, former Presidential candidate Bill Bradley, the New York Times, the Center for Responsive Politics, and many other self-appointed guardians of good government. It is remarkable that this argument can still be made with a straight face: do we really want a taxpayer-funded system that enables and indeed fosters the narcissistic electoral pursuits of Dennis Kucinich?

Antique Courage

I love old books and I (mostly) like the gentle souls who sell them. Yet an elderly, convalescent antiquarian bookseller seems an improbable hero in the war on terror. Arthur Burton-Garnett deserves a medal for giving chase to a suicide bomber who had just failed to kill him and his fellow-passengers on a London subway train.

This is one of several extraordinary stories to emerge from the trial of six men who are accused of attempting a repetition of the 7/7 London bombings two weeks later, on July 21, 2005. According to the prosecution, Ramzi Mohammed tried to detonate his bomb as the train travelled between Stockwell and Oval stations just south of the River Thames.

Mohammed is alleged to have turned so that his backpack, containing a home-made bomb, pointed towards a young mother, Nadia Baro, with her nine-month-old baby in a buggy. He then set off his device, but only the detonator exploded, sounding like a large firecracker. Most of the passengers fled to the next carriage, but Mrs. Baro was left behind. A middle-aged off-duty fireman, Angus Campbell, helped her to get away. As the train drew into Oval station, he told the driver on the intercom: “Don’t open the doors!” Even though this was only two weeks after the carnage of 7/7, the driver ignored this request, and Mohammed ran out of the train. A retired engineer, George Brawley, tried to grab him, but he broke free.

At this point, Mr. Burton-Garnett decided to give chase. Though the fugitive was a third of his age and highly dangerous, the unarmed septuagenarian bookseller was fearless. He ran up the escalator after Mohammed, shouting: “Stop that man! Get the police!” In his own words, Mr. Burton-Garnett “tore after him but he was about nine or ten stair treads ahead of me. Halfway up I sort of ran out of steam. I was just recovering from a gall-bladder operation, otherwise I think I might have been a bit faster.”

What would have happened if he had caught Mohammed probably didn’t occur to him. It is surely significant that a man of Mr. Burton-Garnett’s age and health would be so careless of his own safety. Mr. Brawley and Mr. Campbell were also older men. Younger people are much less likely to feel an obligation to intervene in such situations, having been warned by the police and brought up by their parents not to do so. They learn that the “streetwise” thing to do if they see a crime being committed is to run away. I do not wish to disparage our youth: after all, the majority of troops in Iraq and Afghanistan are in their teens or twenties. And plenty of young civilians are not afraid to have a go at criminals and terrorists. But in doing so they go against the grain of an overprotective culture.

The inspiring message of the passengers on Flight 93, who prevented an even worse catastrophe on 9/11, is that a war in which the suicide bomber is a key weapon can only be won if civilians defy regulations and rely on their own initiative.

Next time I open an old book, I shall think of Mr. Burton-Garnett. He may belong in the gallery of English eccentrics, but he is a hero nevertheless. Where manliness is mocked and cowardice is institutionalized, you need to be eccentric to be brave. There is something both comic and moving about the image of an erudite gentleman, more accustomed to leafing through old folios, in hot pursuit of an alleged suicide bomber who thought nothing of killing a mother and child in cold blood.

Citizen Hussein at the Gallows

Much has been said (including by contentions blogger Emanuele Ottolenghi) about the execution of Saddam Hussein, including vocal criticism of the widely disseminated photographs and videos of the event. These images are indeed disquieting—and not merely because of the noose and the hooded executioners they show in such detail. Their uncomfortable closeness to the proceedings themselves, placing us upon the scaffold rather than at a safe remove, also disturbs the viewer. When news got out that vicious taunting accompanied the hanging, it seemed oddly understandable: although despicable, these taunts were the only note of human feeling in the grim efficiency of the proceeding.

The remorselessly bureaucratic character of Hussein’s hanging, conducted behind closed doors and without ceremony, was presumably intended to avoid creating a cult of martyrdom around him. There is a long history of such attempts at avoidance, as is shown in a recent essay by the art historian Samuel Edgerton, entitled “When Even Artists Encouraged the Death Penalty.”

When Charles I was executed in England in 1649, the proceedings were carefully choreographed. The method of execution then current in England was a medieval one: a nobleman knelt upright and was dispatched by an executioner standing behind him, allowing him to die in an attitude of dignified prayer. But Charles was made to place his head upon a block, which had been whittled down so low that the king had to prostrate himself, removing him from the sight of the volatile populace of London. Not until woodcuts of the scene were rushed into distribution did the public know how the king had comported himself.

The public execution of France’s Louis XVI, although savage by modern standards, prefigures the bureaucratic capital punishment of modern times. The guillotine was created as a rational form of execution, meant to be as painless as possible, and to seem impersonal. In its mechanical regularity and looming presence the public was meant to see the rationality and power of the nascent modern state. And so the vanquished king was executed as Citizen Capet*, a public demonstration of the final subjugation of all to the state.

Our era is more squeamish about such displays: think of the hygienic, secretive conditions under which criminals are executed in this country. But Hussein was no mere criminal; he was a genocidal tyrant, and his execution deserves a place in public memory, even at the risk of making a martyr of him. If the images of Hussein’s grim, efficient execution are disturbing, we might consider how disturbing it would be if there were no images at all.

* Correction: “Citizen Capet” originally read “Citizen Capulet.” 

Common Ground with Syria?

On the heels of my last post about Israeli-Syrian negotiations comes Michael Oren’s op-ed in the January 24 New York Times, “What if Israel and Syria Find Common Ground?” In this opinion piece, Oren—author of the best-selling Six Days of War (reviewed in COMMENTARY by Victor Davis Hanson) and the new Power, Faith, and Fantasy: America in the Middle East, 1776 to the Present (reviewed in January’s COMMENTARY by me)—recommends that Israel make peace with Syria even if this means “forfeiting the Golan Heights” and initiating “a clash between Israel and Washington.”

Frankly, Oren, who has always been something of a hawk when it comes to Israeli-Arab relations, startles me. The surprise lies not so much in his readiness for Israel to “clash” with Washington, even though this is nothing to be made light of. Rather, it lies in his endorsing the argument that it is worth giving in to Syrian demands on the Golan because, as he put it in the Times, this would “invariably provide for the cessation of Syrian aid to Hamas and Hizballah.” More “crucial still,” he writes, “by detaching Syria from Iran’s orbit,” such a concession would enable Israel to “address the Iranian nuclear threat—perhaps by military means—without fear of retribution from Syrian ground forces and missiles.”

Let’s assume for the moment that Oren is right and that Syria can be bribed into ditching Hizballah, Iran, and Hamas by giving it back the Golan. Does this mean that the Israeli air force can then attack Iran’s nuclear installations with impunity? Hardly. Even if Israel does not have to worry about Syrian missiles and ground forces, it will still have to worry about Hizballah and Iranian missiles, as well as about the possible failure of its air attack, not to mention strongly condemnatory international reaction. And what if the United States attacks Iran first, in which case Syria would be highly unlikely to get involved even without the gift of the Golan? And how does Oren know how Syria will behave once it has the Golan back and is sitting on the Sea of Galilee and the cliffs overlooking northern Israel, or what unexpected political developments in Syria (or elsewhere in the Middle East) may take place five or ten years from now, or how the message that Israel is ready to cede territory for short-term gains will be interpreted by the Palestinians and the Arab world?

Land is an unchanging asset; it never loses its value. Political developments are contingent and unpredictable. To give up the unchanging for the contingent and the certain for the unpredictable is never a good idea, quite apart from the strong historical, legal, and moral claim that Israel has on the Golan. It’s not fear of clashing with Washington that should keep it from surrendering the Heights, but fear of compromising its own most vital interests.

It’s a Lemann

The Scooter Libby case is very complicated. Nicholas Lemann, the dean of the Columbia University School of Journalism, has now offered a brief account of its origins in the New Yorker that makes it even more so.

Lemann explains that during the run-up to the second Gulf war, the White House, in the grip of an “obsession with finding hard evidence for what it already believes,” came up dry in its search for weapons of mass destruction in Iraq and thereafter “the search had to be conducted with a little more creativity.” Toward that end, writes Lemann,

the White House dispatched former Ambassador Joseph Wilson to Niger, in February of 2002, to find proof that the country had shipped yellowcake uranium to Iraq. Wilson not only came up empty-handed; he said so publicly, in a Times op-ed piece that he published five months later. The administration then went on another search for evidence—the kind that could be used to discredit Wilson—and began disseminating it, off the record, to a few trusted reporters.

The origins of Wilson’s trips to Niger were examined exhaustively in 2004 by the Senate Intelligence Committee in its report on the “U.S. Intelligence Community’s Prewar Intelligence Assessments on Iraq.” Although parts of the report remain classified, the unclassified sections are quite plain. They state that interviews and documents provided to the Committee by officials of the CIA’s Counterproliferation Division (CPD)

indicate that [Wilson’s] wife, a CPD employee, suggested his name for the trip. The CPD reports officer told Committee staff that the former ambassador’s wife “offered up his name” and a memorandum to the Deputy Chief of the CPD on February 12, 2002, from the former ambassador’s wife says, “my husband has good relations with both the PM [prime minister] and the former Minister of Mines (not to mention lots of French contacts), both of whom could possibly shed light on this sort of activity.” This was just one day before CPD sent a cable [DELETED] requesting concurrence with CPD’s idea to send the former ambassador to Niger. . . . The former ambassador’s wife told Committee staff that when CPD decided it would like to send the former ambassador to Niger, she approached her husband on behalf of the CIA.

The report goes on to make clear that the White House was completely in the dark about the CIA plan. At no point did it intervene to send Wilson anywhere or even have knowledge that a mission to Niger by the former ambassador was under way. Even Patrick Fitzgerald’s indictment of Libby confirms this, stating unequivocally that “the CIA decided on its own initiative to send Wilson to the country of Niger to investigate allegations involving Iraqi efforts to acquire uranium yellowcake.”

Lemann concludes that the “problem with the Bush administration is not that it is uninterested in hard facts” but resides rather in “the way in which the administration goes about marshalling those facts.”

But what exactly are the facts and with what kind of care, to turn things around, has Lemann himself marshaled them? It will be a most interesting twist if Lemann, or the New Yorker’s highly vaunted fact checkers, have information contradicting the Senate report and Fitzgerald’s indictment on this central point. My bet is that they do not. Rather, in striving to demonstrate that the Bush administration was in the grip of an “obsession” about weapons of mass destruction, they appear to be in the grip of an obsession of their own. Pursuing it evidently demands a bit of “creativity.”

To contribute to the considerable costs of defending Scooter Libby, send a check to:

Libby Legal Defense Trust
2100 M Street, NW Suite 170-362
Washington, DC 20037-1233 

To contribute to the even more considerable costs of running the Columbia University School of Journalism, send a check to:

The Columbia University School of Journalism
2950 Broadway
New York, NY 10027

State of the Union (and of Bush)

For once, the mainstream media’s incessant badmouthing of George W. Bush worked for him. In the day-long lead-up to his State of the Union address, news-readers previewed the speech in tones usually reserved for a crashing market or a dying patient. “Bush’s popularity is at an all-time low.” “Bush’s approval ratings are as low as Gerald Ford’s.” “[They’re] approaching Nixonian levels.”

But at the crucial moment, in a hostile chamber, the President delivered a crisp speech in a strong voice, with no fumbling or smirking in sight. Instead of the traditional laundry list of departmental initiatives, Bush limited his domestic policy projects to a solid, attainable few. He knows that the electorate and the new Democratic majority want more action in the domestic arena. Balancing the budget, cutting earmarks, cleaning up entitlements, moving toward energy independence—all of these ideas are non-controversial. His health-care proposals will be going nowhere soon (too much opposition from organized labor, which stands to lose from his plan to tax especially generous health benefits). But immigration, on which the President called for a discussion “without animosity or amnesty,” is a good bet for quick action. Now that he will be working with a Democratic majority that shares many of his views on the issue, GOP support will depend on details.

But the Bush legacy does not depend on the details of health-care policy. It will live or die by our success or failure in Iraq and in the war on terror and Islamofascism. Last night’s speech smartly separated these two struggles, and did an excellent job of mapping out the many, various threats from different branches of Islam.

Discussing Iraq, the President listed the successes of 2005, and acknowledged our enemy’s response in 2006. He memorably said, “Whatever you voted for, you did not vote for failure.” Still, when he spoke of turning our efforts “toward victory,” the Democrats (and a few Republicans) neither applauded nor rose.

George W. Bush knows that the struggle he is waging is not a popularity contest. It is a contest of will, force, and American credibility as the world’s leader. Last night’s speech signaled that he is again in fighting form.

Oren’s Error

In this morning’s New York Times, Michael B. Oren remarks, in an op-ed entitled “What if Israel and Syria Find Common Ground?,” that in the late 1990’s

American leaders agreed that the Syrian-Israeli track offered a promising alternative to the perennially stalled Israeli-Palestinian talks, and that achieving peace between the Syrian and Israeli enemies would open the door to regional reconciliation.

All that was before Sept. 11, however, and Syria’s inclusion, alongside Iran and North Korea, in President Bush’s “axis of evil.” Once regarded as a possible partner in a Middle East peace process, the Baathist regime of Bashar al-Assad was suddenly viewed as a source of Middle East instability, a state sponsor of terrorist groups and an implacable foe of the United States.

Whatever the analytical virtues of Oren’s op-ed may be, the fact is that George W. Bush did not “include” Syria in the axis of evil, as the text of his speech shows:

[Our goal] is to prevent regimes that sponsor terror from threatening America or our friends and allies with weapons of mass destruction. Some of these regimes have been pretty quiet since September the 11th. But we know their true nature. North Korea is a regime arming with missiles and weapons of mass destruction, while starving its citizens.

Iran aggressively pursues these weapons and exports terror, while an unelected few repress the Iranian people’s hope for freedom.

Iraq continues to flaunt its hostility toward America and to support terror….

States like these, and their terrorist allies, constitute an axis of evil, arming to threaten the peace of the world. By seeking weapons of mass destruction, these regimes pose a grave and growing danger. They could provide these arms to terrorists, giving them the means to match their hatred. They could attack our allies or attempt to blackmail the United States. In any of these cases, the price of indifference would be catastrophic.

In any case, we’re sure our readers will be interested in Hillel Halkin’s very different take on the Golan issue, below.

Law and Order

Did Scooter Libby write a letter to the then-imprisoned New York Times reporter Judith Miller containing a coded hint that she should back up his story in court? So asks New York Times reporter Neil Lewis, referring to a curious passage in a missive Libby mailed to Miller in September 2005, ostensibly releasing her from her pledge not to reveal him as her source for the identity of CIA operative Valerie Plame Wilson, but possibly suggesting something else entirely. “Out West, where you vacation, the aspens will already be turning. They turn in clusters because their roots connect them,” Libby wrote.

Asks Lewis: “Was that phrase a simple attempt at a literary turn? Or was it a veiled plea for Ms. Miller to ‘turn’ with him and back up Mr. Libby’s account that he had not disclosed Ms. Wilson’s identity to her?”

The trial of Scooter Libby on perjury and obstruction-of-justice charges, now entering its second week, may not clear up the mystery of the clustered aspens, if it is a mystery at all. And it may not clear up the new mystery, raised yesterday by Libby’s crack defense team, of whether their client was being sacrificed by White House operatives to protect Karl Rove. But the investigation of Libby and the Plame leak has gone a considerable distance toward resolving another conundrum that has bedeviled our legal system for decades: namely, whether newsmen are above the law. When the Supreme Court refused to hear Judith Miller’s appeal of her imprisonment on contempt charges, it stood by its own precedent, set in 1972 in Branzburg v. Hayes, that journalists, like all other citizens, are obliged to testify before grand juries regarding potentially criminal activities, including the criminal activities of their confidential sources. The “public . . . has a right to every man’s evidence,” ruled the Court.

A coalition of First Amendment activists and journalism associations is now lobbying Congress to overturn the Supreme Court’s decision by passing legislation that would create a “reporter’s privilege.” With the Democrats in power in Congress, the prospects for success are now better than they have been for a generation. But at a moment when the country is facing mortal threats from Islamic fanatics and the press has been publishing counterterrorism secrets with reckless abandon, we need a reporter’s privilege as badly as the New York Times needs another Jayson Blair. As I argue in the February issue of COMMENTARY, such a law would manage to damage our national security and do violence to the First Amendment in one fell swoop.

To help defray the considerable costs of defending Scooter Libby, send a check payable to:

Libby Legal Defense Trust
2100 M Street, NW Suite 170-362
Washington, DC 20037-1233

To help defray the even more considerable costs of prosecuting Scooter Libby, send a check payable to:

Gifts to the United States
U.S. Department of the Treasury
Credit Accounting Branch
3700 East-West Highway, Room 6D37
Hyattsville, MD 20782

Backroom Dealing on the Golan

Whether the back-channel “Israeli-Syrian negotiations” revealed last week (and expanded upon Sunday) by the Hebrew daily Haaretz were really government-level talks, as the newspaper claimed, or simply an exchange between two private individuals, ex-director general of Israel’s foreign ministry Alon Liel and the Washington-based Syrian businessman Ibrahim Suleiman, is largely an artificial question. Both governments knew of the talks, which reportedly involved an offer on Liel’s part of a complete Israeli withdrawal from the Golan Heights to the pre-Six-Day War lines of June 4, 1967, and both, even if they took no active role in them, could have put a stop to them had they wanted to. They didn’t. Governments that encourage such unofficial mediation are not necessarily committed to its results, but neither are they uninterested in them.

That a government of Israel would consider, as several Israeli governments have done, a withdrawal from the entire Golan in return for a peace agreement with Syria that may or may not be honored in the long run is all but incomprehensible. That an Israeli government would consider withdrawing to the lines of June 4, 1967, at which time the Syrian army was illegally occupying several dozen square kilometers of territory along the Sea of Galilee and the Jordan River that were officially part of Israel, is wholly incomprehensible.

There are many excellent reasons why Israel should never cede the whole Golan to Syria—military factors, water rights, tourism, national pride, the untrustworthiness of Syrian intentions, the unpredictability of Syrian politics, and the Golan’s having been officially annexed by Israel in 1981, thus making it as much a part of the country as is Tel Aviv or Jerusalem. But of all possible reasons, none would be so absurd to overlook as the fact that, by repeatedly demanding an Israeli withdrawal to the June 4 lines, Syria has also repeatedly repudiated the 1923 border between it and Palestine drawn by the then-occupying colonial powers of France and England—the only Israeli-Syrian frontier ever recognized by international law. That a succession of Israeli governments has nevertheless continued to regard this border as a starting point for negotiations with Syria instead of trumpeting Syria’s own, repeated repudiation of it is, to my mind, one of the greatest stupidities of Israeli diplomacy.

The New Anti-Islamist Intelligentsia

Yesterday Michael Gove, a Tory member of Parliament and the author of Celsius 7/7, a hard-hitting study of the London subway bombers, asked an audience of the New Culture Forum a highly pertinent question: “Are we seeing the emergence of a new anti-Islamist intelligentsia?”

Gove answered his own question emphatically in the affirmative, and provided chapter and verse, too. What adds luster to his thesis is the remarkable fact that the most prominent voices now being heard in protest against the scandalous alliance of the Left with Islamo-fascism are themselves for the most part intellectuals with impeccable Left-liberal credentials. Gove singled out the journalists Nick Cohen (whose book What’s Left? How Liberals Lost Their Way chronicles the Left’s great self-betrayal), David Aaronovitch (who defected from the Guardian to the Times of London), and Christopher Hitchens, who needs no introduction for American readers. Nick Cohen is also a leading light among the group of liberal academics and writers who last year signed the Euston Manifesto, distancing themselves from the Leftist consensus.

Most remarkable of all, three of the most celebrated British novelists—Salman Rushdie, Ian McEwan, and Martin Amis—have all come out strongly against Islamism. Amis even describes himself as an “Islamismophobe,” but the real objects of his hatred are the “middle-class white demonstrators last August waddling around under placards saying ‘We Are All Hizbollah Now.’” As he observes, “People of liberal sympathies, stupefied by relativism, have become the apologists for a creedal wave that is racist, misogynist, homophobic, imperialist, and genocidal. To put it another way, they are up the arse of those that want them dead.”

All of these prodigal sons are more than welcome in their return to what those who have always defended it fondly persist in calling Western civilization. Like many others, I have not forgotten Martin Amis’s essay “Fear and Loathing,” published in the Guardian a week after 9/11, in which he wrote: “The message of September 11 ran as follows: America, it is time you learned how implacably you are hated. . . . We would hope that the response will be, above all, non-escalatory.” He and his intellectual compatriots have come a long way since then—at least on seeing the threat of radical Islam for what it is.

Bush’s Health-care Vision

This year’s batch of controlled leaks building up to the State of the Union address has included a lot of talk about health-care proposals. It looks as if the President will offer two new health-care ideas in the speech: one involving a reform of the tax code and the other supporting state efforts to help the uninsured get private health insurance.

The President seems set to propose replacing the long-standing system of tax exemption on employer-purchased health insurance. This system makes it more expensive for Americans not covered through their job to get health insurance on their own and creates an incentive for employer-based plans to grow ever more costly (as Eric Cohen and I point out in the February issue of COMMENTARY). The President wants to put in its place a standard deduction for health insurance of $15,000 for families and $7,500 for individuals.

Anyone who has private health insurance, regardless of how it was purchased, would qualify under this plan. About 80 percent of workers who are now covered at work have plans that cost less than $15,000 and so would see their taxes go down or stay the same under this proposal, but the other 20 percent of those with employer-based coverage would end up paying more taxes. The money brought in by this measure would help cover the cost of allowing families who buy their coverage themselves to get in on the tax deduction. This would lower their health costs, and would be a major incentive for the uninsured who can afford it to purchase their own coverage.
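
To make the arithmetic concrete, here is a minimal sketch of how the proposed deduction would change a family’s tax bill relative to today’s open-ended exclusion. The 25-percent marginal tax rate and the sample plan costs are assumptions for illustration, not figures from the proposal.

```python
# Illustrative sketch of the proposed standard deduction for family coverage.
# Assumptions (not from the proposal): a 25 percent combined marginal tax rate
# and the sample plan costs below. Under the proposal, the employer-paid
# premium becomes taxable income, offset by a flat $15,000 family deduction.

FAMILY_DEDUCTION = 15_000
MARGINAL_RATE = 0.25  # assumed for illustration

def tax_change(plan_cost: float) -> float:
    """Approximate change in the annual tax bill versus today's exclusion.

    Negative means taxes go down; positive means they go up.
    """
    return (plan_cost - FAMILY_DEDUCTION) * MARGINAL_RATE

for cost in (9_000, 12_000, 15_000, 18_000):
    print(f"${cost:>6,} plan: tax change {tax_change(cost):+,.0f}")
```

On these assumed numbers, a $12,000 plan yields a tax cut of $750 and an $18,000 plan a tax increase of the same size, which is the logic behind the 80/20 split described above.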

The trouble is that those 20 percent are not just fat-cat CEO’s with extravagant health plans, but also some unionized workers, whose unions have negotiated particularly good coverage. The new Democratic majority in Congress is very unlikely to stand for a tax increase on its union constituency. And many Democrats also fear such proposals would reduce the pressure for a government-run system, their preferred health-care solution.

In the face of such opposition, the second proposal under discussion may be both more significant and more realistic. The President apparently intends to propose means of helping states turn the Medicaid funds they now use to pay hospitals for caring for the uninsured into direct assistance to uninsured individuals to buy their own private health insurance. Eric and I lay out the benefits of such an approach in our article, but we also point out that neither the Left nor the Right wants to discuss the real looming fiscal crisis in health care: the costs of care for older Americans. If what we see in the papers is right, that won’t change this year.

From COMMENTARY: Health Care in Three Acts

As President Bush prepares to address the issue of health care in his State of the Union address, COMMENTARY is fortunate to have a trenchant analysis of the wider problem, “Health Care in Three Acts,” by Eric Cohen and Yuval Levin, coming out in the February issue. Here is an advance look.

Americans say they are very worried about health care: on generic lists of voter concerns, health issues regularly rank just behind terrorism and the Iraq war. And politicians are eager to do something about it. To empower consumers, the White House has advanced the idea of Health Savings Accounts; to help the uninsured, it has explored using Medicaid more creatively. Senator Edward Kennedy of Massachusetts, the Democrats’ leader on this issue, has backed “Medicare for all.” The American Medical Association has called for tax credits to put private coverage within reach of more Americans. A number of recent books have proposed solutions to our health-care problems ranging from socialized medicine on the Left to laissez-faire schemes of cost containment on the Right. In Washington and in the state capitals, pressure is building for serious reforms.

But what exactly are Americans worried about? Untangling that question is harder than it looks. In a 2006 poll, the Kaiser Family Foundation found that while a majority proclaimed themselves dissatisfied with both the quality and the cost of health care in general, fully 89 percent said they were satisfied with the quality of care they themselves receive. Eighty-eight percent of those with health insurance rated their coverage good or excellent—the highest approval rating since the survey began 15 years ago. A modest majority, 57 percent, were satisfied even with its cost.

Evidently, though, this widespread contentment with one’s own lot coexists with concern on two other fronts. Thus, in the very same Kaiser poll, nearly 90 percent considered the number of Americans without health insurance to be a serious or critical national problem. Similarly, a majority of those with insurance of their own fear that they will lose their coverage if they change jobs, or that, “in the next few years,” they will no longer be able to afford the coverage they have. At least as troubling is what the public does not seem terribly bothered about—namely, the dilemmas of end-of-life care in a rapidly aging society and the exploding costs of Medicare as the baby-boom generation hits age sixty-five.

All of this makes it difficult to speak of health care as a single coherent challenge, let alone to propose a single workable solution. In fact, America faces three fairly distinct predicaments, affecting three fairly distinct portions of the population—the poor, the middle class, and the elderly—and each of them calls for a distinct approach.

For the poor, the problem is affording coverage. Forty-six million Americans were uninsured in 2005, according to the Census Bureau. This is about 15.9 percent of the population, which has been the general range now for more than a decade, peaking at 16.3 percent in 1998.

But that stark figure fails to convey the shifting face and varied make-up of the uninsured. On average, a family that loses its coverage will become insured again in about five months, and only one-sixth of the uninsured lack coverage for two years or more. In addition, about a fifth of the uninsured are not American citizens, and therefore could not readily benefit from most proposed reforms. Roughly a third of the uninsured are eligible for public-assistance programs (especially Medicaid) but have not signed up, while another fifth (many of them young adults, under thirty-five) earn more than $50,000 a year but choose not to buy coverage.

It is also crucial to distinguish between a lack of insurance coverage and a lack of health care. American hospitals cannot refuse patients in need who are without insurance; roughly $100 billion is spent annually on care for such patients, above and beyond state and federal spending on Medicaid. The trouble is that most of this is emergency care, which includes both acute situations that might have been prevented and minor problems that could have been treated in a doctor’s office for considerably less money. The real problem of the uninsured poor, then, is not that they are going without care, but that their lack of regular and reliable coverage works greatly to the detriment of their family stability and physical well-being, and is also costly to government.

For the middle class, the problem is different: the uncertainty caused in part by the rigid link between insurance and employment and in part by the vicissitudes of health itself. America’s employment-based insurance system is unique in the world, a product of historical circumstances and incremental reforms that have made health care an element of compensation for work rather than either a simple marketplace commodity or a government entitlement. This system now covers roughly 180 million Americans. It works well for the vast majority of them, but the link it creates between one’s job and one’s health coverage, and the peculiar economic inefficiencies it yields, result in ever-mounting costs for employers and, in an age of high job mobility, leave many families anxious about future coverage even in good times.

The old, finally, face yet another set of problems: the steep cost of increasingly advanced care (which threatens to paralyze the government) and the painful decisions that come at the limits of medicine and the end of life. Every American over sixty-five is eligible for at least some coverage by the federal Medicare program, which pays much of the cost of most hospital stays, physician visits, laboratory services, diagnostic tests, outpatient services, and, as of 2006, prescription drugs. Established in 1965, Medicare is funded in part by a flat payroll tax of 2.9 percent on nearly every American worker and, beyond that, by general federal revenue. Most recipients pay only a monthly premium that now stands at $88.50, plus co-payments on many procedures and hospital stays.

But precisely because Medicare is largely funded by a payroll tax, it suffers acutely from the problems of an aging society. In 1950, just over 8 percent of Americans were over sixty-five. Today that figure stands at nearly 15 percent, and by 2030 it is expected to reach over 20 percent, or 71 million Americans. Moreover, the oldest of the old, those above the age of eighty-five, who require the most intense and costly care, are now the fastest growing segment of the population; their number is expected to quadruple in the next half-century.

For Medicare, therefore, just as for Social Security, the number of recipients is increasing while the number of younger workers to pay the bills is declining. But Medicare faces a greater danger still. Its costs are a function not only of the number of eligible recipients but of the price of the services they use. Over the past few years, health-care spending in America has increased by about 8 percent each year, most steeply for older Americans who have the most serious health problems. As these costs continue to rise much faster than the wages on which Medicare’s funding is based, the program’s fiscal decline will be drastic, with commensurately drastic consequences for the federal budget.
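
A back-of-the-envelope projection shows why. The sketch below compounds the roughly 8 percent annual growth in health spending cited above against a payroll-tax base growing with wages; the 4 percent wage-growth figure is an assumption for illustration, not a number from the article.

```python
# Toy projection: health spending growing about 8 percent a year (the figure
# cited above) versus a payroll-tax base growing with wages, assumed here to
# be 4 percent a year. Both series start at 1.0; the gap is their ratio.

COST_GROWTH = 0.08
WAGE_GROWTH = 0.04  # assumed for illustration

costs = wages = 1.0
for year in range(1, 26):
    costs *= 1 + COST_GROWTH
    wages *= 1 + WAGE_GROWTH
    if year % 5 == 0:
        print(f"year {year:>2}: costs {costs:.2f}x  wage base {wages:.2f}x  "
              f"gap {costs / wages:.2f}x")
```

On those assumptions, outlays outrun the funding base by roughly two and a half times within twenty-five years.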

Three different “crises,” then, each of a different weight and character. The crisis of the uninsured, while surely a serious challenge, has often been overstated, especially on the Left, in an effort to promote more radical reforms than are necessary. The crisis of insured middle-class families has been misdiagnosed both by the Right, which sees it purely as a function of economic inefficiency, and by the Left, which sees it as an indictment of free-market medicine. And the crisis of Medicare has been vastly understated by everyone, in an effort to avoid taking the painful measures necessary to prevent catastrophe. In each case, a clearer understanding may help point the way to more reasonable reforms.

In the case of the uninsured, the best place to begin is with the solution most frequently proposed to their plight: a government-run system of health care for all Americans.

Under such a system—which exists in some form in most other industrialized democracies—the government pays everyone’s medical bills, and in many cases even owns and runs the health-care system itself. The appeal of this idea lies in its basic fairness and simplicity: everyone gets the same care, from the same source, in the same way, based purely on need. In one form or another—actual proposals have varied widely, with Hillary Clinton’s labyrinthine scheme of 1993 merely the best known of many—this “single-payer” model remains the preferred health-care solution of the American Left. But it is ill-suited to the actual problems of America’s uninsured, and adopting it would greatly exacerbate other problems as well.

Everywhere it has been tried, the single-payer model has yielded inefficient service and lower-quality care. In Britain today, more than 700,000 patients are waiting for hospital treatment. In Canada, it takes, on average, seventeen weeks to see a specialist after a referral. In Germany and France, roughly half of the men diagnosed with prostate cancer will die from the disease, while in the United States only one in five will. According to one study, 40 percent of British cancer patients in the mid-1990’s never got to see an oncologist at all.

Such dire statistics have in fact caused many Western democracies with single-payer systems to turn toward market mechanisms for relief. The Swedes have begun to privatize home care and laboratory services. Australia now offers generous tax incentives to citizens who eschew the public system for private care. To send a message to the government, the Canadian Medical Association recently elected as its president a physician who runs a private hospital in Vancouver, an enterprise that is actually illegal in Canada. “This is a country in which dogs can get a hip replacement in under a week,” the new president told a newspaper interviewer, “while humans can wait two or three years.”

Defenders of the single-payer concept often point out that, despite patient complaints about the quality of care, overall measures of health in countries with such systems are roughly equivalent to those in America. That may be so, but the chief reason lies in social and cultural factors—crime rates, diet, and so forth—that make life in many other Western nations safer and healthier than life in America, and that would not be altered by a single-payer health system. Besides, citizens in those other nations benefit enormously from medical innovations produced and made possible by America’s dynamic private market; if that market were hobbled by a European-style bureaucracy, their quality of care would suffer along with ours.

And quality of care, it is important to remember, is one thing that most Americans are happy with. Any reform that promises to replace immediate access to specialists with long waiting lines, or the freedom to choose one’s own doctor with restrictive government mandates, is certain to evoke deep hostility, and thereby to cut into public support for efforts to help the uninsured.

On this score, proponents of socialized medicine would do well to consult the cautionary example of the health-maintenance organization (HMO). HMO’s are insurers who contract directly with providers, often for a flat fee, reviewing physician referrals and medical decisions in order to prevent unnecessary procedures or expenses. By the mid-1990’s, this capacity for cost-containment had made HMO’s very attractive to policy-makers and families alike. And they delivered on their cost-cutting promise. In those years, as David Gratzer notes in his recent book The Cure (Encounter, 325 pp., $25.95), private health-care spending per capita grew by just 2 percent annually (today the figure is nearly 10 percent, though the reasons for this, as we shall see below, go beyond just the decline of HMO’s).*

But the public soon chafed under the authoritarian character of a system in which case managers were entrusted with decisions that often seemed arbitrary, while doctors resented having their medical judgment questioned by bureaucrats. Participation soon declined, and HMO’s themselves began to take on the characteristics of traditional insurance plans. By the middle of this decade, they had joined the bipartisan list of stock American villains: in the 2004 presidential campaign, President Bush accused Senator John Kerry of getting “millions from executives at HMO’s,” while Kerry pledged to “free our government from the dominance of the lobbyists, the drug industry, big oil, and HMO’s—so that we can give America back its future and its soul.”

In a single-payer government system, everything Americans dislike about HMO’s would be worse: rationing, top-down control, perverse incentives, and, for patients, very little say. As has happened in Europe, a single-payer approach would also turn health-care costs entirely into government costs, grossly distorting public spending and threatening to crowd out other important government functions. The result would be a political, fiscal, and social disaster.

There is a better way to assist the uninsured: not universal government health care but universal private insurance coverage. Such an effort could begin by identifying the populations in need. Those who are uninsured by their own choice could be offered incentives to purchase at least some minimal coverage, or be penalized for failing to do so. Those who cannot afford insurance could be given subsidies to purchase private coverage based on their level of income, and then pooled into a common group to give them some purchasing power and options. Their coverage would still not equal that available to people in the most generous employer-based plans, but it would offer reliable access to care without destroying the quality and flexibility of the American system.

Although such a plan might not be cheap, it would not be nearly so expensive or complex as a single-payer system. The money for it could be taken, in part, from Medicaid funds now used to pay doctors and hospitals for care already provided to the uninsured, with such “uncompensated-care” programs gradually transformed into a voucher system for purchasing private coverage. But though it might rely on some federal dollars, the reform itself would best be undertaken and managed at the state level. After all, health insurance is regulated by the states, Medicaid is largely managed by the states, and different states face different challenges and possess different resources.

In Massachusetts and Florida, ideas like these are already being tested, although it is too early to judge the results. The federal government can help other states try this more practical approach by clearing away regulatory obstacles and by providing incentives for experiments in creative reforms.

This brings us to the health-care anxieties of middle-class Americans. Although these concerns are in most respects much less pressing than those of the poor, they are real enough. Middle-class families are, besides, the heart and soul of America’s culture and economy, as well as the essential political force for any sober assessment and improvement of America’s health-care system.

Generally speaking, the worries expressed by these Americans stem from the peculiarities of our employer-based insurance market. It is, indeed, a very odd thing that more than 180 million Americans should be covered by insurance purchased for them by their employers. The companies we work for do not buy our food and clothing, or our car and home insurance. They pay us for our labor, and we use that money to buy what we want.

No less odd is the character of what we call health insurance. Insurance usually means coverage for extreme emergencies or losses. We expect auto insurance to kick in when our car is badly damaged in an accident, not when we need a routine oil change; homeowner’s insurance covers us after a fire, flood, or break-in, not when we need to repair the deck or unclog the gutters. But when it comes to health, we expect some element of virtually every expense to be covered, including routine doctor checkups and regular care.

America’s insurance system is largely a historical accident. During World War II, the federal government imposed wage controls on American employers. No longer able to raise salaries to compete for employees, companies turned instead to offering the lure of fringe benefits, and the era of employer-based health care was born. Thanks to a 1943 IRS ruling allowing an exemption for money spent by employers on health insurance, an enormous tax incentive was created as well. Rather than giving a portion of every dollar to the government, employees could get a full dollar’s worth of insurance through their company.

Of course, wage controls are long gone, but the system they inadvertently created, including the tax exemption, remains in place. Although this system has served most Americans very well, it has two significant drawbacks. First, by forging a tight link between one’s job and one’s health insurance, it makes losing a job, or changing jobs, a scary proposition, especially for parents. Second, it lacks any serious check on costs. Because insurance often pays the bulk of every single bill (instead of kicking in only for emergencies or extreme expenses), most American families do not know, or attend to, the actual cost of their health care.

Any car owner can tell you the price of a gallon of gas or an oil change. But what is the price of knee surgery? Or even a regular doctor’s visit? Does one hospital or doctor charge more than another? Most patients pay only a deductible that, while often not cheap, bears almost no relation to the price of the service they receive. As a result, they do not behave like consumers, shopping for the best price and thereby forcing providers to compete for their dollar.

Inured to such issues, families worry most about the lack of portability of their insurance, leaving it to economists to worry about the distorting effects of price inefficiencies. To gain the support of middle-class parents, any reform to the system would therefore need to address the former issue first.

Policy-makers on the Left have tended to understand this, but have over-read the anxiety of families, seeing it as a broad indictment of America’s free-market health care. They have thus offered the same bad solution to the problems of the insured as they do to the problems of the uninsured: a government-run system that will replace our present one. As for conservative policy-makers, they sometimes tend to overlook the concerns of middle-class families altogether, focusing on inefficiency before portability.

The conservative health-care solution of the moment is the health savings account, or HSA. It has two components: a savings account to which individuals and employers can make tax-free contributions to be drawn on exclusively for routine health-care costs, and a high-deductible insurance plan to help pay for catastrophic expenses.
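
To see how the two pieces fit together, here is a minimal sketch: the savings account absorbs routine bills, and the insurance kicks in once spending passes the deductible. The contribution and deductible figures are assumptions for illustration, not features of any actual plan.

```python
# Minimal sketch of an HSA paired with a high-deductible plan: tax-free savings
# cover routine bills, and the insurer pays only above the deductible.
# The dollar figures below are assumed for illustration only.

class SketchHsaPlan:
    def __init__(self, hsa_contribution: float, deductible: float):
        self.hsa_balance = hsa_contribution
        self.deductible_remaining = deductible

    def pay_bill(self, bill: float) -> float:
        """Return what the insurer pays; the rest is the patient's share."""
        patient_share = min(bill, self.deductible_remaining)
        self.deductible_remaining -= patient_share
        self.hsa_balance = max(self.hsa_balance - patient_share, 0.0)
        return bill - patient_share  # insurer covers costs above the deductible

plan = SketchHsaPlan(hsa_contribution=2_500, deductible=3_000)
for bill in (150, 400, 8_000):  # routine visits, then a costly episode
    print(f"bill ${bill:>5,}: insurer pays ${plan.pay_bill(bill):,.0f}")
print(f"HSA balance left: ${plan.hsa_balance:,.0f}")
```

In the sketch, routine spending draws down the account, while the large bill exhausts the deductible and shifts the remainder to the insurer.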

Since individuals can take their HSA’s with them when they change jobs (provided the new employer allows it), this option can indeed help promote insurance portability. But, generally speaking, that is neither its foremost aim nor its effect. Instead, it is seen by its proponents as helping to level the playing field by giving to individuals the same tax breaks that employers get in purchasing coverage, and as helping to train people to think like consumers, since in spending their own money they will have an incentive to spend as little of it as possible. In short, proponents of the HSA want to use market mechanisms to achieve lower costs and improved quality.

This is certainly a worthy goal—but does it meet the concerns of most Americans? David Gratzer, an advocate of the HSA, tells the story of a woman who used such an account in exactly the desired way. Needing foot surgery, and impelled to spend her own money wisely, she

took charge of the situation and thought about what she really needed. When a simple day-surgery was suggested, she looked around and decided on a local surgery center. She asked about clinic fees and offered to pay upfront—thereby getting a 50-percent discount. When she found out that an anesthetist would come in specifically to do the foot block, she asked her surgeon just to do it. She also negotiated the surgeon’s compensation down from $1,260 to $630. Finally, she got a prescription from her doctor for both antibiotics and painkillers, but only filled the former. “In the past, my attitude would have been, ‘just have all the prescriptions filled because insurance was paying for it, whether or not I need them.’”

Although Gratzer offers this as an ideal example, it will surely strike many people as a nightmare. Haggling with doctors, ignoring prescriptions, bypassing a specialist to save money—is this the solution to middle-class health-care worries? Who among us feels confident taking so much responsibility for judgments over his own health, let alone over the care of his children or his elderly parents?

If the HSA is to have wide appeal, it must be sold first and foremost as a means not of efficiency but of portability—and as part of a broader effort to expand the portability of health insurance generally. Nor should such an effort be aimed, at least at first, at undoing our employer-based system. Perhaps, given a blank slate, no sensible person would ever have designed the current system. But we do not have a blank slate. We have a system providing care that the vast majority of insured Americans are quite happy with—and that has also helped America resist the pressure for government-run health care of the kind for which every other developed nation is now paying a heavy price.

We have, in other words, a system that works but is in need of repairs, most notably in the realm of improved portability. Making this happen will require better cooperation between state and federal policy-makers. An exclusively national solution would require federalizing the regulation of health insurance, which is both undesirable and politically unachievable. Instead, states should be encouraged to develop insurance marketplaces like the one now taking shape in Massachusetts. Mediating between providers and purchasers, these would allow employers, voluntary groups, and individuals to select from a common set of private options. Whether working full-time, part-time, or not at all, individuals and families could choose from the same menu of plans and thus maintain constant coverage even as their job situations or life circumstances change. For those who cannot afford insurance and do not receive it from an employer, Medicaid dollars could be used to subsidize the purchase of a private plan.

The federal government, meanwhile, could ensure that Medicaid dollars allotted to states can be used to support such a structure of subsidies. It could also pursue other, smaller measures, like extending or eliminating the time limit on the COBRA program, which allows individuals leaving a job to keep their employment-based plan by paying the full premium. As states begin implementing marketplace reforms, the federal government could also find ways to encourage regional and eventually national marketplaces, which would enable the purchase of insurance across state lines.

In any such scheme, Health Savings Accounts would surely have a place. So would other measures of cost containment like greater price transparency. But the key to any large reform must be its promise to address the real worries of insured American families by preserving what is good about the current system while facing up to its limits and confronting its looming difficulties.

Unfortunately, when it comes to paying for the health care of older Americans, there are few attractive options. Costs have risen steeply in recent years, while the economic footing of the Medicare program has been steadily eroding. Nor are demographic realities likely to change for at least a generation; to the contrary, they may only worsen. So the solution must involve some form of cost containment.

This will not be easy. As Arnold Kling points out in Crisis of Abundance (Cato Institute, 120 pp., $16.95), costs are rising not because of increasing prices for existing medical services but because of a profound transformation in the way medicine is practiced in America. Between 1975 and 2002, the U.S. population increased by 35 percent, but the number of physicians in the country grew by over 100 percent. The bulk of these were specialists, whose services cost a great deal more than those of general practitioners. New technologies of diagnosis (like MRI exams) have also become routine, and not just for the old, and the number and variety of treatments, including surgeries, have likewise increased. We spend more because more can be done for us.

All of this spells heavier demands on the Medicare budget, to the point where the program’s fiscal prospects have become very bleak. Already accounting for roughly 15 percent of federal spending, Medicare will be at 25 percent by 2030 and growing. In David Gratzer’s words, “Medicare threatens to be the program that ate the budget.”

Worse yet, one of the most expensive and complicated burdens of an aging society is not even covered by Medicare. This is long-term care, involving daily medical and personal assistance to people incapable of looking after themselves. The Congressional Budget Office estimates that Americans spent roughly $137 billion on long-term care in 2000, and that by 2020 the figure will reach $207 billion. Longer lives, and the high incidence of dementia among the oldest of the old, are bound to impose an extraordinary new financial strain on middle-income families, whose consequent demand for government help will only worsen our already looming fiscal crisis.

Medicaid, which covers health care for the poor, does pay for some long-term care in most states. To qualify for this, and to avoid burdening their children, a growing number of the elderly have opted to spend down their assets when the need arises. But this ends up burdening their children anyway, if less directly. States already spend more on Medicaid than on primary and secondary education combined; if Medicaid comes to shoulder the bulk of long-term costs in the coming decades, it will bankrupt state coffers and place enormous strains on the federal budget.

Of course, the challenges of an aging society reach well beyond economics. As more and more Americans face an extended decline in their final years, elderly patients and their families will confront painful choices about how much care is worthwhile, who should assume the burdens of care-giving, and when to forgo additional life-sustaining treatment. Compared to this profound human challenge, fiscal dilemmas can seem relatively paltry. But they too necessitate hard and unavoidable choices.

One way or another, the Medicare program will have to be adjusted to a society with radically different demographics from the one it was designed to serve. If “seventy is the new fifty,” as a popular bumper sticker tells us, then the age of Medicare eligibility must begin to move up as well. That will inevitably impose a hardship on those who are already not vigorous in their sixties, as well as on those whose jobs are too physically demanding for even a healthy sixty-five-year-old. So hand in hand with raising the age of eligibility will need to go programs encouraging (or requiring) health-care savings earlier in life. At the same time, Medicare benefits will gradually have to become means-tested, so that help goes where it is most needed and benefits are most generous to those with the lowest incomes and fewest assets.

More fundamentally, the structure of the Medicare program will have to change. Its benefits now increase in an open-ended way that both reflects and drives the upward movement of health costs; if Medicare is to remain sustainable, constraints will gradually have to be put in place, so that benefits grow by a set percentage each year. The program will also need its own distinct and reasonably reliable funding source, which will require an adjustment in the design of the payroll tax.

Any such reforms will be politically explosive, to put it mildly. No politician in his right mind would run on a platform of limiting Medicare eligibility and capping its benefits. And yet, a decade from now, caring for aging parents will have become a burning issue for a great swath of America’s families as parents find themselves squeezed between the needs of their own parents and the needs of their children. Every politician will be expected to offer a solution, and will be subject to dangerous temptations: promising limitless care at the very moment when fiscal responsibility requires setting limits, or promising to “solve” our fiscal problems by abandoning the elderly. The least that responsible policy-makers can do now is to familiarize Americans with the realities of our aging society, so that when the time comes for difficult choices, we will not be blind-sided.

Understanding America’s three distinct health-care challenges, and the deficiencies of conventional responses to them, is the first step toward reform. Any approach we take will assuredly cost the taxpayers money. Already, nearly a third of the federal budget is spent on health care, and that portion is certain to grow. The choice, however, is between paying the necessary price to ameliorate our genuine problems or paying far more to satisfy ideological whims or avoid politically painful decisions.

Neither socialized medicine nor a pure market approach is suited to America’s three health-care challenges, while the bipartisan conspiracy to ignore the looming crisis of Medicare in particular will return to haunt our children. Coming to grips with the true nature of our challenges suggests, instead, a set of pragmatic answers designed to address the real problems of the uninsured, of middle-class families, and of the elderly while protecting America’s private health-insurance system and looking out for the long-term fiscal health of the nation.

Even as we pursue practical options for reform, however, it behooves us to remember that health itself will always remain out of our ultimate control. Medicine works at the boundaries of life, and its limits remind us of our own. While our health-care system can be improved, our unease about health can never truly be quieted. And while reform will require hard decisions, solutions that would balance the books by treating the disabled and debilitated as unworthy of care are no solutions at all. In no small measure, America’s future vitality and character will depend upon our ability to rise to this challenge with the right mix of creativity and sobriety.

Hillary’s “In”

So, after sixteen years on the national stage, Hillary Rodham Clinton has finally declared that she is a candidate for President of the United States. After years of denying this transparent ambition, she must find it a relief to get it out there in the open.

No such relief was evident in the announcement video on her website, where she looked uncomfortable and sounded as stilted as ever. “I’m in, and I’m in to win” is a line that takes some panache to deliver—even a grin. But Hillary’s repertoire of dramatic tones is limited, ranging from prim high-mindedness (verging on the self-righteous) to faux regular-gal camaraderie. So when she talks about opening a “national conversation” and says, “let’s chat,” there is nothing authentic or inviting about it. It is, of course, an attempt to sound casual and open. But Hillary isn’t casual or open, so the effort falls flat.

Despite her iron discipline about refusing to indulge in self-revelation, her conviction that she is meant to wield power, and to edify the rest of us, nonetheless shines through. Though a feminist by belief and training, she owes most of her political success to her husband’s skills and successes. That particular opportunism galls in 2007, when plenty of women hold high office by their own efforts.

Hillary has behaved ruthlessly toward anyone who stood in her way, and in her campaigns she has trimmed endlessly on policy matters to hide her leftist views. Still, one might ask what those views mean given her willingness to trade them for power. In the Senate, her votes have been calibrated to give her cover as a responsible centrist. Now, especially on Iraq, she can say that she has had enough—and thus get the votes she needs from her party’s anti-war base.

Like many students of Hillary, I veer between thinking that this is as far as she can go politically and fearing that she will be our next President almost by default. The weak field of candidates for 2008, Democrat and Republican alike, offers little reassurance that she won’t be.

It should be an interesting race.

The Jewish Al Sharpton?

After a long absence from respectable circles, Jew-baiting is back.

When Patrick J. Buchanan denounced the 1991 U.S. military action to liberate Kuwait from Saddam Hussein, saying it had been cooked up by “Israel and its amen corner,” he largely sealed the doom of his political career. His remark, blaming the Jews for steering U.S. policy to actions that he alleged were in their own interest but not in America’s, made use of the classic anti-Semitic formula. Anti-Semitism, however, had been taboo in America for a generation or more, partly as a response to the Holocaust and partly due to the wider revulsion against bigotry occasioned by the civil-rights revolution. Commentators unloaded on Buchanan from many directions, led by the New York Times columnist A.M. Rosenthal.

Fifteen years later, however, anti-Semitism is becoming, more and more, an accepted part of national discourse. First, Harvard University published the fulminations of scholars John Mearsheimer and Stephen Walt (dissected in the pages of COMMENTARY by Gabriel Schoenfeld) accusing the “amen corner,” or in their term “the Israel Lobby,” of distorting U.S. policy to serve Israel rather than America. Then came former President Jimmy Carter’s book, blaming the Arab-Israel conflict entirely on the Jews, and claiming that this information had been kept from the American people by the pervasive and intimidating influence of certain “religious groups,” i.e., the Jews. (See my piece about Carter in the February issue of COMMENTARY.) Next came the Democratic presidential aspirant Wesley Clark, who commented recently that pressure for U.S. action against Iran’s nuclear weapons program was coming primarily from “New York money people.” Can you guess which religious/ethnic group he might be referring to?

Enter the New York Times, a paper famously Jewish-owned and long edited by A.M. Rosenthal, and therefore the target of many anti-Semitic conspiracy theories of the kind once propounded by cranks (and now routinely put forth by the likes of Carter, Walt, and Mearsheimer).

The Times’s Sunday magazine of January 14 carried James Traub’s astounding hatchet job on Abe Foxman. Foxman is head of the Anti-Defamation League, which, in Traub’s view, should long ago “have moved away from its original mission [of combating anti-Semitism] in favor of either promoting tolerance and diversity or leading the nonsectarian fight against extremism.” Instead, Foxman, a “hectoring” man of “spleen” who is “domineering” and “brazen,” “an anachronism” who resembles “a Cadillac-driving ward-heeler” and “stages public rituals of accusation,” insists perversely on “dwell[ing] imaginatively in the Holocaust.”

“It is tempting,” writes Traub, “to compare Abe Foxman with Al Sharpton, another portly, bellicose, melodramatizing defender of ethnic ramparts.” Leave aside that Sharpton is a notorious fraud who gave America the Tawana Brawley farce. More to the point is that for all the publicity that he succeeds in garnering, Sharpton represents no one but himself. Foxman, in contrast, is the chief of one of the leading, if not the leading, organizations through which American Jews defend their civil rights. Traub’s complaint that Foxman is obsessive about anti-Semitism is akin to assailing the head of, say, the NAACP for being overly sensitive to racism. But that’s an exposé you won’t read in the Times any time soon.

Apparently for the likes of Walt and Mearsheimer to bait the Jews is all right: Traub gives them extremely respectful treatment. But for Jews to defend themselves is, it seems, disgusting.

Bookshelf

• Half a lifetime spent hanging out in smoke-filled nightclubs and harshly lit recording studios has persuaded me that the act of playing jazz is inherently photogenic. This being the case, I happily call your attention to Lee Tanner’s The Jazz Image: Masters of Jazz Photography (Abrams, 175 pp., $40), whose subtitle is right on the money. It contains 150-odd black-and-white pictures taken by most of the best photographers who have interested themselves in jazz, among them Bill Claxton, Bill Gottlieb, Milt Hinton (who was also one of the great jazz bassists), Herman Leonard, and Gjon Mili. Many of the images it contains will be instantly recognizable to anyone who has more than a passing acquaintance with jazz: Fats Waller eating a hot dog in Harlem, Lester Young sitting in a hotel room not long before his death, a cadaverous-looking Dave Tough warming up on a practice pad. Others are less familiar but no less striking. I wish there were more pictures from the 1930’s and fewer from the 1960’s, but everything that’s here is choice.

What struck me as I flipped through The Jazz Image was the intense characterfulness of the faces of the men and women portrayed within. Did anybody ever take a bad picture of Louis Armstrong or Duke Ellington? Some performers give the impression of being detached from the act of performance—take a look at the back-desk violinists the next time you go to a concert by a symphony orchestra—but great jazz musicians, whether on or off stage, almost always look larger than life. A few, most notably Bill Evans, actually give the impression of looking like the music they play.

• For some reason I’ve never gotten around to writing anything about Gilbert and Sullivan beyond the odd review. Don’t ask me why: I admire their operettas greatly, and after watching Mike Leigh’s Topsy-Turvy on TV last month, I had “My Object All Sublime” running through my head for the better part of a week. This inspired me to read Michael Ainger’s Gilbert and Sullivan: A Dual Biography (Oxford, 504 pp., $55), which somehow escaped my notice when it was published five years ago. It is, as advertised, a dual biography that covers the lives of both of its subjects quite thoroughly, before, during, and after the years of their professional association, and if you’re wondering how much of Topsy-Turvy is true, it’ll tell you exactly what you want to know. (Short answer: most of it.)

Like most Gilbert-and-Sullivanites, Ainger is not a professional scholar but an amateur enthusiast, and like all such folk, he revels in the accumulation of facts. As a result, Gilbert and Sullivan: A Dual Biography is a bit dry in spots, but never impossibly so, and though I wouldn’t recommend reading it solely for pleasure, I’m glad to say that I found it surprisingly pleasurable to read.

The Bible as Blank Slate

In an ongoing, multi-part series called Blogging the Bible on Slate, David Plotz offers comments on his first reading of large parts of the Hebrew Bible. At his best he is superb. He is selling innocence and a new viewpoint—two commodities you might have believed the world was fresh out of when it comes to the Bible, the mightiest text of all, the most famous and most exhaustively studied book known to man. Yet, amazingly, it is all new to Plotz, and his loss is our gain: we experience his fascination, excitement, and occasional joy alongside him as he discovers the narrative genius and moral profundity of the good book.

But to reach these peaks of fine writing Plotz’s readers must slog through the usual nonsense about the alleged contradictions and cruelties of the Hebrew Bible, written with as much vigorous outrage as if these observations had just occurred to mankind yesterday afternoon. Worse is Plotz’s passivity: repeatedly he writes (frankly and openly) that “I don’t know” or “I wonder”—but virtually never cracks a book or calls in an expert to find out. He waits for the answer to come to him, in the form of emails from readers. His commentary suggests a whole new way to do research: if you want to learn about topic X, write an essay about it and your readers will teach you.

This lack of curiosity may be deliberate. In his introduction to the series, Plotz tells us that his aim is to “find out what happens when an ignorant person actually reads the book on which his religion is based.” Undeniably this approach has its moments. When David sings his lament on the death of Saul and Jonathan, Plotz doesn’t recognize this most famous elegy in the history of the world. Yet he does recognize its greatness (all on his own, not because anyone tipped him off); and he is unfailingly honest about his ignorance. “David sings a gorgeous lament about the deaths [of Saul and son]. (Hey, language mavens! This song is the source of the phrase: ‘How the mighty are fallen.’)”

But innocence can be overdone—to the point where you question the author’s competence as a literate reader. In the middle of his discussion of Leviticus 19, which Plotz calls the “most glorious chapter of the Bible” (a lovely phrase), we read: “’Love your fellow as yourself’—Ever wonder where Jesus got ‘Love thy neighbor’? Not anymore.” The most famous sentence in the Hebrew Bible is news to Plotz. What does a man know if he doesn’t know this? Not that Plotz is alone in his ignorance—but ignorance this dramatic makes a peculiar basis for offering yourself as a commentator.

Of course any way you look at it, it takes plenty of swagger, arrogance, or what you will to write a commentary on a book you have only read in translation, consulting no commentaries in the process. Plotz notes that “In second Creation [the story beginning in Genesis 2:4], the woman is made to be man’s ‘helper.’ In Chapter 1 they are made equal.” But this word “helper,” which troubles Plotz, originates in one of the most celebrated untranslatables of the Bible. God actually says, in Genesis 2:18, that He will create Eve to be ezer k’negdo; the King James Bible translates, “I will make him [Adam] an help meet for him” (whence the word “helpmeet”). Actually the preposition neged (as in k’negdo) means “in sight of” or “standing opposite to” or “over and against.” The sentence ought to be translated, “I will make him a helper standing eye-to-eye with him,” or “a helper as his counterpart”—as most modern commentaries point out. Eve is Adam’s assistant, but she measures up to Adam; she is Adam’s counterpart; in no sense is she a lesser human being. Hence one of the most astounding sentences in the Bible, which Plotz passes over without a word, in Genesis 2:24: “Therefore shall a man leave his father and mother, and shall cleave unto his wife, and they shall be one flesh.” A man will leave his parents, a man will cleave to his wife? Ancient listeners would have stopped dead in their tracks. But Plotz keeps right on going.

It is sadly typical of modern intellectual life that Plotz is willing to be honestly, innocently surprised by nearly anything in the Bible except its frequent departures from anti-feminist type-casting. But his most serious error is to misrepresent the very process of Jewish Bible reading. He calls himself a “proud Jew” (more power to him); he acknowledges the immense quantity of rabbinic Bible commentary (in the Talmud and midrash) of which he is ignorant. But he fails to grasp that normative Jewish authorities do not read the Bible alongside the Talmud but through the Talmud. Thus he includes, for example, the usual tiresome stuff about all the death sentences imposed by Biblical law. But as Judaism reads these verses, there are no death sentences in the Bible: the Talmud (for better or worse) erects such elaborate procedural protections for the accused in capital cases that it virtually rules executions out. Which has been pointed out innumerable times before.

It might be fairest to say in the end that Plotz’s sins are the sins of his era and medium, but his virtues are his own. He is sometimes rambling and shallow—but Internet prose encourages shallow rambles. He is ignorant of religion and the Bible, but so are most educated people nowadays. On political topics he speaks with the freshness and spontaneity of a wind-up doll—after the defeat of the Israelites at Ai, Plotz writes, “A devastated Joshua tears his clothes in mourning, and tries to figure out what went wrong (Don’t you wish our leaders took war as seriously?)” But that’s life in America’s intellectual elite. On the other hand he writes with honesty and integrity and—on the whole—a sharp eye for brilliant prose and deep moral philosophy. Blogging the Bible is illuminating in more ways than one. Enjoy it, but read at your own risk.

Barack’s Big Adventure

Barack Obama has formed a presidential exploratory committee, and is expected to announce his candidacy formally on February 10.

There’s a surprise.

Who doesn’t have an exploratory committee? Even Christopher Dodd has one. This is a very rich country, and it seems to behoove many people to give money to politicians for any semi-plausible reason. For the politicians themselves there is virtually no downside: running, becoming a national figure, losing and learning from your mistakes is excellent practice for—next time. Besides, it is so much more fun than being a serious Senator, engaged in the dull business of making policy choices and then making them again when the first set fails in unanticipated ways.

Some might find it offensively arrogant for a neophyte, with two years in the Senate, no experience running anything, and a thin resume, to seek the nation’s highest office. But it’s hard to argue with the reception Obama has gotten. At a moment when national politics increasingly resembles a reality-TV show, his breezy, confident manner, good looks, and natural speaking talent all add up to a version of plausibility.

“Running for the Presidency is a profound decision, a decision no one should make on the basis of media hype or personal ambition alone,” he announced with a straight face on Wednesday. I must have missed the part of the announcement where he revealed the substantive rationale for his candidacy.

Obama is the perfect fresh face, the new “it girl,” on whom the left end of a very disenchanted electorate can project their hopes and dreams for . . . something different. He’s black, but not militant, not Al Sharpton. White mom, absent African dad: almost like Tiger Woods.

But then there’s the Clinton factor. The media are playing Obama’s candidacy as a big “diss” to Hillary on the part of Democratic primary voters who may regard her nomination as inevitable but are not particularly enthusiastic about the prospect. And she seems to be obliging them, by looking worried. But at the end of the day? I’d bet on Clintonian discipline and ruthlessness.

In fact, Obama is a pretty good foil for Hillary. He makes her look experienced, reasonable, mature, serious. And did I mention mature?

Louis Kahn at Yale

The Yale Art Gallery, which reopened last month after a three-year renovation, eminently warrants a visit, but not only for its collection. That collection, to be sure, is splendid, highlighted by Vincent van Gogh’s Night Café (1888), his deeply disconcerting interior “with an atmosphere like the devil’s furnace.” But the building itself is a major work of Louis I. Kahn (1901-1974) and a visit reminds us why he was as important an architect in the second half of the 20th century as Frank Lloyd Wright was in the first.

Kahn was a late bloomer who came right down to the wire, creating no works of distinction or originality until he was fifty. This was the dilemma of his entire generation, which was steeped in the academic classicism of the Ecole des Beaux Arts. Their aesthetic was rendered obsolete almost overnight after 1929, first with the Depression and then with the arrival of European modernists fleeing Nazi Germany. Kahn embraced the flowing space and the abstract volumetric play of modernism, but he never quite jettisoned his classical roots.

Now disencumbered of its later accretions, the Yale Art Gallery shows Kahn just as he was struggling to reconcile modernism with the lessons of architectural history. All of the devices of high modernism appear in it: the flat roof, the flowing interior space, the laconic expression of wall planes, even the innovative space frame that demonstratively carries the ceiling (or rather rhetorically carries it, since Kahn’s proposed system was too progressive for local building laws). And yet the building also has about it a sense of profound weight and solemnity that recalls the great monuments of the ancient world.

In a certain sense, the building is a failure, for Kahn could not integrate his ideas. The austere masonry cylinders in which the stairs are threaded speak a different architectural language than the all-glass wall facing the building’s courtyard. A decade would pass before his personal architectural language would emerge in such masterpieces as the Salk Institute in La Jolla, California.

Our academic institutions do not always do right by their historic architecture, yet Yale has done so here. Even better, across the street from the gallery is Kahn’s Yale Center for British Art, his last building. I know of no other place in America where you can take the whole measure of an architect’s career so poignantly.




