Equilibrium, Mental and Mathematical
Among mathematicians and theoretical physicists there can sometimes be a continuum of weird behavior that ranges from the profoundly eccentric to the truly mentally disturbed. To set the parameters, here are two cases: Wolfgang Pauli and Isaac Newton.
Pauli, who was born in Vienna in 1900 and died in Zurich in 1958, was one of the most distinguished theorists of quantum physics. He was also incredibly eccentric—identical, as J. Robert Oppenheimer once remarked, to his caricature. He had a massive head that bobbed rhythmically either up and down or back and forth, and he sometimes muttered to himself. Anyone who did not know who he was would have difficulty making sense of the scene I am about to describe.
At one point in 1957, rumors reached us at the Institute for Advanced Study in Princeton, where I was then a fellow, that Pauli and Werner Heisenberg had found a “Theory of Everything.” For Pauli to be smitten by such a theory meant that one had better look into it, since he was an acerbic and brilliant critic of scientific ideas. In January 1958, Pauli gave a lecture on his revelation at Columbia, and many of us came in to hear him. After the talk was over, Niels Bohr, then also at the Institute for Advanced Study, was asked if he would like to comment, but before Bohr could say anything Pauli volunteered that the theory might seem somewhat crazy. In fact, Bohr retorted, the trouble was that (unlike, presumably, quantum physics) it did not seem crazy enough.
By now, Bohr had gotten out of his seat in the front row and had begun to circulate around the long table before the blackboard on which Pauli had been writing. Pauli followed him. Each time one of them reached the front of the table, he would repeat his lines, Pauli protesting that the theory was crazy, Bohr responding that it was not crazy enough. It occurred to me even then that someone wandering in might be prompted to have these two titans of 20th-century physics committed to the nearest asylum. Of course, Pauli, to say nothing of Bohr, was not mad at all.
And Newton? I would imagine that, like Pauli, the young Newton, the Newton of the “miracle year” 1666, when he made most of his great discoveries, must have seemed profoundly eccentric. His amanuensis, one Humphrey Newton (no relation), noted that he had heard Newton laugh only once (when asked of what use the study of Euclid might be), and his doctor, Richard Mead, would later disclose to Voltaire that Newton had died a virgin. But around 1693, when he was in his early fifties, Newton had a true breakdown, complete with paranoid delusions that his friends were plotting against him. In a letter to the diarist Samuel Pepys later that year, he wrote that he had “neither ate nor slept well this twelve month, nor have my former consistency of mind.”
As this passage suggests, Newton had already recovered sufficiently to acknowledge that he had been ill. But only a few days afterward he would send to the philosopher John Locke one of the most remarkable documents in the entire Newtonian canon. Here is the pertinent part:
Being of the opinion that you endeavoured to embroil me with woemen & by other means I was so much affected with it as that when one told me you were sickly & would not live I answered twere better if you were dead. I desire you to forgive me this uncharitableness. . . . I beg your pardon also for saying or thinking that there was a designe to sell me an office, or to embroile me.
Between the cases of Pauli and Newton there is, as I say, a continuum, with many points along the way. The 19th-century mathematician Georg Cantor, to whom we owe much of our understanding of mathematical infinities, suffered, from the age of forty on, depressions that were so deep that he often spent time in sanatoria. More recently there is the example of Kurt Gödel, one of whose great contributions to mathematics, ironically, concerned a problem on which Cantor had worked unsuccessfully; Gödel’s mental state, for much of his adult life, was unstable, and as he grew older he became more paranoid, finally refusing medical help because he thought the doctors were plotting against him.1 And now we have the case of the mathematician and Nobel Prize-winner John Nash, still very much alive and the subject of a new biography, A Beautiful Mind, by Sylvia Nasar.2
Nash was born on June 13, 1928, in Bluefield, West Virginia. His father, John Sr., had studied electrical engineering and worked his entire professional life for the Appalachian Power Company, so John Jr. grew up in an environment congenial to science. Curiously for someone later so gifted in math, Nash showed no apparent precociousness. But at age thirteen or fourteen he read Men of Mathematics, a wonderful book for young people by Eric Temple Bell, and it generated in him the ambition to be the first person to prove Fermat’s Last Theorem. In his senior year in high school, Nash entered the Westinghouse science competition and won a college scholarship. (From Nasar’s account it seems that his project was done jointly with his father; I was not aware that this was allowed.)
Nash used his scholarship to attend the Carnegie Institute of Technology in Pittsburgh, where he studied to become a chemical engineer and so impressed his professors that he was given his choice among the best graduate schools in the country. By this time, he had decided to become a mathematician. He selected Princeton mainly because it offered a somewhat larger stipend than Harvard, though the fact is that Princeton had a superb mathematics department of its own and was next door to the Institute for Advanced Study, whose faculty at the time included both Gödel and John von Neumann.
Nash arrived at Princeton in 1948, four years after von Neumann and the economist Oskar Morgenstern had published their seminal book, Theory of Games and Economic Behavior. Game theory was probably a more active subject at Princeton than anywhere else at the time, and Nash became interested in it. Indeed, in 1950 he wrote his Ph.D. thesis—all 27 pages of it—in this field. What he said there is a subject to which I will return.
Although he had hoped to remain at Princeton, Nash was not offered a job; instead, he accepted an instructorship at MIT, then in the process of building up its mathematics department. By the time he moved to Cambridge, he had already spent a summer as a consultant at the RAND Corporation, the defense think tank in Santa Monica, California. Two years later, while again consulting at RAND, Nash probably entered into his first homosexual relationship, but then, on his return to MIT in the fall, he began an affair with a somewhat older woman named Eleanor Stier.
Nash had no intention of marrying Eleanor, and he showed little interest in the son, John David Stier, who was born to them in 1953. In the summer of 1954 he was arrested for indecent exposure in a men’s bathroom in Santa Monica. This cost him his security clearance at RAND, and he never again worked for the government, though he kept his job at MIT and there were no other adverse consequences.
His sex life aside, Nash’s behavior was becoming noticeably peculiar. But the MIT faculty already boasted Norbert Wiener, one of the great eccentrics of all time, and Nash may have seemed relatively normal by comparison. Certainly Alicia Larde, one of the few coeds at MIT in the 1950’s, thought so when she fell in love with Nash and, after an erratic courtship—which included a very emotional encounter with Nash’s former mistress Eleanor Stier—married him in early 1957. What exactly she knew about Nash’s ambivalent sexuality—he had had another homosexual liaison in the summer of 1956—is not clear.
By 1959, Nash had crossed over the boundary from deep eccentricity to serious mental illness. For the next 30 years, he was in and out of mental institutions, diagnosed with paranoid schizophrenia, and for all practical purposes had disappeared from the world. But then, starting in the late 1980’s, he began to undergo a seemingly spontaneous remission.
For years he had been living once again in Princeton. Jobless, he led a kind of ghostly existence, leaving strange if nonthreatening messages on blackboards. Very few people knew who he was. But in the fall of 1989 a young Swedish economist named Jörgen Weibull paid a visit to the Princeton campus and enlisted the chairman of the mathematics department to help him persuade Nash to have lunch. The chairman may or may not have known the reason for the visit—Nash certainly did not.
Weibull was representing the Royal Swedish Academy of Sciences, and had been charged with finding out whether Nash’s mental health was sturdy enough to withstand the rigors of being awarded a Nobel Prize in economics. It seems that his 27-page Ph.D. thesis had become, by now, an essential element of mathematical economic analysis. In October 1994—Weibull having apparently decided that Nash was no stranger than other Princeton mathematicians he had encountered—it was announced that Nash was to share the Nobel Prize in economics with Reinhard Selten and John Harsanyi.
In reading Sylvia Nasar’s extraordinarily moving narrative, I found myself haunted. I am two years younger than Nash, but we seem to have traced many of the same routes. In 1951, when Nash arrived at MIT, I was finishing my senior year as a mathematics major at Harvard, and many of the people Nasar mentions were my teachers or colleagues. In particular, Marvin Minsky, one of the founders of artificial intelligence, had just returned to Harvard from Princeton, where he had been a contemporary of Nash.
Minsky, who is mentioned frequently in Nasar’s book, was one of those who never deserted Nash even in his darkest days. He recently told me that while at Princeton he had become stuck on a theorem he needed for his thesis; Nash gave him a suggestion that seemed to have nothing to do with the theorem but turned out to be the key to proving it. He also told me an amusing anecdote. Several of the graduate students in mathematics used to frequent a restaurant in Princeton. When the service was really bad, Nash would wait until the pooled tip money was on the table and then pocket some of it. He called this maneuver a “negative tip.” “But that’s our money!” his companions would protest. “No it isn’t,” Nash would reply. “It’s her money now.” Forty years later, when the Nobel Prize was awarded in economics, Minsky decided that Nash’s was the correct position.
The mathematician George Mackey, who used to visit Nash after he was committed to McLean Hospital outside Boston in 1959, was one of my first great teachers. According to Nasar, Mackey once asked Nash how he could believe that extraterrestrials were sending him messages—to which Nash replied, “Because the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.” Still another figure in Nasar’s story is Richard Palais, a professor of mathematics at Brandeis who found Nash a temporary job there in 1965; he was one of my roommates in graduate school.
It is to Nasar’s credit that she was able to interview so many of the mathematicians who knew Nash at various times in his life. But it is a pity she did not get one of them to review her manuscript. As an economics correspondent for the New York Times, she is qualified to write about that side of Nash, but mathematics is something else again. Let me cite two examples, not very important in themselves but symptomatic of a serious defect in this book.
In describing the ambiance at MIT when Nash was first getting to know his future wife Alicia, Nasar quotes a woman named Emma Duchane who was a physics major at the time. Duchane: “We wanted intellectual thrills. When my boyfriend told me e equals I to the PI minus 1, I was thrilled. I felt the absolute joy of the idea.” Insofar as I can make any sense of this statement—the first I should presumably be i, the square root of -1, and PI presumably stands for π—it is false. Perhaps Duchane’s boyfriend was putting her on, or perhaps he told her the correct formula—“e to the (π times i) = -1”—and Nasar mistranscribed it. In either case, the formula happens to encapsulate some of the most fundamental ideas in mathematics.
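For the record, the formula Duchane’s boyfriend presumably had in mind is worth displaying properly. It is the special case θ = π of Euler’s relation between the exponential and the trigonometric functions:

```latex
e^{i\theta} = \cos\theta + i\sin\theta,
\qquad \text{so that, at } \theta = \pi, \qquad
e^{i\pi} = \cos\pi + i\sin\pi = -1 .
```

It is this compression of the exponential, the trigonometric functions, π, and the imaginary unit into one line that makes the identity so celebrated.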
Equally false is the statement a few pages later that, “in formulating his special theory of relativity,” Einstein employed the non-Euclidean geometry of the 19th-century German mathematician Bernhard Riemann. The special theory of relativity, which Einstein created in 1905, has nothing to do with Riemann’s geometry; it takes place in a flat Euclidean world. Only in the general theory of relativity and gravitation, which Einstein published in 1916, does non-Euclidean geometry become relevant.
These mathematical howlers show an unfamiliarity with Nash’s intellectual world that is significant for Nasar’s thesis in A Beautiful Mind. Mathematicians are assuredly very intelligent, especially if intelligence is defined as that which mathematicians do well—namely, abstract thought. If one is not accustomed to their mental universe, however, it is easy to get the impression that every mathematician is a “genius.” That is what seems to have happened with Nasar: everyone she writes about appears to be a genius. The result is that the whole notion of genius becomes dumbed-down.
This makes it even more difficult than it already is to classify John Nash as a mathematician. For all intents and purposes, his career in the field stopped when he was thirty, and the last thing he was working on—a conjecture in number theory by Riemann—has, to this day, not been proved. From Nasar’s account, it would appear that Nash’s breakdown was beginning to make itself manifest in his almost desperate approach to this problem. True, the work he had done in the brief period of his active research—during his whole career he published only some twenty papers—was truly first-rate; but how can you compare it with what a Gödel or a von Neumann managed to do even before they were thirty?
But then there is the other side of Nash. In preparing this piece, I did something I thought I would never do in my life: I bought a textbook in economics. It is called Game Theory with Economic Applications, by H. Scott Bierman and Luis Fernandez, and it was published in 1993, the year before Nash won his Nobel Prize. I wanted to see if what Nash had done in his Ph.D. thesis had indeed entered the mainstream of economic theory, and I also wanted a somewhat more technical and more detached account than the one Nasar offers in her book.
One thing for sure is that Nash’s work has become a textbook subject. In fact, my textbook has two long and pivotal chapters called “Nash Equilibrium I” and “Nash Equilibrium II.” I also note that in the 1965 edition of the Encyclopaedia Britannica, the excellent entry on game theory includes a paragraph on Nash equilibrium—making it clear that, while Nash himself had vanished by that year, his work was very much in the air.
Let me now attempt, with many apologies to the experts, a brief explanation of what Nash did. The analysis of strategies for playing games—at least games whose rules can be formalized mathematically—goes back a long way. But modern theory starts with the work that von Neumann first embarked upon in the late 1920’s. His analysis dealt basically with two-person, zero-sum, noncooperative games that are subject to clearly specified rules. Zero-sum means that one player’s loss is necessarily the other’s gain; noncooperative means that the players do not enter into binding agreements with each other before the game starts.
The strategy of a noncooperative game evolves as a function of what happens while the game is being played. Von Neumann proved what is known as a “minimax theorem.” Suppose there are two players, Mary and Tom—my textbook likes the names Mary and Tom—and they are playing such a game. Mary wants to choose a strategy that will maximize her gain, taking into account the fact that Tom is smart enough to see through this strategy and to counter with one of his own designed to minimize Mary’s gain. So Mary will look for the strategy whose worst-case payoff is as large as possible: the strategy that maximizes her minimum gain. Conversely, Tom will look for the strategy that minimizes the maximum gain Mary can secure.
What von Neumann showed was that for this class of games, such a strategy must exist. (Finding it in any given game is another matter entirely.) It was this theorem that became the basis of von Neumann and Morgenstern’s 1944 book. Not much more progress was made in game theory until 1950, when Nash entered the picture.
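For readers who like to see such reasoning made concrete, here is a small sketch in Python. The payoff matrix is my own invention, chosen only for illustration (it comes from neither von Neumann nor my textbook); rows are Mary’s strategies, columns are Tom’s, and each entry is Mary’s gain, hence Tom’s loss:

```python
# A two-person, zero-sum game: entry A[i][j] is Mary's gain (and Tom's
# loss) when Mary plays strategy i and Tom plays strategy j.
# These numbers are invented for the example.
A = [
    [3, 1, 4],
    [2, 2, 2],   # a "safe" row for Mary: she gets 2 no matter what
    [0, 1, 5],
]

# Mary's maximin: for each of her strategies, assume Tom responds so as
# to hurt her most (the row minimum); she picks the row whose worst
# case is best.
maximin = max(min(row) for row in A)

# Tom's minimax: for each of his strategies, assume Mary responds so as
# to gain the most (the column maximum); he picks the column whose
# worst case, from his point of view, is smallest.
minimax = min(max(A[i][j] for i in range(len(A)))
              for j in range(len(A[0])))

print("Mary's maximin:", maximin)  # prints 2
print("Tom's minimax:", minimax)   # prints 2
```

Here the two values coincide, so the game has a “saddle point” in pure strategies: Mary plays her safe second row, Tom his second column. In general they need not coincide; what von Neumann proved is that once the players are allowed to randomize among their options (mixed strategies), the two values always agree.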
Part of Nash’s contribution was to allow one to relax the assumptions of von Neumann’s theorem; the game does not have to be zero-sum or involve only two players. But he went beyond that to change the paradigm altogether. Let me revert to an example from my textbook. In a given community there are two automobile dealers—the textbook calls them Honest Ava and Trusty Rusty—each selling the same kind of automobile. Between them, they have a lock on the market. But they are not allowed to cooperate. Suppose, nevertheless, that they want to set prices.
In this “game,” the sellers have three possible price levels to choose from: high, medium, and low. At what level should the prices be set so as to produce a situation that will both maximize each seller’s profits and be “stable”—that is, so that the prices, once having been set, will not have to be immediately reset in response to some defensive action on the part of the other seller? If there is such a strategy, it is called a Nash equilibrium, and if a Nash equilibrium is reached, then everyone will have done the best that can be done.
Crucial to this discussion is that each competitor act in a rational way to maximize his or her benefit. But remember that Ava and Rusty cannot collude; if they could, both would set their prices as high as possible and the story would be over. Here, in my reconstruction, is how Ava would reason: “If I set my price high and Rusty acts rationally, he will lower his price to medium. But in turn he will reason that I will act rationally, and set my price low in response to him. Hence, setting my price high will not lead to an equilibrium.” If one follows Ava—and Rusty, too—through the rest of the reasoning procedure, one sees that the only equilibrium solution is for both of them to set their prices low from the beginning: a wonderful arrangement, incidentally, for the consumer.
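Ava’s chain of reasoning can be checked mechanically. In the sketch below the payoff numbers are my own invention (Bierman and Fernandez’s actual figures are not reproduced here), but they are chosen so that the logic of the passage plays out: whoever undercuts the other captures the whole market, and the only stable outcome is for both to price low.

```python
# Each cell holds (Ava's profit, Rusty's profit).  An undercutting
# seller captures the whole market; the undercut seller sells nothing.
# All numbers are invented for illustration.
PRICES = ["high", "medium", "low"]
PAYOFF = {
    ("high", "high"):     (10, 10),
    ("high", "medium"):   (0, 16),
    ("high", "low"):      (0, 10),
    ("medium", "high"):   (16, 0),
    ("medium", "medium"): (8, 8),
    ("medium", "low"):    (0, 10),
    ("low", "high"):      (10, 0),
    ("low", "medium"):    (10, 0),
    ("low", "low"):       (5, 5),
}

def is_nash_equilibrium(ava, rusty):
    """True if neither seller can raise her own profit by unilaterally
    switching to a different price level."""
    a_payoff, r_payoff = PAYOFF[(ava, rusty)]
    best_for_ava = all(PAYOFF[(p, rusty)][0] <= a_payoff for p in PRICES)
    best_for_rusty = all(PAYOFF[(ava, p)][1] <= r_payoff for p in PRICES)
    return best_for_ava and best_for_rusty

equilibria = [(a, r) for a in PRICES for r in PRICES
              if is_nash_equilibrium(a, r)]
print(equilibria)  # prints [('low', 'low')]
```

Note that both sellers would be better off at (high, high), with a profit of 10 each; but that outcome is not stable, since each is tempted to undercut, which is exactly why collusion is forbidden in the story.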
This was Nash’s basic insight. What he showed was that in a very wide range of such “games,” there must be at least one strategy combination leading to equilibrium (provided the players are allowed to randomize among their options), and if there are several, one must somehow decide among them. His theorem is nonconstructive in the same sense as von Neumann’s—that is, it tells us that an equilibrium strategy exists, but not how to find one. Perhaps that is why, according to Sylvia Nasar, von Neumann was dismissive when Nash went to him with his result.
But it is at least as likely that, six years after publication of his work with Morgenstern, von Neumann had lost interest in the subject. Indeed, by 1950 von Neumann had become deeply involved in creating the first modern electronic computers. Today, we have both the computer and Nash’s equilibrium theory, and both are used routinely by economists.
To me, one of the most fascinating parts of Nasar’s book concerns the machinations that went into securing Nash’s Nobel award. I have some first-hand experience at this kind of investigation, having once tried to trace the story of the 1944 solo prize to Otto Hahn for the discovery of fission. What was curious here was that Hahn’s work had been done in collaboration with Fritz Strassmann, and correctly interpreted by Lise Meitner and Otto Frisch, yet none of these other figures was recognized. Despite the lapse of the 50-year rule of silence surrounding the Nobel committee’s proceedings, I got nowhere. But Nasar has the whole messy tale.
As it happens, the Nobel Prize in economics is financed not by the Nobel Foundation but by the Central Bank of Sweden. In other words, the money does not come out of Alfred Nobel’s will. But the prize is nevertheless administered by the Royal Swedish Academy of Sciences and the Nobel Foundation. By 1994, a good deal of opposition had developed to the idea of a “Nobel” in economics altogether, with some scientists feeling it diluted the significance of their own prizes. This became part of the politics behind the award. The other, countervailing, part was that 1994 marked the 50th anniversary of von Neumann and Morgenstern’s book, and was hence an auspicious date for bestowing a prize in game theory.
Then there was the matter of Nash himself. Not only had he done nothing in the field for 40 years, but, as one of the committee members argued, he had been a consultant at the RAND Corporation, and they worked on bombs there. (By this criterion, a great many physicists, from Richard Feynman on down, should have been denied the prize.) Nash’s history of mental instability was another source of concern, and various alternative candidates were suggested even after Jörgen Weibull had vouched for him. In the end, Nash was chosen, but by a narrow vote. What the prize meant to him can only be imagined.
But this brings me to a final topic: Nash’s recovery. To Dr. Oliver Sacks, the well-known specialist in mental disorder with whom I was able to have a brief conversation about Nash’s case, what was crucial was that after his breakdown, Nash enjoyed a supportive environment. (It is also fortunate, I think, that during his hospital stays he was not subjected to electric-shock treatments, then still at a relatively unadvanced stage of development.)
Nash and his wife Alicia were divorced in 1962, three years after his commitment to McLean Hospital. He spent much of the following two years in the Carrier Clinic in New Jersey. After another six years of desultory wandering (including his failed stint teaching at Brandeis), he returned to Princeton in 1970. By then he had nowhere else to go. Motivated by pity, love, and perhaps hope, his ex-wife let him move back in with her, and to this day he still lives with her although they have never remarried. John Charles Nash, the son to whom Alicia had given birth in 1959, was also part of the household that Nash returned to in 1970.
This was when Nash first began haunting the Princeton campus. Here again he was fortunate. No one harassed him, and he was even given free access to the Princeton mainframe computer. At first he seemed to be using it to continue the rather crazy numerology he had been engaged in—“making names out of numbers and being worried by what he found,” as Hale Trotter, who saw Nash nearly every day at the computer center, has put it to Nasar. But then “gradually,” according to Trotter, “that went away,” and what followed “was more mathematical numerology. Playing with formulas and factoring. It wasn’t coherent math research, but it had lost its bizarre quality. Later it was real research.” Nash was getting well.
No one can say whether his present remission is permanent; one can only hope so. But what is almost unbearably sad is that his son John Charles, brilliant in his own right, has inherited the same sort of mental illness that beset his father and has already had it for a larger fraction of his life. Most of Nash’s energies are now taken up trying to help his crippled son.
1 See my article, “Gödel’s Universe,” in the September 1997 COMMENTARY.
2 Simon & Schuster, 459 pp., $25.00.