To the Editor:
In his extended review of my book, The Singularity Is Near, Kevin Shapiro repeats objections to my thesis that I articulated and responded to in the book itself [“This Is Your Brain on Nanobots,” December 2005]. He also betrays a lack of knowledge about contemporary artificial intelligence (AI), genetics, and nanotechnology.
Mr. Shapiro writes that successful AI today consists of problem-solving through “trial and error.” This is simply mistaken. Most useful problems have such an enormous number of combinations of possible solutions that trial-and-error approaches are useless. Software programs that guide intelligent weapons, detect credit-card fraud, automatically place billions of dollars in investments, design jet-engine systems, or perform many other functions all work by emulating the key source of the human brain’s intelligence, namely, its ability to recognize patterns. In the same manner, a group at Google created automated English-to-Arabic and Arabic-to-English translators using self-organizing methods from extensive “Rosetta stone” texts. Their systems perform about as well as professional human translators. Interestingly, no one on the team spoke a word of Arabic.
Mr. Shapiro alludes to my discussion of acceleration in technological development, but he consistently displays linear thinking in his own prognostications. Citing a couple of my successful predictions, he says that “to be against Ray Kurzweil in 1990, one would have had to be an ostrich.” I can say that there were lots of ostriches back then, and that Mr. Shapiro is being one today. He writes that my “Law of Accelerating Returns,” by which information technology advances exponentially, “does not seem to apply to our basic understanding . . . of cognition” and that “advances in brain scanning have not yet translated into vastly increased knowledge about the human brain.” These statements are simply inconsistent with the doubling we now see each year in both the quantity and the precision of the information we are able to obtain about the brain’s structure and function at all levels (from cellular models to connection patterns). We also have seen steady, exponential gains in the resolution of brain scans.
Mr. Shapiro dismisses the successful computer simulation of brain regions as “little more than a theorist’s intuition of how the brain works, filtered through several layers of complicated programming.” But detailed psychoacoustic tests applied to the simulation of over a dozen regions of the auditory cortex obtained results very similar to those of the same tests applied to actual brain regions.
Mr. Shapiro frequently refers to things we do not yet understand (like the cause of Alzheimer’s) to make his case that the brain is so complex that we cannot hope to reverse-engineer it within the next several decades. To this objection I have two important responses that he fails to acknowledge. First, the brain, while certainly not a simple system, is not as complex as it appears. Take the cerebellum, where we do our skill formation. It comprises about half of the neurons in the brain, and appears to be vastly complex, with many billions of intricately wired networks. But the design of all this apparent complexity is relatively simple and controlled by only a few genes. We have a good understanding of the wiring pattern (although not yet of all the patterns of input and output), and researchers have created an effective simulation of the cerebellum.
Second, I describe the brain’s organization as a probabilistic fractal, meaning that its intricacies are expansions of simpler design rules contained in the genome. The genome contains only 30 to 100 million bytes of compressed information, yet the apparent complexity of the brain is a billion times greater. The genome design, while not simple, is at a level of complexity that we are capable of handling, especially when aided by increasingly powerful computers and sophisticated software models.
Mr. Shapiro derides computer scientists’ reliance on something as “intangible” as information, quoting H.L. Mencken to characterize this as dealing “with objects . . . far beyond the reach of the senses.” But that is exactly what the senses do: they turn the world into a flow of information. Our brains comprise information processes that recognize patterns and respond to them.
Fields of study like biology and brain science have pre-information and post-information eras. Mr. Shapiro cites Vioxx as a failure of biotechnology that (he believes) shows our imperfect understanding of biological processes, but Vioxx came from biology’s pre-information era, in which advances were hit-or-miss. Now that we are beginning to understand the information processes underlying biology, biotechnology is becoming subject to the Law of Accelerating Returns. To cite just one of many examples, it took us fifteen years to discover the genetic sequence of HIV, yet we sequenced SARS in only 31 days.
Brain science was also a hit-or-miss affair until recently. But now that we can actually view and model brain structures and interactions precisely, it is also becoming subject to the annual doubling of the power of information technologies. IBM is now building a detailed model and simulation of a substantial slice of the cerebral cortex, something that would have been unheard of only a few years ago. With high-resolution scanners only now emerging, the project to reverse-engineer the human brain is at an early point in what will be an exponential ascent. From this perspective, my expectation of gaining the principles of operation of human intelligence within a quarter-century is conservative.
To the Editor:
Kevin Shapiro’s “This Is Your Brain on Nanobots” is a valuable and important piece (one I intend to have my computer-science students read). I would like to add several comments.
Ray Kurzweil’s “Law of Accelerating Returns” is already running out of gas. The computer industry is moving to “clusters” (several processors in the same box) and other forms of “parallel computing” (many computers focused on the same problem, so they can solve it faster). This move is happening mainly because steady, year-by-year growth in the power of integrated circuits is already slowing, and no other technology is ready for action.
More important: Kurzweil’s idea that “we will soon have hardware that equals or exceeds that of the human brain” is naïve—for the reasons Shapiro gives, and for other reasons, too.
Kurzweil predicts that in 2020 you will be able to buy a brain’s-worth of computing for $1,000. But you cannot buy it today for any price, and no one has the vaguest idea how to build it.
Here is one of the central problems: without emotion, thought is impossible (as I argued in my 1994 book, The Muse in the Machine). Emotion becomes increasingly important as the mind moves down the “cognitive spectrum” from high alertness to the less alert state in which we are increasingly incapable of abstract thought—and finally to the least alert state of all, namely, sleep. Emotion is fundamental to “sleep thought,” otherwise known as dreaming. To put it another way: “analogy” is sometimes described as the main unsolved problem in cognitive science. But there is plenty of evidence to suggest that emotion is the main component of the mind’s mysterious capacity to invent new analogies. This capacity in turn underlies creativity; it underlies “intuitive” as opposed to “analytic” thought.
The all-important role of emotion means that Kurzweil’s thesis makes no sense. Emotions are not a matter of the brain only; they involve the body, too. You experience an emotion with your whole body. Because emotions require a body, human thought requires a body—or a reasonable facsimile.
It is possible to simulate certain brain functions on a computer. But no one claims that we know (or are likely to know anytime soon) how to build a fake human body in the laboratory—with skin that reacts to emotions the same way human skin does, and likewise internal organs, and so on. Computer hardware has become more and more sophisticated. But the “fake body” problem is not a computing problem. It cannot be solved with digital or any other kind of electronics. It is a hugely complex problem in materials engineering. It might be solved some day; but there is no reason a priori to believe that it will.
This brings up a related problem with Kurzweil’s thesis. He seems to take it for granted that if we simulate a whole brain in software, simulated thought and a simulated mind will emerge. It might or might not; we don’t know. We don’t know how the brain produces consciousness. Consciousness might result from the right sort of sophisticated information-processing, in which case we might one day use computers to produce simulated consciousness. But it might also require the exact materials (the exact molecular structures) we find in the brain. If you want an object with the exact properties of a brick, your only option is baked clay. You can simulate the structure of a brick on a computer, but you won’t be able to use the result to build a house. Consciousness might be like “brickness”; we don’t know.
For these and other reasons, I strongly agree with Mr. Shapiro. Kurzweil’s predictions might conceivably come true some day, but don’t hold your breath. If it ever happens, it won’t be soon.
David Gelernter
New Haven, Connecticut
To the Editor:
Kevin Shapiro’s thinking on artificial intelligence (AI) is reinforced by Roger Penrose’s Shadows of the Mind, in which the author points out that AI is not just a long way off—it is impossible. He cites a dictum of Gödel’s to the effect that when mathematicians are doing original thinking they are using no known algorithms, only their intuitions—and one cannot teach intuition to a computer.
Harry E. Thayer
Kevin Shapiro writes:
In the marketplace of ideas, there are many ways to sell a theory. One is to present a few careful, meticulously reasoned arguments. Another is to present so many bad arguments that no one could possibly have the time or expertise to refute them all—creating the impression that the theory is logically irrefutable. I am afraid that Ray Kurzweil’s project is of the latter sort, and his letter here is representative. He claims that knowledge in brain science, life science, and information science is advancing rapidly enough that we will soon have the ability to replicate human intelligence in an artificial substrate. Unfortunately for him, the evidence he brings does not support this contention—not by a long shot.
To begin with, Mr. Kurzweil makes a fundamental error in assuming that the quantity and precision of information are directly related to its theoretical utility. Even if it is true that we have twice as much data about how the brain looks as we did last year, it is manifestly not the case that we have twice as good an understanding of how it works. The increasing “resolution of brain scans” to which Mr. Kurzweil refers is similarly meaningless—and not just because many of the higher-resolution modes cannot safely be used with humans. What matters is whether we can use available brain-scanning techniques to answer interesting questions about the nature of thought. At the moment, the questions amenable to brain imaging seem quite limited.
Since my own research interests involve the neuroscience of higher mental functions, it would be rather masochistic of me to believe that the brain is so hopelessly complex that we can never understand it. Nor do I believe that artificial intelligence (AI) is impossible in principle; unlike Harry E. Thayer, I do not subscribe to Roger Penrose’s argument that consciousness cannot be understood within the framework of contemporary physics. As David Gelernter points out, the feasibility of AI is ultimately an empirical question, and it is not inconceivable that we will some day discover the basic elements of human thought.
On the other hand, I find little encouragement in the examples heralded by Mr. Kurzweil. He points us to the cerebellum, a structurally primitive part of the brain that is crucial for the maintenance of posture and the execution of fine movements. (Contrary to what he writes, it is not “where we do our skill formation,” though it probably plays some role in that process.) Compared to the cerebral cortex, generally thought to be the seat of cognition, the cerebellum is actually very simple. It comprises only a handful of neuronal cell types, all linked up rather neatly in a repeating pattern that was first described over 100 years ago. The most intriguing thing about the cerebellum is not the wiring pattern itself, but the patterns of input and output—in other words, its actual contribution to thought—and this, Mr. Kurzweil concedes, we largely do not know.
If computers are going to help us figure out the mysteries of the brain, we will have to know how to program them to do so. Mr. Gelernter notes correctly that this is a problem we barely know how to approach. So far, efforts at modeling different parts of the brain (like the cerebellum) have served only to confirm the plausibility of existing theories about their function, and have not revealed anything radically new. Likewise, research in AI has shown nothing more than that certain tasks performed by humans can also be performed by computers.
In conventional AI, the human programmer provides the computer with a set of inputs, a set of possible outputs, and a feedback mechanism that allows the computer to compare its mapping of inputs and outputs to some pre-defined target. The computer then runs through many cycles of input-output mappings, refining its procedure each time until a reasonable approximation of the target is achieved. This process has made possible an enormous number of practical applications, but whether one calls it “machine learning” or “trial and error,” it has not shed much light on the nature of human intelligence. Many of the abilities that define our species—like language—cannot be reduced to simple pattern recognition.
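For readers curious what this loop looks like in practice, here is a minimal sketch of the process just described—a toy perceptron that cycles through input-output mappings, compares each output to a predefined target, and refines its procedure after every mistake. The data and names are purely illustrative; they do not come from any system discussed in this exchange.

```python
# Toy illustration of the conventional-AI loop: inputs, possible outputs,
# and a feedback mechanism comparing the machine's mapping to a target.

def train_perceptron(samples, targets, epochs=20, lr=0.1):
    """Refine a linear rule over many cycles until it approximates the target mapping."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):                       # many cycles of input-output mappings
        for x, t in zip(samples, targets):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = t - y                         # feedback: compare output to target
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error                       # refine the procedure each time
    return w, b

def predict(w, b, x):
    """Apply the learned rule to a new input."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# A linearly separable toy problem: logical OR.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]
w, b = train_perceptron(samples, targets)
```

The point of the sketch is precisely the one made above: the machine ends up with a workable mapping, but nothing in the procedure resembles an understanding of what the inputs mean.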
The Google machine-translation project is (perhaps ironically) an excellent illustration of this point. As Mr. Kurzweil indicates, Google’s program works by extracting statistical correlations between sets of texts that are fed to it: the more texts it analyzes, the bigger its effective vocabulary becomes. Yet even after having analyzed millions of texts, the system’s performance is mediocre at best: on a scale that rates machine translations from 0 to 1, where 1 is the statistical equivalent of a human translation, Google’s Arabic-English translator scores around .52. Moreover, the Google system has no capability to represent things like grammatical rules, so its translations bear no more than a superficial resemblance to human language. Compare this to the average five-year-old, who can acquire a perfect working knowledge of any number of languages after only a few months’ exposure to playground chatter.
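To make concrete what “extracting statistical correlations between sets of texts” means, here is a deliberately crude sketch: count how often word pairs co-occur across aligned sentence pairs, then translate each word by its strongest correlate. The four-sentence English-French corpus is invented for illustration; real statistical systems train on millions of sentence pairs, but the principle—and the absence of any grammatical representation—is the same.

```python
from collections import Counter

# Invented toy parallel corpus ("Rosetta stone" texts) for illustration only.
parallel_corpus = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("blue house", "maison bleue"),
    ("blue car", "voiture bleue"),
]

# Count every source/target word pairing within each aligned sentence pair.
cooccur = Counter()
for src, tgt in parallel_corpus:
    for s in src.split():
        for t in tgt.split():
            cooccur[(s, t)] += 1

def translate_word(word):
    """Pick the target word most strongly correlated with `word`."""
    candidates = [(count, t) for (s, t), count in cooccur.items() if s == word]
    return max(candidates)[1] if candidates else word
```

Words that consistently appear together across translations win out over incidental pairings, so “house” maps to “maison.” But the system has no notion of syntax, agreement, or word order, which is why purely statistical output bears only a superficial resemblance to human language.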
Mr. Kurzweil’s comments about biotechnology are similarly misguided. He points to the quick sequencing of the SARS genome as evidence that biology is becoming subject to his “Law of Accelerating Returns,” but there was no exponential function at work there: the invention and automation of a process called the polymerase chain reaction in the late 1980’s and early 90’s made it cheap and easy to sequence nucleic acids. Even so, the knowledge gained from molecular genetics has not been easy to translate into treatments and therapies; sequencing the SARS genome has led to better tests to detect the virus, but not to any breakthroughs in treating or preventing it. So-called rational drug design has had only one notable success: Novartis’s Gleevec, which can arrest (but not cure) a specific form of leukemia. At the same time, “information era” pharmacology has produced its share of disasters, like Tysabri, a multiple-sclerosis drug that turned out to weaken immune defenses against a devastating brain infection.
Mr. Kurzweil’s argument is so riddled with factual and logical errors that it is difficult to take it seriously, and I thank David Gelernter for calling attention to several that I missed. The one great virtue of Mr. Kurzweil’s predictions is that they are time-specific: in 25 or so years, we will be able to judge whether he is right. For now, he has given us little reason to believe that he is.