Computers & Poets
To the Editor:
William Barrett’s article, “Why Computers Can’t Be Poets” [April], shows a fundamental misunderstanding of Alan Turing’s famous essay, “Can a Machine Think?”
Mr. Barrett begins his argument with a discussion of the Turing test, an attempt (by Turing) to define an objective criterion to determine whether or not a machine exhibits intelligence. Turing felt that before he could discuss the question of whether or not a machine can think, he had to describe what “thinking” meant. In this way he could avoid a semantic argument and get right to the crux of the question at hand.
The Turing test, as Mr. Barrett accurately describes it, has an examiner in one room communicating with a computer and a human being in two other rooms; the examiner must decide which respondent is the computer and which is the human. If the examiner consistently fails to guess correctly, then the machine is exhibiting intelligence. This common ground was to be, and remains today, the basis for discussing whether a machine can think. In arguing for the test, Turing presents a hypothetical conversation between the examiner and the computer, in which the computer defends the style of a poem it had written. Turing’s point is that if the machine were able to answer such questions, then regardless of how it managed to do so, we would have to admit that it was “thinking.”
Mr. Barrett at this point claims that Turing is not handling the question at issue, which is “the ability [of the computer] to produce the poem in the first place.” I am afraid, however, that it is Mr. Barrett who has missed the question and with it the whole point in this section of Turing’s essay. The point is that the Turing test is a reasonable definition of thinking. Turing does not conclude here, as Mr. Barrett implies, that the computer can write poems; instead, all he claims is that if it could write poems and then fool an examiner by talking about its use of style (i.e., pass the Turing test), then it would be exhibiting intelligence. I doubt that Mr. Barrett has any argument with this.
We can then move on to Mr. Barrett’s question, which is whether a machine will ever be able to pass such a test. . . . I refer the reader to Turing’s essay, which contains many compelling arguments in favor of the machine’s ability one day to pass the Turing test. The essay is reprinted in Volume IV of The World of Mathematics, a wonderful collection of mathematical literature, edited by James R. Newman (Simon and Schuster, 1956). It is unfortunate that Mr. Barrett does not discuss any of these arguments. . . . He does, however, correctly describe Turing’s belief that in fifty years (from 1945), when a machine would have as many memory cells as the brain has neurons (approximately 10⁹), it would be able to pass the test. Unfortunately, even here he errs by writing 109 rather than 10⁹. I assume that this is a typographical error rather than a conceptual misunderstanding.
We can let Turing’s essay speak for itself, so let us now consider Mr. Barrett’s own arguments against the possibility of the machine passing the Turing test. He writes: “The writing of a poem is not merely the combination of discrete units of language.” Among other things, it requires “creative sensitivity toward the living language” and a “unique historical sense.” He continues: “It is hard to see how one could install these qualities of mind in a machine.”
An excellent point. Of course it is hard. This is precisely why neurobiologists, psychologists, and computer scientists are spending their lives trying to solve this problem (or at least small pieces of it). The alternative is to give up trying to understand how we do what we do.
It is natural for us to feel that our own creative activities (writing poetry, composing music, or discovering scientific principles) must be more than a manipulation of discrete symbols. But if we suppress our self-centered pride for a moment, we can ask ourselves: is there not some underlying mechanical structure to our creative thoughts? Surely something mechanical (or at least electrochemical) happens in our brains.
If we admit this much, then the real problem, of course, is trying to discern the relationship between the electrochemical firings of our neurons and the finished poem. It is this problem that must be solved (not denied) before a computer can pass the Turing test. Unfortunately, this is quite a tall order. It is like trying to describe a hurricane by pointing to the paths of each molecule of water in the heavens. There may be a relationship, but we must put in a lot of effort to make it comprehensible. . . .
Mr. Barrett writes: “For a man whose mind had been continuously engaged with the question of how the machine might guide and regulate life, he [Turing] seems to have been sadly incapable of managing his own.” This is perhaps the most misinformed sentence in the article. Turing was engaged in the effort to bridge the gap between physical mind and creative thought, not in the technocratic preoccupation with how a machine might guide and regulate life. As to whether he was capable of managing his own life, I can only recommend to Mr. Barrett a recent biography by Andrew Hodges, entitled Alan Turing: The Enigma, . . . that gives an honest picture of the great yet tragic life of the scientist. . . . Whether Turing’s suicide was a brave response or a cowardly escape, his short life left the world some of the most important foundations of computer science.
To try to discern how we create is an honorable human endeavor; to deny the validity of this effort is vanity in its most dangerous form.
To the Editor:
William Barrett writes that the storage capacity of modern computers approaches 109 bits. Even a small personal computer has a capacity of at least a few hundred thousand bits. Evidently Mr. Barrett knows nothing about computers; if a computer were asked to write a poem about his article, we might expect to get something like this:
There was a professor named
Didn’t know a computer from
He was so out of his wits,
Thought the limit was 109 bits.
Barrett should go back to
his garret. . . .
Bronx, New York
To the Editor:
For someone as profoundly distrustful of computers as he is, William Barrett has taken surprisingly little trouble to learn much about them, either technically or in terms of how they are currently being used in the arts, most notably in the visual arts and music. Shrewdly, he has confined his case against computers to poetry, a field in which, owing to the lack of a close and indispensable connection with technology, their application may indeed prove relatively unproductive even over the long term.
Let me then provide some of the rudimentary technical information Mr. Barrett left out, and then indicate how computer graphics, artificial intelligence, and other computer-related technologies and disciplines are being successfully applied to both creative art and research in art theory and aesthetics. I hope thus to make it easier to understand why those of us deeply involved with computers, despite the inevitable difficulties and frustrations associated with them, are unable to view them as gloomily as does Mr. Barrett.
For Mr. Barrett the basic—possibly the only—resource of a powerful computer is the size of its memory, the number of bits of information it is capable of storing. . . . The phrase “storage capacity” is used four times in the article, with no mention of speed as an equally important consideration in the operation of computers, or of how computer memory is intricately organized and allocated to different functions. There is not even a hint that Mr. Barrett knows the difference between read-only memory (ROM) and random-access memory (RAM)—a distinction known to millions who own personal computers and to multitudes of bright children as well (not to mention the writers—including poets—who have taken to word processors and the use of thesaurus-like programs).
Linked to his notion of computers as nothing but memory and hardware is Mr. Barrett’s failure to consider software and the role of programmers in designing and coding this software. Finally, there is no mention of the contribution of those who commission, purchase, or otherwise use the software that makes the hardware work. Sensing, one presumes, that any mention of the role of human beings in the design, use, and improvement of computers would soften his picture of computers as profoundly non-human and anti-human, he leaves out the human factor entirely.
If a computer is programmed to write poetry, whether a superficial parody of the real thing or at the level of the great poets, the program is of the class known as a simulation. Programs of this kind are essentially an abstract representation—a model—of real-life phenomena. . . .
If the purpose of a program is to simulate everything involved in the process of creating great poetry, in a particular style, this “everything” must be identified and modeled, an accomplishment that may indeed prove impossible after centuries or thousands of years of dedication to the task. But a simulation need not be perfect to be useful, or to register an impressive performance in a field in which precise information—whether qualitative or quantitative—may be difficult to formulate. A computer, in sum, need not produce masterpieces to improve our understanding of art.
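How instructive even a crude simulation can be is easy to suggest. A bigram model trained on a scrap of verse (a line of Eliot, chosen purely for illustration) captures nothing but a statistical shadow of a style—word-adjacency and no more—yet it already lets one study which combinations of discrete units the style permits. The code below is a sketch of that idea, not a serious model:

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, rng):
    """Walk the bigram model, falling back to a random word at dead ends."""
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        out.append(rng.choice(followers) if followers else rng.choice(list(model)))
    return " ".join(out)

corpus = ("april is the cruellest month breeding "
          "lilacs out of the dead land mixing "
          "memory and desire stirring dull roots with spring rain")
model = build_bigrams(corpus)
line = generate(model, "the", 8, random.Random(0))
print(line)
```

Every word the program emits comes from its tiny corpus; the “style” it simulates is only the pattern of which word may follow which.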
An example of how the computer is being used in the study of stylistics and formal structure in the visual arts is provided by Joan and Russell Kirsch, a husband-and-wife team of artist and computer scientist who have taken on the formidable task of preparing a program that analyzes and simulates the work of Richard Diebenkorn, one of the most important and respected contemporary painters. While this program—a kind of grammar pertaining to shape and the segmentation of the picture into large, well-proportioned areas—has yet to incorporate color, brush-work, and other important features of Diebenkorn’s work, it is a significant harbinger of a completely new approach to the analysis of style, visual syntax, and the principles of formal abstract design and composition.
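A grammar of shape and segmentation can be suggested in miniature: a rule that recursively splits a picture rectangle into large, well-proportioned areas, keeping every cut away from the edges. The split ratios and recursion depth below are my own inventions for illustration; the actual grammar the Kirsches are building is far richer than this sketch.

```python
import random

def subdivide(rect, depth, rng, min_ratio=0.3):
    """Recursively split a rectangle (x, y, w, h) into panels,
    alternating vertical and horizontal cuts at a bounded ratio
    so that every panel stays reasonably proportioned."""
    x, y, w, h = rect
    if depth == 0:
        return [rect]
    r = rng.uniform(min_ratio, 1 - min_ratio)  # keep cuts away from the edges
    if depth % 2:  # vertical cut
        a = (x, y, w * r, h)
        b = (x + w * r, y, w * (1 - r), h)
    else:          # horizontal cut
        a = (x, y, w, h * r)
        b = (x, y + h * r, w, h * (1 - r))
    return subdivide(a, depth - 1, rng) + subdivide(b, depth - 1, rng)

panels = subdivide((0.0, 0.0, 1.0, 1.0), 3, random.Random(1))
print(len(panels), "panels")
```

Three levels of splitting yield eight panels that exactly tile the original rectangle; varying the ratios generates a family of compositions from one small set of rules.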
While the simulation of the poems and paintings for research is only beginning, computer-graphic programs that encode and simulate styles of visual art have been written and used by artists for over twenty years—indeed, for over twenty-five years, since the roots of this approach go back to the German artists, aestheticians, and computer scientists who in the early 60s founded the movement known as Generative Art. In this country, the group of artists most closely associated with this serial-robotic approach to computer art includes Manfred Mohr, Edward Zajac, Colette and Jeff Bangert, Harold Cohen, and myself.
Of this group the best known is Harold Cohen, a British artist associated with the University of California at San Diego since 1968. Apart from his reputation as an artist, Cohen has also become known and respected in the field of artificial intelligence for Aaron, a simulation-type program that was begun in 1972 and continues to evolve and demonstrate “talent” much as if it were a living artist. In this program, Cohen has basically created a model of himself as an artist; the program, as a quasi-autonomous agent, creates black-and-white drawings using felt-tip pens much as Cohen himself might do if he held the pens in his own hand. The drawings are made as a small drawing machine called a “turtle” slowly moves about on a large piece of paper or canvas laid on the floor. Or, five or more drawings are produced simultaneously by an array of plotters attached to a computer. Though Mr. Barrett claims that, unlike a human poet, a computer cannot continue to improve and change, Aaron has continued to improve and become more sophisticated as Cohen himself grows as an artist and continues to expand his program.
I realize that there is only a slight chance that the information supplied here will carry much weight with Mr. Barrett. For his grudge against the computer—as his book, Death of the Soul, clearly documents—is grounded in a sweeping devaluation of the role of science, technology, and rationalism in Western civilization over the last three or four centuries. On this premise he is justified in condemning computers and their application to any form of art, since they are indeed a product and extension of scientific rationalism and the cluster of values that both grow out of it and support it.
Department of Art
University of Massachusetts
To the Editor:
In “Why Computers Can’t Be Poets,” William Barrett writes of the professional and personal life of Alan Turing, the English mathematician and logician who cracked the Nazis’ cipher code, “thus freeing a good deal of Allied shipping from the menace of the U-boats” (and considerably advancing the date of the Allied victory).
Mr. Barrett writes that Turing’s life after the war “became rather beclouded and unhappy” because “he insisted on being an indiscreet homosexual and fell afoul of the authorities,” and that he killed himself in 1954 at the age of forty-one. These facts are true as far as they go, but by omitting the actual role played by the “authorities” in Turing’s suicide, this account might considerably mislead a reader.
In 1952 an ex-lover burglarized Turing’s house. Turing reported the incident to the police and, in the course of giving his deposition, revealed his homosexuality. This being a criminal offense in England at the time, the police were forced to take action. Douglas R. Hofstadter, reviewing Andrew Hodges’s Alan Turing: The Enigma in the November 13, 1983, New York Times Book Review, writes:
Turing was found guilty of homosexuality and sentenced to “treatment” rather than jail. Regular injections of female sex hormones were given to him to quell his sex drive. Turing did not want to try to use any of his connections in government or the academic world to mitigate his sentence and he simply endured it, growing breasts and being rendered impotent by the time the treatment was ended in a year.
This was the situation that caused Turing to take his own life in 1954 by eating an apple he had coated with cyanide. He tried to stage a suicide that would appear to be a laboratory accident, in order to spare his mother unnecessary grief.
Turing was an unoffending man whose genius saved countless lives. His death should not be blamed on his indiscretion but on the moral stupidity of those who drove him to suicide.
Carol F. Jochnowitz, George Jochnowitz
New York City
To the Editor:
I agree with William Barrett that computers can’t be poets. However, I think that while Mr. Barrett circles the reason this is so, he never quite hits it.
Computers cannot know anything we do not tell them. The fact is that we simply don’t know what makes a beautiful poem or a good poet. We can identify a good poem after it has been created and isolate various elements which make it good. But we cannot tell a computer all the elements which will make its poem a good one and also instruct it in how to integrate those elements.
The limitations on our ability to make computers perform specific tasks are ultimately the same as the limitations on our ability to understand the particular processes underlying those tasks. When human beings understand the creative process so totally that we can . . . make any intelligent, sensitive person a Shakespeare, then we will probably be able to make computers into Shakespeares too.
New York City
To the Editor:
As one who makes his living designing computerized information systems, I would like to commend William Barrett for “Why Computers Can’t Be Poets.” Both popular and technical literature abound in silly references to “artificial intelligence.” Mr. Barrett has provided a much-needed corrective.
A computer is a wholly material object designed (by man, if it is not too churlish to mention) so as to operate as a mechanical processor of data. Given (by man) a set of data and a formally-defined rule to be applied to the data, a computer can perform the defined transformation. The question is whether human thought is entirely reducible to this mechanical execution of algorithmic procedures. The partisans of artificial intelligence answer yes, but, as Mr. Barrett points out, that assertion itself is a philosophic statement, which is to say, super-mechanical.
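The point can be reduced to a few lines of illustrative code: data in, a formally defined rule applied, transformed data out, and nothing contributed by the machine beyond the rule it was given. (The rule here—a three-place letter shift—is an arbitrary choice of mine.)

```python
def apply_rule(data, rule):
    """A computer reduced to its essentials: it applies the given rule
    to the given data and contributes nothing of its own."""
    return [rule(item) for item in data]

def caesar_shift(ch):
    """An arbitrary formally-defined rule: shift each letter three places."""
    return chr((ord(ch) - ord("a") + 3) % 26 + ord("a"))

result = apply_rule(list("poet"), caesar_shift)
print("".join(result))  # "srhw"
```

Whether every human thought reduces to such rule-following is precisely the question at issue; the transformation itself plainly does.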
One might further ask the artificial intelligentsia (to use Joseph Weizenbaum’s apt label) how, if all thought is but the effusion of a uniform material process, it comes to pass that a materialist’s mind/brain/computer could emit an opinion different from one issued by Mr. Barrett’s mind/brain/computer? They might answer that the Barrett machine suffers from erroneous input or faulty programming, but such an evaluation can only be made from a super-mechanical vantage point.
A human being can make evaluations which transcend the tautological realm of mechanical data transformations because he is not wholly material. He partakes of a super-material reality which enables him to discern meanings, appreciate beauty, apprehend truth, and make moral choices.
Daniel Love Glazer
William Barrett writes:
I had not expected the responses, either in volume or tone, to my piece, which seemed to me obvious and innocuous enough. But I realize that we live now in a cultural climate where, at least in some quarters, the computer has become something of a sacred icon not to be profaned by any suggestion of its limitations.
First, let me point out that the mistake so many of the correspondents noted—109 bits rather than 10⁹—was indeed a typographical error. I should also make it quite plain at the start that I have nothing against the computer as a piece of technology—quite the contrary. What I find fault with are the ideologies and fantasies that have sprung up around this instrument. What we have is science fiction masquerading as science. In most cases it is old-fashioned mechanism masquerading under new colors. The confusion is all the worse because prevailing philosophies of mathematics and of mind have been far from satisfactory.
Perhaps my point could have been developed just as well in relation to mathematics as to poetry. (Poetry itself is rather a suspect item to some computer minds.) The development of mathematics depends on the free constructions of the human mind. This the computer cannot do. Is there a last prime number or not? If we had only the computer, we could set it grinding out larger and larger primes on and on, but getting nowhere. The mind of the mathematician, however, constructs a simple set that shows there can be no last prime. Or consider the far more audacious construction of the differential calculus. Now that the human mind has constructed it, the computer can be used in calculations with it. Let us not underestimate the essential creativity of the human mind.
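The contrast can be made concrete. Set the machine enumerating primes and it produces ever more of them without ever settling the question; Euclid’s construction—multiply any finite list of primes and add one—settles it in a stroke. The sketch below (mine, using crude trial division) shows both routes side by side:

```python
def is_prime(n):
    """Trial division, adequate for small numbers."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# The machine's route: enumerate primes one by one, forever inconclusive.
primes = [n for n in range(2, 50) if is_prime(n)]

# Euclid's construction: the product of any finite list of primes, plus one,
# is divisible by none of them -- so some prime is missing from the list.
euclid = 1
for p in primes:
    euclid *= p
euclid += 1
assert all(euclid % p != 0 for p in primes)
print(len(primes), "primes found; no finite run can show there is no last one")
```

The enumeration could run to any bound and prove nothing about infinitude; the construction proves it for every possible finite list at once.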
To come back to poetry for one final point. As a reader of T.S. Eliot over the years, I have long since come to the conclusion that the body of Eliot’s poetry could only have been written by Eliot himself and nobody else. What now if we consider the assumption that this poetry could have been written by a computer? Well, the machine would have to be Eliot himself! I think I may now leave matters at this final absurdity.