Turing's Man, by J. David Bolter; The Second Self, by Sherry Turkle
Computers and Us

Turing's Man.
by J. David Bolter.
University of North Carolina Press. 264 pp. $19.95.
The Second Self.
by Sherry Turkle.
Simon & Schuster. 362 pp. $17.95.
Despite the apocalyptic expectations aroused by the ever more awesome data-handling capabilities of electronic computers since the first ponderous models were constructed around the time of World War II, the effects of computers on our everyday life still remain far less visible than those produced by many earlier and more rapidly domesticated technologies of the past century. Forty years after their invention, the telephone, the automobile, radio and television, not to mention the humble electric light bulb, had all brought about much more radical transformations in the way people communicated, the geography of their cities, their use of leisure time, and indeed in the whole texture of modern life, than has the computer.
The first few generations of computers did of course change the mechanics of many vital commercial activities, from inventory management to customer billing, and did increase the rapidity with which many kinds of information, from airline schedules to census data, could be stored, manipulated, and disseminated. But the relationship between the computers themselves and the population at large, even that segment that depended on their output, remained quite remote. By the 1970's, however, a growing number of users could communicate with computers by entering requests and receiving responses instantly through terminals. More recently, a rapidly evolving technology has multiplied the capabilities and shrunk the size of microelectronic logic and memory devices, so that today, hands-on contact with some offshoot of computer technology—mostly through such special-purpose machines as self-service bank tellers, electronic cash registers, word processors, and video games—is virtually unavoidable.
The culmination of this trend has occurred with the development of the personal computer and its less sophisticated sibling, the home computer. These microcomputers, smaller in size but with greater capabilities than mainframes of two decades ago, have found their chief justification in business applications, allowing executives to perform large numbers of financial and other calculations speedily and independently of their companies’ central data-processing departments, and professionals and small businessmen to handle medium-sized billing and accounting functions. The other uses to which they can be put, most notably the playing of games, have led in many cases to a desire to learn more about programming than the rote skills required by the prepackaged software with which the computers come equipped.
As the nature and use of computers have changed, so has the focus of interest of those concerned with assaying their social impact. We have not heard much lately of the once-familiar fear that large computers were a danger to society because of their potential for consolidating huge amounts of data on the private lives of citizens. (Indeed, what limited steps have been taken in this general direction, as in the matching of social-security numbers of government employees with those of recipients of welfare, have been greeted with approbation.) Today’s writings seek instead to explore how the growing interaction between individuals and computers will affect the human sensibility. Not surprisingly, the conclusions drawn generally tell us a good deal about the sensibilities of their authors.
In Turing’s Man, J. David Bolter, a classicist with a graduate degree in computer science, finds in the use of computers a partial return to the modes of thought of the classical era, whose harmonious world of human-centered craftsmanship was almost destroyed by the successive onslaughts of medieval religious philosophy, Enlightenment scientific determinism, and modern industrialism. According to Bolter, the computer, by its constant reminder of the finite availability of time and memory-space, imposes on its user a sense of the limits of human progress and worldly endeavor just when people are most in need of such a sense.
On the way to making this case, Bolter offers a number of useful insights into the nature of language, the relationship between technology and civilization, and similar issues. What could have been a modestly successful essay in this regard, however, is spoiled by his eagerness to tell us everything he has learned about computers, resorting on occasion to semi-incomprehensible diagrams, and to press upon us his jejune version of various fashionable concerns of the 1970's. His suggestion that the computer will make us aware of the need to match our activities to our resources is startling in its naiveté. One would expect even a classicist to know that this is the central concern of the discipline of economics, apart from being a basic fact of life.
Sherry Turkle is a psychologist, and her book, The Second Self, treats the computer as a psychological probe with which to elicit from its users their beliefs concerning the nature of mind, the difference between human beings and machines, and the relationship between thinking and feeling. Her observations of elementary-school children, adolescents, college students, computer “hackers,” and veteran researchers in the field of artificial intelligence lead her to conclude that as computers take on an increasing prominence in the everyday life of society, more and more of us will paradoxically come to learn that it is not the ability to think that distinguishes the human race, but those emotional qualities that cannot be expressed in rational terms. It is disappointing—though perhaps inevitable given the age distribution of those she interviewed—that most of the thoughts recorded by Turkle along these lines are childish, sophomoric, or myopic. Turkle herself, however, reflects perceptively on the way in which the development of machines that can perform mental as well as physical tasks is likely to lead to a diffusion into common discourse of new psychological theories, much along the lines that vulgarized Freudian theory did earlier.
In their desire to discover how the growing role of computers will affect general attitudes, both Turkle and Bolter have felt it necessary to enter the world of those most closely involved with today's computers. The major problem with this approach is its assumption that what is true of today's machines and users will essentially remain true when almost everyone has ready access to computers with capabilities much greater than today's. But consider the degree of technical involvement needed by the early pioneers of flight with their primitive aircraft as compared with today's typical airline passenger in the vastly more sophisticated airliner in which he flies; compare the 1920's radio amateur and his crystal set with today's listener who merely pushes buttons on his AM-FM stereo car radio as he drives. So too, as today's computer professional, be he obsessed "hacker" or IBM engineer, spends hours increasing the internal complexity of his machine's hardware or software, the involvement that will be required of tomorrow's casual user is correspondingly diminished. If the computers of the future are to be as important to life as is generally assumed, both the amount of effort required to use them and the philosophical "weight" attached to their use will be small indeed.
Even today, most involvement with computers is utilitarian in motive, from the electrical-engineering student who sees a lucrative career opportunity in the burgeoning high-tech sector, to the typist (or editor) who needs to acquire word-processing skills, to the financial analyst who needs to be able to run an electronic spread sheet. For some fraction of people, the mechanics of the new skill will become fascinating and they will spend an increasing amount of time developing it; others will learn only as much as they need to; still others will find the whole exercise so distasteful that they will try to avoid it altogether.
The first group is, naturally, the one on which most of today's writers focus, but the second and third are probably larger. And even among those who do become seriously interested, the period of intense emotional involvement is likely to be brief, in the familiar pattern of adolescent infatuation. The first mass involvement of people and computers, the video-game craze of the early 1980's, produced a widespread and wholly disproportionate response in which there was much bandying about of such terms as depersonalization, alienation, and other familiar bugbears of pop sociology; but the phenomenon itself faded away, like a thousand other passing fads which were similarly decried in their time.
If only because computers are themselves so diverse, no generalization can describe how they will affect either their users or society in general. What they all have in common, from the massive number-crunching supercomputers used for research in geophysics and meteorology (and weapons simulation) to the microprocessors that simultaneously minimize the fuel consumption and the pollutant emissions of the newest automobile engines, is the astonishing ability to perform well-defined tasks repetitively, accurately, and above all rapidly, combined with an obedience, stolidity, and frustrating literal-mindedness totally alien to the human psyche. Those tasks which a computer is instructed to do logically and comprehensively, it can do amazingly well; but any ambiguity or inconsistency will lead in short order to failure. When people know how to solve a problem and can formulate the necessary steps clearly, the computer can be programmed to do the job, as has been demonstrated by the so-called expert systems of the artificial-intelligence industry, which have achieved a measure of success in such areas as routine medical diagnosis. But where there is a need for intuition, creativity, or even common sense in the formulation or solution of a new kind of problem, a human novice is better than the most expert computer.
The uses to which today's computers can be put, then, are limited to those which can be taught them by humans. The musings of computer scientists about computers that will be able to teach themselves are not likely to be translated soon, if ever, into real machines whose workings resemble those of the human mind. Computers have indeed been programmed to play chess successfully, and have reached a level at which they can defeat most skilled players, but the style in which the machines play, inexorable, clumsy, but blunder-proof, does not at all resemble that of human players. Even in this highly structured and strictly rule-determined application, programmers have been unable to mimic the unknown way in which the human mind actually tackles problems. Some thirty-five years ago, Alan Turing provided the functional definition of an intelligent machine: one which, placed in a closed room and fed a series of random questions, would be capable of supplying answers not essentially different from those which might be expected from a human. Highly circumscribed though this function is, it is far beyond the capabilities of any computer yet built.
Needless to say, philosophers both amateur and professional have had a field day arguing whether it would be appropriate to describe such a machine, were it possible to construct one, as thinking in the same way as a human. But given the gap between this purely hypothetical intelligence and any computer anyone actually knows how to build, the question is metaphysical and practically irrelevant. Meanwhile, as today’s more prosaic computers prove able to take up a growing number of tasks to which they are better suited than humans, their users, enthusiastic or reluctant, frequently characterize them in anthropomorphic terms; but this oft-noted fact has less to do with the peculiar nature of computers than with a general human propensity reflected in the naming of such undeniably nonhuman objects as pets, vehicles, and hurricanes.
Significant new technologies do often result in changes in human perception, as is noted by both Bolter and Turkle, but the underlying reality that is perceived is not altered. In this respect, perhaps the most valuable contribution of the computer will be as a source of fruitful new analogies for old problems. In the 19th century, the great moral thinker Rabbi Israel Salanter drew three lessons from the most visible new technology of that era, the railroad: a single instant of delay is enough to miss the train; the slightest deviation from the tracks means disaster; and whoever rides without a ticket will eventually have to pay. From the computer one can draw an analogous set of lessons: the more memory, the better; without logic, memory is useless; and to achieve a result, you need a good program.