An Historical Overview
All academics—natural scientists, social scientists, and humanists—have traditionally avoided engaging seriously with IT, either by minimizing its significance as a mere “tool” or by uncritically embracing its real and imagined benefits. This is because IT challenges fundamental assumptions about the social value accorded to organized human inquiry and even “higher order” thought. As computers are increasingly integrated into society, it is no longer clear whether they should be regarded primarily as tools for human beings to acquire knowledge or as themselves exemplary bearers of knowledge against which to judge human performance. This point strikes at the heart of the entire social science enterprise, which not only aims to produce knowledge but also takes knowledge production as itself an object of study. Daniel Bell is only a relatively recent example of sociological enthusiasm for technology, which stretches back to the origins of the discipline, when its French founders Count Saint-Simon and Auguste Comte (also the first knowledge management gurus) made lavish claims for industrial technology’s ability to restore the order that had been lost with the decline of the Church and the landed aristocracy.
However, one respect in which our situation differs from that of technology’s 19th-century sociological promoters is that the new information technology poses deep problems at the level of what philosophers call ontology, the essential nature of things. At risk now is the very possibility of social science. Specifically, the idea of a “social science” presupposes that people possess something that is by nature lacking in other things. The possession of higher mental capacities has traditionally given human beings their unique nature, yet these capacities are precisely the ones that computers are designed to simulate. So, it would seem that if computers literally think, in the sense that airplanes literally fly, then social science has lost its raison d’être. We would have to admit either that social science has no proper domain of inquiry (i.e., humanity has no essence) or that at least certain computers belong to that domain. Although either prospect seems drastic, both nowadays claim significant support.
On the one hand, Darwinian scruples have led biologists to doubt that certain traits (e.g., the possession of language) are exclusively the property of a single species, thereby subverting one historically important justification for a “science of man.” Thus, social science may be ultimately subsumed under either a “sociobiology” or a still more inclusive “science of the artificial” (Simon 1981) that would distinguish, at the physical level, between carbon-based and silicon-based forms of intelligence (i.e., animals versus computers). On the other hand, the past few decades have witnessed the blurring of once-intuitive ontological distinctions among “human,” “animal,”
and “machine.” Heightened environmental awareness, combined with more powerful computers sporting user-friendly interfaces and the advent of computationally generated life forms, has not only expanded the circle of beings accorded social standing but in some cases has even made it difficult to distinguish among such beings. The image of the “cyborg,” a science-fictional creature melded of human, animal, and machine traits, symbolizes this perspective (Haraway 1991). It is the second horn of the dilemma, I believe, that most directly challenges the future of social science.
The ambiguous sociological status of information technology is epitomized in the contrasting definitions that British and American intellectual property law give to the word “computer.” Whereas British law treats the word as metaphorically referring to a machine that enhances computational capacities that are seen as properly belonging to humans, American law regards the “computer” as a machine that literally performs computations, perhaps even ones that are beyond human capacities (Phillips and Firth 1995, 40–42). In British law computers are clearly treated as tools subservient to their human users, but in American law they are granted a more elevated status, comparable to an agent or standard. This difference can be explored in depth with the help of a few distinctions, which are outlined in Figure 3.2.
Computers can either provide a means of attaining knowledge or be themselves exemplary bearers of knowledge. In sociological terms, they figure in relationships that exhibit, respectively, functional and substantive modes of rationality (Mannheim 1940). Such a technology can serve, on the one hand, as a tool for enhancing human performance or, on the other, as a standard for evaluating human performance.
               MODE OF           PRINTED               ELECTRONIC
               RATIONALITY       WORD                  WORD
STANDARD       Substantive       Sacred text           General problem solver
AGENT          Communicative     Expert opinion        Expert system
TOOL           Functional        Popular literature    Personal computers

Figure 3.2. The social relationships of information technologies
A rough-and-ready way of telling the difference between these two modes is to look at what happens when the user and the technology are mismatched. Where does one place the blame—on the technology’s inadequacy to the user’s needs (functional) or on the user’s failure to grasp the technology’s design (substantive)? But there is also an intermediate stage. Here the information technology may have been designed as a tool, but the resources invested in both its design and use may make its users reluctant to evaluate the tool’s utility simply in terms of its ability to satisfy their immediate needs. Rather, the dissonance produced by the tool may be interpreted as much as an opportunity for the users to reassess their own needs as a reason to discard the tool. In that case, the user implicitly treats the information technology as an agent with which to participate in what, following Habermas (1984), might be called communicative rationality.
However, technology’s shifting social status is not unique to our own information age but was decisive in the cultural transformation wrought by an information technology that predates the electronic medium by nearly half a millennium: printing. A glimpse at the shape of its history will help us understand the much more rapid changes associated with the computer.
Before the moveable type printing press, books were produced and interpreted under highly restricted conditions that required the mediation of monastic scribes whose work was licensed by the Church.
These conditions enabled the texts to function much more as standards than mere tools. Indeed, the most widely transcribed book, the Bible, set the standard for both the conduct of the illiterate masses and the thought of the literate elite. However, moveable type changed all that by allowing for the large-scale production of relatively inexpensive texts that were often written, not in the Church tongue, Latin, but in the “vulgar” languages that readers could readily comprehend. Market forces soon overwhelmed the Church’s efforts at regulating the contents of books, and the spread of literacy meant that books could be read silently to oneself rather than be read aloud by an ecclesiastical authority. Thus, every reader became a potential writer. The monastic middlemen had been eliminated. Moreover, the sheer proliferation of mutually contradictory texts—including several vulgar translations of the Bible itself—dissolved the pretense that one book contained all worthwhile knowledge (Eisenstein 1979). In the two centuries immediately following Gutenberg’s invention, Europe
was successively rocked by the recovery of ancient pagan authorities who challenged the scholasticism of the day (i.e., Renaissance Humanism) and the increased customization of Biblical interpretation to believers’ needs (i.e., the Protestant Reformation).
However, the slide in the book’s social status from standard to tool was accompanied by a crisis of authority: Lacking a universal church to determine whose word to believe, was each reader simply left to his or her own devices (Febvre and Martin 1976)? The most historically significant response to this problem has been the institutionalization of secular expert opinion in the form of scientific and professional communities, whose exclusive authority over a domain of knowledge is licensed by the state. Characteristic of this development are technical journals, access to which involves greater investments of time and money (in training) than simply consulting the relevant expert in person. Although the fallibility of expert opinion belies any hope of restoring a universal authority, the status of expert communities as “lay-clerical” mediators of public opinion and action has not escaped the notice of their critics.
The history of the electronic word retraces much of the plot of the printed word, but in one-tenth the time and with a twist in the last act (Perrolle 1987, 143–146). The first generation of computers in the 1950s—the ones that have been immortalized in science-fiction films of the period—occupied large rooms, though they contained less memory and processing power than today’s average personal computer (PC). Back then computer programming was just as labor-intensive as manuscript transcription was before the printing press.
But instead of securing cool, dry places for storing manuscript parchment and maintaining a ready supply of ink (not to mention steady eyes and hands), the programmers faced the continual challenge of cooling the vacuum tubes operating the computer and rewiring the computer’s components whenever they wanted to change the program. Yet it was also during this period that the most grandiose claims were made for “artificial intelligence” (AI) research: that it would produce an all-purpose machine, a “General Problem Solver”
(GPS) that could resolve any real-world problem into a set of if–then procedures. But as the diffusion of semiconductor technology made successive generations of computers more efficient, the aims of computer design became increasingly customized. The ideal of a GPS located in one self-sufficient machine at the behest of the expert user yielded to vast interlocked networks of PCs, in principle accessible
to anyone equipped with the requisite machinery. Even computer programming languages lost much of their forbidding mathematical character, thereby enabling many users to program their own machines.
In principle, the social space constituted by computerization is a free market for transacting information. In practice, however, the superabundance of information has made it increasingly difficult for users to find what they need in order to act decisively. This problem has two aspects, which shall be taken up sequentially in the rest of this chapter.
The first aspect of the problem is the radical prospect that computerization might commodify expertise, extending the processes of industrial automation from factory work to scientific and professional work. This would open the door to the legal subsumption of more abstract and general forms of knowledge under intellectual property, capitalism’s last frontier. In short, post-industrial society may simply turn out to be the continuation of industrialism by other means. As we have already seen in the earlier sections of this chapter, the most direct route to this outcome is through the development of expert systems, computers whose design requires that the programmer “reverse engineer” human expertise and then repackage it in a format that is user-friendly, economical, and perhaps even more reliable than the simulated human expert. I shall consider this prospect in the next section.
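To make the idea of “reverse engineering” expertise more concrete, the following sketch (in Python) illustrates the kind of if–then rule base on which classic expert systems were built. It is purely illustrative: the rules, facts, and function names are hypothetical examples of my own devising, not drawn from any actual system discussed here.

    # A minimal, purely illustrative rule-based "expert system": human
    # expertise is recast as explicit if-then rules, which a simple
    # forward-chaining loop applies to whatever facts are known.
    # All rules and facts below are hypothetical examples.

    rules = [
        # (conditions that must all hold, conclusion to add)
        ({"fever", "rash"}, "suspect_measles"),
        ({"suspect_measles"}, "recommend_specialist_referral"),
    ]

    def forward_chain(facts, rules):
        """Fire any rule whose conditions are satisfied, adding its
        conclusion, until no new facts can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "rash"}, rules))
    # -> {'fever', 'rash', 'suspect_measles', 'recommend_specialist_referral'}

Once codified in this way, the rules themselves, rather than the human expert from whom they were elicited, become the saleable artifact, which is precisely the commodification of expertise at issue here.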
The second aspect of the problem of mass computerization relates to the shift in computational ideals from the GPS to PCs whose
“power” is measured, not in terms of the information they physically contain, but in terms of their ability to access remote information sources. The result has been to place computer users at the mercy of whoever controls the computer networks. Traditionally, the state has been at the helm, but as more people log on, the mounting costs of network maintenance are making privatization an increasingly attractive proposition. The prospect that the “information superhighway” will evolve into a toll road and the putatively paperless electronic medium will become “pay-per-view” means that the new information technology may end up reinforcing divisions that already exist between the “haves” and “have nots” in society at large. In Section 5, I address this issue at length, in relation to the specific case of professional researchers, for whom there is no “free lunch” in cyberspace.