5. Why Even Scholars Don’t Get a Free Lunch in Cyberspace
5.1. A Tale of Two Technophilosophies
Cyberplatonism versus Cybermaterialism
Let me start with some scholarly name-calling that resonates with the central issues raised in the previous chapter. To do justice to Harnad’s historical roots, I shall call his position Cyberplatonism.
The Platonist’s Holy Grail is the frictionless medium of thought that can transcend time and space to get at The Truth. The Cyberplatonist believes he or she has found the Grail in the Internet. There are two issues that need to be teased out before we go any further. First, does the Internet, indeed, approximate the frictionless medium of thought sought by Platonists? Here I use “frictionless” as shorthand for the qualities possessed by the desired medium: i.e., relatively direct and reliable transmission at low cost. Second, do we have an adequate model of how such a medium would enable inquirers to get at The Truth? Harnad simply assumes this to be the case, which leads him to run the two questions together. The model he has in mind is the peer-review system, whereby only qualified members in a field of inquiry evaluate the work proposed for inclusion in the field’s body of knowledge and public dissemination.
As we shall see, peer review presupposes material conditions that render it no more than a Virtual Cyberplatonism. Nevertheless, according to Harnad, only the presence of paper-consuming intermediaries—the publishing houses—prevents this system from being fully institutionalized and thereby unleashing an era of untrammeled inquiry. This second question may turn out to be more important than the first, especially if academics and other professional knowledge producers remain personally insulated from the costs of maintaining and extending electronic communications. The Achilles’ heel of all forms of Platonism is an obliviousness to the material conditions of thought, and Cyberplatonism is no different. The Internet is hardly the frictionless medium of thought Cyberplatonists make it out to be, and more importantly, even if it were, it does not follow that the interests of inquiry would be best served by colonizing it for the peer-review system.
Generally speaking, Cyberplatonists can be found lurking behind any claim that a cognitive or communicative medium enables an “overall saving” in effort or expense. By contrast, my own position on these issues is that of the Cybermaterialist, one who does not believe that the search for a frictionless medium of thought is intelligible. Instead, what happens is that one form of friction is exchanged for another, as we pass from one medium to another. In more concrete terms, the costs are merely shifted around, sometimes from one aspect of our lives to another, sometimes from one part of society to another. Of course, a big problem with assessing the exact costs and benefits is that by the time the medium has become institutionalized, people’s lives and values will have adapted to it, so that even those who have limited access to the new medium will have a hard time imagining what life could be like without it. Of all aspects of human history, the history of technology is the one that cannot seem to shake off the Orwellian tendency of rewriting the present to make it look like straight-ahead progress from the past.
To counteract this tendency, we have the Cybermaterialist’s heuristic: Whenever presented with the prospect of a technological system that provides an “overall saving” in effort or expense, look for what lies outside the system, because that is probably where the costs are being borne.
The most straightforward way to interpret the Cybermaterialist imperative is in terms of the economist’s concept of externality. Consider the relatively simple case of two media—print and electronic—whose general virtues trade off against each other. A convenient example is Harnad’s own interspersal of his response to my critique of his position in Kling (1995), a familiar feature of electronic exchanges. But contra Harnad, such “hypertexting” is not an innovation, but a mere efficiency, of electronic communications. Interlinear (and marginal) commentary to an authoritative text is a practice that reaches back at least to the 12th century. Back then, manuscripts were written with wide margins and interlinear spaces to permit the insertion of the scholastic reader’s notes, objections, and (per the original meaning of “inquisition”) examination answers (Hoskin and Macve 1986). And like electronic hypertext today, as manuscripts were copied and passed on to other scholastics, the comments would often be incorporated into the main body of the text, eventually making it difficult to disentangle exactly who said what. Credit, when assignable, would typically go to the person who assembled the most interesting array of texts, leaving aside issues of original authorship.
This medieval practice declined with the introduction of the printing press (McLuhan 1962, Ong 1962). Printing enabled the production of texts that remained invariant as they acquired portability. The invariance resulted not only from the reliability of the printing process, but more importantly from the asymmetry that printing created between authors and readers: Authors appeared in print, while readers were forced to scribble in ever diminishing marginal and interlinear spaces. As a result, it became much easier to assign authorship to a text, and for that assignment to be associated with a proprietary right over a determinate object.
Although I personally welcome the reinvention of medieval hypertextual practices in the electronic medium, they would wreak havoc on the credit allocation schemes that currently operate in the academic world—the very schemes that receive Harnad’s enthusiastic support—as virtually all of these depend crucially on the key Gutenberg practice of assigning authorship to text. As we shall see, the legal struggles over defining the Internet suggest that Gutenberg notions of authorship do not sit well with post- (or pre-) Gutenberg notions of hypertextuality. The emerging legal persona of the “infopreneur” seems to owe more to the 12th-century compiler–encyclopedist than to the 19th-century genius–author. It may be that the more we insist on the transformative powers of the electronic medium, the more we unwittingly enable the dissolution of institutions such as authorship around which the peer-review process and other mechanisms of credit allocation in academia revolve. Not being a technological determinist myself, I would not argue that this is a necessary consequence, but its probability is sufficiently high to raise concerns that as we wax “Post-Gutenberg,” we do not, at the same time, remain “Pre-McLuhan” in our understanding of technology’s potential to shape thought. However, a charitable way of interpreting Harnad’s desire for some peer-reviewed channels on the Internet may be that he wishes to simulate in the electronic medium some of the virtues that emerged from the print medium, especially a stable text to which authorship can be readily assigned. In that case, it will be interesting to see just how much more regulation ultimately needs to be introduced into peer-reviewed cyberspace so that the integrity of this highly artificial form of communication is maintained (Kahin 1996).
At a more general level, the transition from print to electronic media incurs externalities that accompany the constitution of any social order: How can fallible agents be arranged so that their collective capacity is more, not less, than the sum of their individual capacities? This problem is harder to solve than it may first seem because people are especially good at manufacturing scarcity, both at an object level and a meta-level. In other words, even when people can get what they want, that usually means that what they get is worth less than they thought. In more formal terms, there are two general ways in which the collective capacity of society can be undermined:
(a) If, by either ignorance or design, everybody interferes with each other, so that only some, if any, of them are able to get what they want.
(b) If, by virtue of everyone getting what they want, they unwittingly diminish the value of what they have gotten.
(a) and (b) represent the two kinds of scarcity: (a) an object-level scarcity; (b) a meta-scarcity.
A new technology introduces new opportunities for scarcity, and the Internet is no exception—a point duly noted by Hal Varian, the economist who has probably thought the most about alternative pricing schemes for the Internet (Shapiro and Varian 1998). I will consider only the case of (a) here, though (b) becomes increasingly important once information becomes seemingly “superabundant.”
Because the Internet involves a “packet-switching” technology, bits of messages from many different sources are transmitted through the same channel at a given time. This enables the channel to become congested, leading to the delay or deletion of transmissions. Moreover, it has proved difficult to regulate congestion because of the potential disparity in the size of transmitted messages (especially when advanced video or audio messages are involved) and the heterogeneity of their sources, as well as the ease with which periods of peak usage shift. Voluntary measures do not seem to work, yet governments appear inclined to privatize, or at least decentralize, whatever control they currently have over the Internet. Nevertheless, historically the only reliable way to prevent the introduction of a new technology from redrawing and sharpening already existing class divisions in society has been government regulation. Clearly, then, we are heading for a crisis in cost accounting for the Internet.
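The congestion dynamic described above can be illustrated with a toy simulation of a shared packet-switched link. This is a minimal sketch, not a model of any real network: the link capacity, buffer size, and packet sizes are all hypothetical parameters chosen only to show how heterogeneous traffic through one channel produces delayed and dropped transmissions.

```python
from collections import deque

def simulate(arrivals, capacity_per_tick, buffer_limit):
    """Toy FIFO model of a shared packet-switched channel.

    arrivals: list of lists; arrivals[t] holds the sizes of packets
    offered to the link at tick t (heterogeneous sources and sizes).
    Returns (delivered, dropped, total_queueing_delay_in_ticks).
    """
    queue = deque()                      # pending packets: (size, enqueue_tick)
    delivered = dropped = delay = 0
    for t, batch in enumerate(arrivals):
        for size in batch:
            if len(queue) < buffer_limit:
                queue.append((size, t))  # packet waits its turn in the channel
            else:
                dropped += 1             # buffer full: transmission is deleted
        budget = capacity_per_tick       # how much the link can carry this tick
        while queue and queue[0][0] <= budget:
            size, t0 = queue.popleft()
            budget -= size
            delivered += 1
            delay += t - t0              # ticks this packet spent congested
    return delivered, dropped, delay

# Light load: every packet gets through with no queueing delay.
print(simulate([[1, 1], [1]], capacity_per_tick=4, buffer_limit=8))

# Heavy, heterogeneous load: large packets saturate the channel, so later
# arrivals are delayed or dropped outright.
print(simulate([[3, 3, 3, 3]] * 3, capacity_per_tick=4, buffer_limit=4))
```

The point of the sketch is simply that congestion is a property of the shared channel, not of any single sender, which is why the voluntary, per-user measures mentioned above have proved so hard to make stick.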
The failure by governments to anticipate the problems of scarcity associated with the Internet partly reflects its secretive roots in Cold War concerns about America’s ability to respond to a nuclear first strike. To beef up its communication networks, the U.S. Department of Defense drew upon some work then being done at MIT on resource sharing between computers. From this came the idea of collaboration among different computer user communities. The prototype of the Internet, ARPANET, was thus launched in 1969 to connect Defense Department researchers working all across America.
No one at the time had expected that the network would colonize conventional forms of communication. Given this historical background, it would be a mistake to think that the future of the Internet will be resolved by “discovering” what the Internet “really is” and then applying some appropriate legal regime for its cost accounting. Rather, parties with an interest in the future of the medium are now at various bargaining tables proposing that the medium be thought of as, say, a toll highway, a cable television system, a telephone, a radio network, etc.—all in the service of advancing certain pricing schemes that will benefit their respective constituencies.
Those with a sincere interest in making the Internet “the great equalizer” would spend their time wisely by participating in the discussions already underway with the representatives of the information and communication conglomerates, corporate lawyers, government regulators, and high-level university administrators, in whose hands the future disposition of the Internet ultimately lies.
These are the people who need to be convinced that it would be in their interest to allow both affiliated and unaffiliated scholars to surf the net with impunity. Simply appealing to its “low cost” is not a particularly strong argument when the pricing mechanism is still up for grabs and the target audience may not be convinced that so much scholarly communication is really necessary in the first place (or may even want to manipulate the pricing mechanism so as to get scholars to communicate in other ways and about other matters).