3. Intellectual Property as the Nexus of Epistemic Validity and Economic Value
3.1. The Challenges Posed by Dividing the Indivisible
3.1.1. The Challenge to Attributions of Validity. Epistemologists have traditionally neglected what economists call the process costs of producing knowledge, i.e., the effects that an agent’s pursuit of a particular line of inquiry now is likely to have on her (and her colleagues’) ability and desire to pursue other lines of inquiry later (cf. Sowell 1987, Chapter 4). In philosophy, this issue normally appears under the general heading of reflexivity, in systems theory as feedback, and in sociology as institutional memory (Woolgar 1988b, Will 1988, Douglas 1987). To appreciate the difference that process costs can make to an epistemologist’s sense of the value of knowledge, consider what separates Karl Popper’s (1963) and Paul Feyerabend’s (1975) visions of criticism in the growth of knowledge.
Both place a premium on criticism, but only Feyerabend realizes the long-term difference that criticism would make to our attitudes toward science.
Popper (1970) exemplifies the philosopher’s usual insensitivity to process costs when he calls for a “permanent revolution” in science through the relentless enforcement of the norm of criticizability, or “falsifiability.” He fails to see that the social status of science is tied to the norm’s selective enforcement. In contrast, Feyerabend’s notorious lack of reverence for the methods and products of science is best seen as the result of calculating the process costs of engaging in Popper’s strategy with the relentlessness Popper himself suggests. For if scientists believe that any hypothesis ought to, and probably will, be shown false, then it is reasonable to expect that scientists will develop a generally skeptical attitude toward the value of scientific inquiry itself.
Feyerabend’s viewpoint is instructive because his sensitivity to process costs is divorced from an interest in minimizing these costs, as he believes that the activity of open and mutual criticism is worth pursuing for its own sake to the fullest extent, however institutionally destabilizing or personally discomforting the consequences may be. This reminds us that an “economistic” approach to knowledge production need not have conservative consequences, since, as a methodological doctrine, economism is committed only to calculating costs, not to minimizing them. Indeed, calculating costs for the purpose of maximizing them is a time-honored Marxist strategy for revolution, especially when Marxists distance their political agenda from that of social democrats by arguing that capitalism’s fall—and, by implication, the revolutionary transformation of society—will be hastened by strategically refusing to enact welfare legislation designed to buffer a volatile economy’s impact on the workforce.
Of course, process consequences may include benefits as well as costs, but these need to be calculated carefully. The ability to standardize an innovation into a routine is perhaps the most obvious business-oriented example. Its cognitive equivalent is the conversion of an explicit procedure into what has been variously called a “habit,” “tacit knowledge,” or simply a “reflex.” Common to the pragmatist philosophical tradition and most knowledge management is the view that a potential process benefit of ordinary life experience is economy of thought, so that people need to engage in self-conscious reasoning only when they confront a new problem in the environment. This broadly “adaptive” view of the mind’s workings has been recently dubbed “fast and frugal heuristics” by the experimental psychology research team at the Max Planck Institute in Berlin (Gigerenzer et al. 1999).
Yet, despite its Darwinian resonances, the word “adaptive” continues to worry philosophers because of its studied avoidance of meta-rationality, or metacognition more generally, as an explanation for how subjects achieve their goals under the time and resource constraints of realistic decision-making environments. Symptomatic of the problem, as well as of the stakes, is the discussion of the tradeoff between what Gigerenzer et al. (1999, 18) call the “generality” and “specificity” of a particular heuristic’s adaptiveness. They associate these two dimensions with, respectively, the coherence and correspondence theories of truth. At a first approximation, heuristics “correspond” to the particular environments that a subject regularly encounters. But these environments cannot be so numerous and diverse that they create computational problems for the subjects; otherwise their adaptiveness would be undermined. In this context, “coherence” refers to the meta-level ability to economize over environments, such that some heuristics are applied in several environments whose first-order differences do not matter at a more abstract level of analysis. In short, people will continue using something that has worked, regardless of context, until it no longer works—and not before then.
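To fix intuitions, here is a minimal sketch, in Python, of one of the fast and frugal heuristics analyzed by Gigerenzer et al. (1999), “take-the-best”: cues are consulted in a fixed order of validity, and the first cue that discriminates between two options settles the choice, with no further weighing of evidence. The cue names and values below are invented for illustration and are not drawn from their data.

```python
# Minimal sketch of the "take-the-best" heuristic (Gigerenzer et al. 1999).
# Cue names and values are invented for illustration only.

# Cues listed in (assumed) order of validity; each maps an option to 1 or 0.
CUES = [
    ("has_major_airport",   {"city_a": 1, "city_b": 0}),
    ("is_national_capital", {"city_a": 0, "city_b": 0}),
    ("has_university",      {"city_a": 1, "city_b": 1}),
]

def take_the_best(x, y, cues):
    """Pick the 'larger' option by the first cue on which the two differ.

    The heuristic is fast and frugal precisely because it stops at the
    first discriminating cue instead of integrating all the evidence.
    """
    for _name, values in cues:
        if values[x] != values[y]:
            return x if values[x] > values[y] else y
    return None  # no cue discriminates: guess

print(take_the_best("city_a", "city_b", CUES))  # -> "city_a", decided by the first cue
```

Note that nothing in such a procedure revises the cue order or the framing of the task itself; that meta-level question is exactly what the philosophical worry about “adaptiveness” concerns.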
However, philosophers do not regard correspondence and coherence as theories of truth in this way at all. Whereas correspondence is meant to provide a definition of truth, coherence offers a criterion of truth. The distinction is not trivial in the present context. A match between word (or thought) and deed (or fact) is not a sufficient mark of truth, since people may respond in a manner that is appropriate to their experience, yet their experience may provide only limited access to a larger reality. Consider Eichmann, the Nazi dispatcher who attempted to absolve himself of guilt for sending people to concentration camps by claiming he was simply doing the best he could to get the trains to their destinations. No doubt he operated with fast and frugal heuristics, but presumably something was missing at the meta-level. By regarding coherence as their criterion of truth, people are forced to consider whether they have adopted the right standpoint from which to make a decision. This Eichmann did not do.
Thus, when talking the language of “heuristics,” one must always ask whether people have been allowed to alter their decision-making environments in ways that would give them a more comprehensive sense of the issues over which they must pronounce. After all, a decent model of rationality must account for the fact that any decision taken has consequences not only for the task at hand but also for a variety of other environments. One’s rationality, then, should be judged, at least in part, by the ability to anticipate these environments. Nevertheless, Eichmann’s refusal to question the background conditions that underwrote his decision-making environment would have received support from Gigerenzer’s simplified approach.
Generally speaking, philosophers regard correspondence to reality as the ultimate goal of any cognitive activity, but coherence with a wide range of experience provides intermittent short-term checks on the pursuit of this goal. By regarding coherence–correspondence in such means–ends terms, philosophers aim to short-circuit the kind of locally adaptive responses that enabled Ptolemaic astronomy to flourish without serious questioning for 1500 years and the Nazi regime to endure for a dozen years. Philosophers assume that if a sufficiently broad range of decision-making environments is considered together, coherence will not necessarily be forthcoming; rather, a reorientation to reality may be needed to accord the divergent experiences associated with these environments the epistemic value they deserve. In contrast, Gigerenzer’s group seems to regard coherence as simply facilitating correspondence to environments that subjects treat as given. Where is the space for deliberation over the alternative goals against which one must trade off in the decision-making environments in which subjects find themselves? Where is the space for subjects to resist the stereotyped decision-making environments that have often led to the victimization of minority groups and, more generally, to an illusory sense that repeated media exposure to a kind of situation places one in an informed state about it?
Having said all that, Gigerenzer’s group would find much support from the KM literature. Indeed, there is a strong anti-intellectualist current in the history of Western thought into which this line of thinking feeds. Championed in the modern era by Jean-Jacques Rousseau, it turns upside-down the conventional wisdom that says we are forced to think a lot (or expend resources on knowledge production) because we need to solve problems. Rather, we generate unnecessary problems by thinking too much! Much of the KM literature devoted to streamlining management levels and demonizing universities as “dumb organizations” seems to be conceived in this spirit. To be sure, there is an important strain of management thought that supports a more metacognitive perspective. It is epitomized by the expression “double feedback loop,” which is precisely what a more philosophical study of rationality would urge against those enticed by the idea of fast and frugal heuristics (Argyris and Schön 1978).
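To make the contrast concrete, the following is a hedged sketch, in Python with invented names and thresholds, of the difference the “double feedback loop” is meant to mark: a single-loop learner only adjusts its actions against a fixed goal, whereas a double-loop learner can also revise the goal itself when repeated adjustment keeps failing.

```python
# Illustrative sketch in the spirit of Argyris and Schön (1978); the
# numbers and update rules are invented for exposition only.

def single_loop(action, goal, outcome):
    """Adjust the action to close the gap; the goal is never questioned."""
    error = goal - outcome
    return action + 0.5 * error

def double_loop(action, goal, outcome, error_history, patience=3, tolerance=1.0):
    """First loop: adjust the action. Second loop: if large errors persist
    despite adjustment, revise the goal itself rather than the behavior."""
    error = goal - outcome
    new_action = action + 0.5 * error
    error_history.append(error)
    if len(error_history) >= patience and all(
        abs(e) > tolerance for e in error_history[-patience:]
    ):
        goal = 0.8 * goal  # question the governing variable, not just the action
    return new_action, goal
```

The point of the sketch is simply that the second procedure has somewhere to stand from which the first procedure’s target can itself be criticized, which is the metacognitive capacity missing from fast and frugal heuristics.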
3.1.2. The Challenge to Attributions of Value. Economists’ lingering attachment to the classical conception of knowledge as indivisible is most apparent in their blindness to a particular species of process costs, namely, the costs incurred by agents trying to gain access to the knowledge production system (cf. Fuller 1988, Chapter 12). I have so far portrayed process costs as borne by the entire knowledge system. Thus, Feyerabend’s relentlessly critical attitude has the long-term consequence of devaluing the scientific enterprise as a whole. However, process costs also affect the relative ability of agents to contribute to the system, for an agent cannot productively contribute to the knowledge system—say, by writing a book that moves its target audience—without first being in a position to consume the products that already circulate in the system. Each new text in circulation redistributes the balance of power, or burden of proof, among subsequent contributors. And so, even before setting pen to paper, an author has intuitions about the sorts of claims that will be easier or harder to defend, from which she will then decide on the burden of textual proof that she is ready to bear (cf. Fuller 1988, Chapters 2, 4). In other words, the author’s paradigmatic moment of soul-searching is really a request to calculate access costs:
How much more reading should I do before I start to write?
By contrast, when economists speak of the initial production of a public good (e.g., writing a book) incurring a much higher fixed cost than its subsequent reproduction (e.g., reading the book), they are catering to the classical epistemological intuition that, once revealed, knowledge is subject to free (or at least relatively inexpensive) access on the part of potential consumers. Yet, writing incurs such a significantly greater cost than reading only if it is presumed that the readers bring to the text the relevant background knowledge—which, as a matter of fact, often does not come cheap (e.g., advanced university degrees) or even well-marked in the text (e.g., obscure allusions and jargon). The situation of the text in this case may be likened to that of a mass-produced toy, which costs little to buy, but which then requires additional costs (or luck!) to be put together. In large measure, the economic mystique of knowledge rests on keeping such access costs hidden, at least from the production side of the economic equation. Thus, although mass-produced books appear incredibly efficient in empowering people to do things that they would otherwise not do, this is only because the cost of making these books usable to people (“user-friendly”) is left to the distribution side of the equation.
The considerations in the last two paragraphs urge the conclusion that the degree of knowledge-likeness of a particular good is a function of the sharpness of the line that is rhetorically drawn between the production and distribution of that good. In short: the sharper the line, and the more occluded the distribution side of the line, the more knowledge-like the good. The metaphysical model for this kind of thinking is the Platonic form, such as the essence of a table (assuming, for the sake of argument, that tables have essences), of which particular tables are mere copies or reproductions that contain no more information (and hence no more value) than the prototype, and in fact may contain less, if the particular table turns out not to be very good. In a more psychological vein, we are prone to think that the “hard work” of invention or discovery comes with the original development of an idea, and that the subsequent work of transmitting the idea to others is negligible by comparison. Again, all the information is seen as packed into the initial conception, with transmission regarded as mere reproduction, whereby the initial conception is either preserved or lost, depending on the receptiveness of the targeted consumers.
Of course, neither philosopher nor economist officially denies that a complete story of knowledge reproduction would involve specifying distribution costs which have no obvious analogues in the original instance of knowledge production. In particular, access costs accrue both to the knowledge producer, who must have the means of bringing the good into contact with the relevant consumers (into this category would fall the ability to write to a specific audience), and to the consumer, who must have the means (including specialized training) by which to make the most use of the good. However, economists tend to neglect the costs of distribution in these contexts because they talk about knowledge production in terms of the material good embodying the knowledge (e.g., a book), whereas they talk about knowledge consumption in terms of the knowledge “contained” in the material good (e.g., the ideas). Given this asymmetrical treatment, it is not surprising that knowledge has often struck economists as an enigma, since it would seem to incur costs only to its producers but not to its consumers (Bates 1988). This puzzle merely reveals that economists have uncritically borrowed their analysis of knowledge from Plato-inspired philosophers. But the puzzle can be dissolved, and knowledge can start to look more like other goods, once the distribution of a knowledge good is included as part of the good’s overall production costs. In that case, all knowledge is knowledge for someone.
Finally, for didactic purposes, let us reverse the course of our argument and consider what it would mean for cars to be treated as knowledge-like goods. The original prototype of the car would incur most of the total production costs, with each successive vehicle of this type incurring only distribution costs. The overall costs and benefits of cars to the economy would remain the same, of course, but they would be divided somewhat differently. To fully appreciate the shift in thinking involved here, imagine car production as a matter of transmitting the essence of a given make of vehicle to several places rather than as reproducing the vehicle several times. In the first case, the bulk of the consumer’s cost would lie in getting access to a vehicle by being at one of the distribution points. Thus, obtaining a driver’s license would absorb the expense that in the second case would be reserved for purchasing a car. The car would simply provide the opportunity for the consumer to manifest her driving skills, and thus would become a much less costly item. This situation would then start to resemble that of the physics book, in which the book itself is relatively inexpensive, but the cost of being able to make full use of its contents (i.e., the cost of a university education in physics) is much higher.
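As a purely illustrative back-of-the-envelope calculation (all figures invented), the reversal described above amounts to booking the same total cost in two different ways: under the ordinary accounting most of the cost sits in producing each unit and is paid at purchase, whereas under the knowledge-like accounting each copy is cheap and the bulk of the consumer’s cost is access, such as the licence or the physics degree.

```python
# Same total outlay for 1,000 units, booked two ways (figures invented).

UNITS = 1_000
TOTAL = 20_000_000  # total cost to the economy, held fixed in both accountings

# Ordinary (car-like) accounting: each unit bears its own production cost,
# paid by the consumer at purchase; access (the licence) is cheap.
ordinary = {"production_per_unit": 19_000, "access_per_consumer": 1_000}

# Knowledge-like accounting: production cost is booked once against the
# prototype; each copy is cheap, and the bulk of the consumer's cost is
# access, i.e., being trained and positioned to use the good at all.
knowledge_like = {
    "prototype": 5_000_000,         # one-off cost on the production side
    "copy_per_unit": 1_000,         # the "cheap" car or book
    "access_per_consumer": 14_000,  # the licence or the physics degree
}

assert UNITS * sum(ordinary.values()) == TOTAL
assert (knowledge_like["prototype"]
        + UNITS * (knowledge_like["copy_per_unit"]
                   + knowledge_like["access_per_consumer"])) == TOTAL
```

The totals are identical; what changes is how much of the cost is visible at the point of sale and how much is hidden in the consumer’s prior preparation, which is precisely where the economic mystique of knowledge does its work.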