the collective mind. Regardless of the firm’s actual market performance, an academically informed corporate subjectivity would adopt the standpoint of a player who has yet to win. Thus, one would aim not to make conservative adjustments to the current market paradigm but to “creatively destroy” it by introducing new products that force competitors to rethink their market strategy.
The spread of Executive Ph.D. programs will not necessarily rescue academia from KM’s tendency toward bottom-line thinking. After all, the contemporary university does suffer from a fundamental structural weakness. The teaching function, which is organized in terms of rigidly defined departments, works at cross purposes with the research function, which favors largely self-organizing interdisciplinary teams. There is no doubt that this tension has only worsened in recent years. Research activities—be they in the lab or in the field
—have required more time and space away from the preparation and delivery of courses. It may well be, then, that knowledge managers will persuade universities of the cost-effectiveness of allowing research to migrate off their grounds. In time research would become completely outsourced to facilities specially tailored to the needs of major clients. Academic employees would then be left with the perennial job of filling classrooms. But by the time this scenario comes to pass, one may hope that there will be enough holders of Executive Ph.D. degrees in public and private sector administration to undo the damage that will have been done.
undermining the value of what workers currently do. For his part, Schumpeter (1950) saw the relationship between innovation and corporate survival as akin to that of charisma and routinization, according to Max Weber. In other words, persistent market instability will eventually lead the state and industry to agree to discourage speculative investments in new knowledge. In Veblen’s telling, endless innovation is unfair to the ordinary worker; in Schumpeter’s, it is systemically unsustainable. Nevertheless, the Holy Grail of contemporary knowledge management is that knowledge growth can be deliberately accelerated.
To be sure, this goal also flies in the face of philosophical wisdom, which claims that, as a matter of principle, scientific progress cannot be predicted or planned. This “principle” is typically presented as a matter of logic, but it is better seen as a matter of ethics. The main issues concern whether (or under what conditions) one ought to accelerate knowledge growth—not whether it can be done at all, since the answer to that question is clearly yes.
Skepticism about the predictability of knowledge growth was born of extreme tendencies exhibited by capitalist and socialist regimes that first became clear in the 1920s. They are associated with so-called self-fulfilling and self-defeating prophecies, which occur whenever a prediction functions as a communication to those whose behavior is being predicted, who can then respond by either enabling or disabling the prediction’s realization. In this way, we can explain both the fluctuation of prices in the open market and the single-mindedness of social policies in an authoritarian regime. Specifically, we can explain, on the one hand, how fortunes were made and lost through speculation, culminating in the stock market crash of 1929 that launched the Great Depression, and on the other, Soviet and Nazi beliefs in the laws of history that motivated atrocities that would have been unthinkable, had their perpetrators not thought they knew which way the future pointed.
The basic argument against the possibility of planned knowledge growth is simply that by definition a genuine scientific discovery cannot be predicted or planned. Those who claim otherwise are therefore dealing in hype. This susceptibility to hype was born of the 1920s, the first great period for stock speculation purportedly based on scientific innovation (Chancellor 1999, Chapter 7). Yet, often the innovations were not forthcoming—though they managed to generate major investments in the interim—or their significance was oversold by advertising agencies, whose recent emergence presupposed that products had become too sophisticated to sell themselves. In the latter case, the “innovation” may have merely added a wrinkle or an efficiency to an already existing demand for a good or service.
In addition, the skeptics usually make certain assumptions about the nature of reality:
(A1) Our inquiries initially have a low probability of capturing the aspects of reality that interest us.
(A2) There is a direct correlation between the significance of a scientific discovery and the degree of surprise it brings to those who make it. In other words, a quantum leap in knowledge growth should always be counterintuitive to those who have been monitoring the course of inquiry.
Nevertheless, even if assumption A1 is true, it does not follow that assumption A2 is also true. To think that it does is to deny that people collectively learn from particular discoveries so as to improve their ability to make discoveries in the future. Once this point is granted, the main logical objection to accelerating the growth of knowledge is overcome. Speculative investments in prospective revolutionary breakthroughs are thus not irrational. Kuhn (1970, Chapter 11) has observed that scientific revolutions tend to be “invisible” because they are generally recognized as such only after the fact. Of course, it is hard to ignore a controversy while it is happening, but the overall direction of its resolution is known only in retrospect, once official histories of the episode are written which distinguish clear winners and losers. These verdicts track the students, professorships, and other academic resources the various sides managed to accumulate in the process. Moreover, there is typically some mystery as to what the controversy that resulted in the revolution was really about—
beyond simply the distribution of social, political, and economic resources to the next generation of inquirers. Since official histories are calculated to place the winners in the most favorable light, they often ignore the fundamental issues that originally divided the disputants.
In this context, Fuller’s (1988, Chapter 5) “social epistemology”
develops Kuhn’s concept of incommensurability, the non-negotiable differences that lead to the winner-take-all outcomes of scientific revolutions. But unlike Kuhn, I argue that these differences are not
deeply metaphysical but themselves artifacts of an institutionalized communication breakdown between kindred intellectual factions, which in principle can be either maximized or minimized, depending on what we know and want out of our knowledge claims. (For an innovative and systematic use of computer simulations to track the relationship between communication network formations and knowledge distribution, see Carley and Kaufer 1993.) In other words, the “unpredictability” of radical scientific change may be purely a function of our ignorance of how our own knowledge processes work, rather than a sign of some unbridgeable disparity between our state of knowledge and the nature of reality. If we knew our own knowledge processes better, we might not be so surprised by what is and is not possible. In that sense, a high incidence of first-order revolutionary breakthroughs could be treated as symptomatic of low control over second-order knowledge processes.
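The reference to Carley and Kaufer (1993) indicates the kind of modeling at issue, though not its detail. The following sketch is a deliberately minimal stand-in, not their model: it treats incommensurability as a tunable rate of communication between factions and watches how far a single knowledge claim spreads. The agent counts, probabilities, and persuasion rule are all invented for illustration.

import random

def simulate_diffusion(n_agents=60, n_factions=3, p_within=0.3,
                       p_between=0.02, steps=50, seed=0):
    """Toy diffusion model (illustrative only): agents are grouped into
    factions; a knowledge claim spreads along communication links.
    Lowering p_between mimics an institutionalized communication
    breakdown between kindred intellectual factions."""
    rng = random.Random(seed)
    faction = [i % n_factions for i in range(n_agents)]
    # Random communication network: links are denser within factions.
    links = [[] for _ in range(n_agents)]
    for i in range(n_agents):
        for j in range(i + 1, n_agents):
            p = p_within if faction[i] == faction[j] else p_between
            if rng.random() < p:
                links[i].append(j)
                links[j].append(i)
    knows = [False] * n_agents
    knows[0] = True  # a single originating discovery
    for _ in range(steps):
        for i in range(n_agents):
            if knows[i]:
                for j in links[i]:
                    if rng.random() < 0.5:  # chance a contact is persuaded
                        knows[j] = True
    return sum(knows) / n_agents  # fraction of the community persuaded

# Fragmented factions versus even modest bridging between them.
print("no between-faction links    :", simulate_diffusion(p_between=0.0))
print("sparse between-faction links:", simulate_diffusion(p_between=0.05))

On such a toy reading, adding even a few between-faction links pushes the claim toward community-wide circulation, which is the sense in which the breakdown can be “maximized or minimized” by institutional design rather than being fixed by metaphysics.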
If scientists were regularly compelled to declare shifts in their research commitments—and not only through a self-selecting, partly devolved, semi-secret peer review system—then revolutions could be standardized as periodic election-like events, the outcomes of which would provide regular feedback to remove the surprise and disruptiveness that characterize revolutions. Revolutionary change would then not be left simply to the vicissitudes of the stock market. Indeed, this sublimation of the revolutionary impulse by the electoral process is one of the great innovations of civic republican democracy from which science as socially organized inquiry might learn. (I explore this civic republican alternative in Chapter 4: cf. Fuller 2000a, Chapter 8.)
However, a potential casualty of this institutional innovation may be that science itself no longer appears to exhibit striking innovation.
The reason, of course, is that scientists would be regularly invited to consider altering their inquiries, rather than having to bear the burden of forcing innovation on a system that refuses to change course unless its guiding principle has failed on its own terms. The need for heroics would be thereby eliminated. Indeed, Kuhn himself distinguished scientific from artistic revolutions precisely on these grounds, suggesting that only science—by virtue of its normally monolithic paradigm structure—had proper revolutions that overturned an existing order. In contrast, so-called artistic revolutions have merely demonstrated the ability of a countercurrent to survive alongside the dominant one (Kuhn 1977, 225–239). Applying this
insight from science, politics, and art to business, the implication is clear: The desire for and recognition of innovation increases as normal market conditions approach monopoly.
A more sophisticated argument against planned knowledge growth is that even if someone thought he or she could predict or plan scientific progress, humans are free to determine whether or not it happens. This argument also makes some assumptions of its own:
(B1) The people whose behavior is predicted or planned know that their behavior has been predicted or planned.
(B2) Those people are in a position to prevent or, in some other way, divert what others have predicted or planned.
Note that B1 and B2 are independent claims. For example, I may know that you have predicted something about me, yet I may be in no position to do anything about it. We see this whenever a physician diagnoses someone with a terminal illness. That possibility cuts against self-defeating prophecies. Conversely, I may be able to prevent your prediction from coming true without realizing it, perhaps because my behavior may not be as consistent as you (or I) think, for example, because of unforeseen interactions between genetic dispositions and behavioral patterns. That cuts against self-fulfilling prophecies.
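The independence of B1 and B2 yields a small two-by-two space of cases, which can be spelled out mechanically. The sketch below merely restates the paragraph above in tabular form; the function name and labels are mine, not a formal apparatus from the text.

from itertools import product

def reflexive_scope(knows_prediction: bool, can_divert: bool) -> str:
    """B1: the predicted parties know of the prediction.
    B2: they are in a position to enable or divert its realization."""
    if knows_prediction and can_divert:
        return "both self-fulfilling and self-defeating dynamics are open"
    if knows_prediction and not can_divert:
        return "no self-defeating prophecy (e.g., a terminal diagnosis)"
    if can_divert and not knows_prediction:
        return "no self-fulfilling prophecy (the prediction may be diverted unknowingly)"
    return "no reflexive feedback: the prediction works as in natural science"

# Enumerate the four B1/B2 combinations.
for b1, b2 in product([True, False], repeat=2):
    print(f"B1={str(b1):5}  B2={str(b2):5}  ->  {reflexive_scope(b1, b2)}")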
The advancement of social science research has not tended to increase the number of self-fulfilling and self-defeating prophecies.
This is because people exposed to knowledge claims about themselves typically cannot relate the concepts used in those claims to anything they feel they have power over. Often this is because people do not know how to reorganize themselves in the relevant ways, either as collectives (in response to socioeconomic predictions) or as individuals (in response to health predictions). But sometimes people simply do not understand the claims. In that respect, when the knowledge claims of social scientists go over the heads of lay people, the latter are behaving rather like the animals and things that are incapable of grasping what natural scientists say about them. Moreover, that is often a necessary (though not sufficient) condition for those claims being true—because if people knew what was being asserted about their situations, they might take steps to either enhance or diminish the truth of those assertions. The classic case of the latter was
Bismarck’s creation of the world’s first social security system for industrial workers specifically in order to preempt Marx’s prediction that the proletarian revolution would originate in Germany.
In short, new knowledge can be predicted—but only given a strict boundary (hierarchy?) between the knowledgeable and the ignorant.
The difference between socialism and capitalism as policy regimes lies in how the boundary is drawn. If socialism institutionalizes the boundary a priori, capitalism allows it to emerge a posteriori. In this respect, the central planner and market speculator are two sides of the same coin. The former knows at the outset that his knowledge is superior (and has the political might to make it so), whereas the latter takes risks to discover whether his is. That speculator par excellence George Soros (1998) has argued that capitalism tolerates a dangerous level of uncertainty in the name of greed, gamesmanship, and the pursuit of novelty. In contrast, socialism’s strong suit has been its ability to contain this volatile side of humanity, albeit often at the cost of stifling innovation and inhibiting risk-taking altogether (Schumpeter 1950). Knowledge management, then, is about planning for what might be called “tolerable variation” or “sustainable change” in the firm, state, university, or any other organization.
Fuller (2000c) uses the term reversibility, which Karl Popper (1957) originally adapted from thermodynamics. The basic idea is that, as opposed to the popular economic doctrine of the “path dependence”
of scientific and technological change, “progress” would be measured by the increased receptiveness to changing a course of action once its negative consequences have outweighed its positive ones for sufficiently many people over a sufficiently long period.
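Read as a decision rule, the reversibility criterion can be given a rough operational form. The sketch below is only one possible reading, with an invented scoring, window, and threshold standing in for “sufficiently many people over a sufficiently long period”; it is not Popper’s formulation or my own considered one.

def should_reverse(net_consequences, window=5, threshold=0):
    """Toy reversibility check: recommend abandoning a course of action
    once its net consequences (benefits minus harms, however scored)
    have stayed below the threshold for `window` consecutive periods.
    The scoring, window, and threshold are illustrative assumptions."""
    run = 0
    for t, net in enumerate(net_consequences):
        run = run + 1 if net < threshold else 0
        if run >= window:
            return t  # the point at which reversal is warranted
    return None  # no sustained net harm; stay the course

# A research or policy commitment whose net benefits turn persistently negative.
history = [3, 2, 2, 1, 0, -1, -1, -2, -2, -3, -3, -4]
print("reverse at period:", should_reverse(history))

On this reading, path dependence is the limiting case in which no such check is ever applied, so sunk commitments are never revisited.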
This principle of knowledge policy reversibility, which Popper himself dubbed “negative utilitarianism,” brings us to two explicitly ethical objections to deliberate attempts to accelerate the growth of knowledge.
1. The most efficient ways to learn more about reality—especially social reality—always seem to involve violating the integrity of human beings. The horrors of Nazi science come to mind most readily. But we could equally include, say, traditional religious objections to opening corpses for medical and scientific purposes.
2. Any new discoveries made by a knowledge acceleration scheme would benefit those who made (or funded) the discoveries, at
the expense of others, which may in turn adversely affect power relations in society.
There are substantial historical precedents for both objections, but they do not imply that knowledge growth cannot, or even should not, be accelerated. All they imply is that the appropriate background social conditions must first be in place, so as to prevent the occurrence of these adverse ethical consequences. Just as the Great Depression led to the establishment of regulatory bodies (e.g., the U.S. Securities and Exchange Commission) to monitor stock market activities, something similar may be needed once knowledge acceleration becomes a financially viable practice.
So far I have only cleared a space for programs designed to accelerate the growth of knowledge. I have yet to justify the desirability of such programs. In terms of the Realpolitik of capitalism, such a program would be no more radical than Frederick Winslow Taylor’s original program in “scientific management.” Taylor assumed that normal performance is not necessarily the best possible and that those who do the work are not necessarily in the best position to judge how to improve its performance. The point now would be to transfer these insights from the industrial to the scientific workplace (cf.
Fuller 1993b, 307–311). Research in the sociology and social psychology of science generally shows that scientific innovation is mostly determined by the organization of the scientific work environment (Shadish and Fuller 1994). This is because even individuals with scientific training are subject to the same sorts of biases and incapacities as ordinary reasoners (Faust 1985, Arkes and Hammond 1986).
What has been traditionally called “the scientific method” is nothing more than an abstract characterization of how people and things need to be arranged in order for the whole to be greater than the sum of its parts (Fuller 1993a, Fuller 1994a).
Presumably, then, the rate of scientific knowledge production can be increased by applying this method to science itself. But in order to ensure the success of such a knowledge acceleration program, potential managers and investors should bear in mind three things:
1. The confidence and trust of the scientific community need to be secured. Scientists must come to see it as in their own interest to cooperate with knowledge acceleration schemes. This may be the main negative lesson of Taylorism, which failed because
it appealed to management by going over the heads of the very workers who would be most directly affected by its scientific approach. A similar skepticism and resistance explains why scientists have been relatively uncooperative with psychologists and sociologists interested in studying their patterns of work and reasoning. To date, most studies with an explicit interest in improving scientific performance have been done on either computer simulations (Fuller 1995) or so-called “analogue populations” (e.g., students are given problems that resemble scientific ones: cf. Tweney et al. 1981). There have also been many insightful comparative studies of knowledge production based on ethnographic and historical methods, but these rarely draw any lessons for how knowledge production may be improved (e.g., Pickering 1992). They tend to assume that what is, is good enough.
2. As the pursuit of scientific knowledge has become more expensive and its prospect of improving wealth production more evident, scientists are being increasingly pressured—by both the public and private sectors—to improve their rate of return on investment. Thus, it is only a matter of time before the scientific community is forced to take seriously the issue of accelerating knowledge growth in a way it has not had to in the past. This pressure undermines the resistance described in point 1.
3. The ultimate significance of a scientific innovation cannot be reduced to the competitive advantage it brings one in the marketplace—at least in the relatively short-term sense in which one normally speaks of “competitive advantage.” The competitive advantage one gains from new knowledge largely depends on one’s ability to create a demand for it (Drucker 1954). It probably has more to do with one’s understanding of the market than anything truly revolutionary in the innovation itself. In this respect, the possibility of accelerating knowledge growth is no less a challenge to the business community than to the scientific community.
Making Knowledge Matter: Philosophy, Economics, and Law
1. The Basic Philosophical Obstacle to Knowledge Management
1.1. The Philosophical Problem of Knowledge and Its Problems
2. The Creation of Knowledge Markets: The Idea of an Epistemic Exchange Rate
2.1. An Offer No Scientist Can Refuse: Why Scientists Share
2.2. Materializing the Marketplace of Ideas: Is Possessing Knowledge Like Possessing Money?
2.2.1. Knowledge’s Likeness to Money
2.2.2. Knowledge’s Unlikeness to Money
3. Intellectual Property as the Nexus of Epistemic Validity and Economic Value
3.1. The Challenges Posed by Dividing the Indivisible
3.1.1. The Challenge to Attributions of Validity
3.1.2. The Challenge to Attributions of Value
3.2. The Challenges Posed by Inventing the Discovered
3.2.1. The Challenge to Attributions of Validity
3.2.2. The Challenge to Attributions of Value
4. Interlude: Is the Knowledge Market Saturated or Depressed? Do We Know Too Much or Too Little?
5. Recapitulation: From Disciplines and Professions to Intellectual Property Law
6. The Legal Epistemology of Intellectual Property
6.1. Two Strategies for Studying the Proprietary Grounds of Knowledge
7. Epilogue: Alienating Knowledge from the Knower and the Commodification of Expertise