A long-standing issue in philosophy of science concerns the appropriate roles of values in science, in particular the roles of social, political, and ethical values. These are sometimes referred to as non-epistemic or contextual values (Longino, 1990). While it is widely agreed that it is appropriate for contextual values to influence both the selection of research priorities and decisions about which technological applications of scientific knowledge to pursue, there is less agreement over the appropriate place of these values in the assessment of scientific hypotheses. A traditional view is that the influence of contextual values here
ought to be minimized (e.g., Jeffrey, 1956; Lacey, 1999); this is a central commitment of what is known as the value-free ideal for science.
Recent philosophical discussion of the value-free ideal often begins from Rudner’s (1953) argument that contextual value judgments are routinely involved in the assessment of scientific hypotheses: evidence never establishes a scientific hypothesis with absolute certainty;
the fact that stronger evidence is demanded before accepting some hypotheses than others reflects a value judgment that the consequences of error in those cases would be particularly bad. For example, stronger evidence is demanded before accepting that a new drug is an effective treatment for malaria than before accepting that the average life span of ferrets is more than seven and a half years, because the consequences of erroneously accepting the former can be expected to be worse than the consequences of erroneously accepting the latter.4 Thus, the argument goes, contextual values often play a role when deciding whether to accept or reject hypotheses in the face of some risk of error, known as inductive risk (Hempel, 1965; Douglas, 2000).
Various methodological choices in science can affect the balance of inductive risk. As Rudner’s analysis suggests, requiring a higher statistical significance level before accepting a hypothesis (e.g., 99% rather than 90%) increases the risk of erroneously rejecting a true hypothesis, while requiring a lower significance level increases the risk of erroneously accepting a false one. Choices made in the course of collecting and analyzing data can also influence the balance of inductive risk (see also Shrader-Frechette and McCoy, 1993). An example offered by Douglas (2000) is instructive: suppose a scientist encounters ambiguous tissue specimens in a study investigating whether a chemical is carcinogenic; classifying these ambiguous cases as cancerous will increase the risk that the chemical is erroneously considered carcinogenic, while classifying them as benign will increase the risk that the chemical is erroneously considered safe.
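To make the significance-level point concrete, the following is a minimal sketch, not drawn from the text, of how the choice of threshold can flip a verdict on the very same evidence. The study design, counts, and background rate are all invented for illustration.

```python
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): a one-sided p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Invented study: is tumour incidence in exposed animals above a 10%
# background rate? Suppose 30 tumours are observed in 200 exposed animals.
n, observed, background = 200, 30, 0.10
p_value = binom_tail(observed, n, background)

# The same p-value yields different verdicts at different significance
# levels; which level to demand reflects a judgment about which error
# (flagging a safe chemical vs. missing a carcinogen) would be worse.
for alpha in (0.10, 0.05, 0.01):  # 90%, 95%, 99% significance levels
    verdict = "reject" if p_value < alpha else "retain"
    print(f"alpha = {alpha:.2f}: p = {p_value:.4f} -> {verdict} the null hypothesis")
```

With these invented counts the null hypothesis of no elevated incidence is rejected at the 90% and 95% levels but retained at the 99% level, illustrating how a demand for stronger evidence lowers the chance of erroneously accepting a false hypothesis at the cost of more often rejecting a true one.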
Douglas (2009) argues that when facing uncertain methodological choices like these, which affect the balance of inductive risk, scientists’ decisions ought to be informed by their judgments of how bad the consequences of each type of error would be. Their obligation to do so stems from the general moral responsibilities that agents have to consider the consequences of their actions (Douglas, 2009, ch. 4). The scientist in the example might decide to classify the ambiguous tissue specimens as carcinogenic, because he judges that it would be worse to erroneously classify the chemical as safe than to erroneously classify it as carcinogenic.5 Of course, that such value-laden methodological choices were made should be reported by the scientist along with the conclusions reached (Douglas, 2009, ch. 8; see also Kloprogge et al., 2011; Elliott and Resnik, 2014). Moreover, the influence of contextual values should be limited to situations of genuine methodological uncertainty; values should not be invoked as direct reasons for accepting hypotheses. But the value-free ideal should be rejected, according to Douglas, insofar as it overlooks some of the moral responsibilities of scientists.
A standard line of reply to arguments from inductive risk begins with Jeffrey (1956).
Taking a Bayesian point of view, he denies that scientists must accept or reject hypotheses;
their job is to assign probabilities in light of the available information, and this does not require contextual value judgments. (It simply requires the application of Bayes’ Theorem.) It is then up to policymakers and other decision-makers to decide, in light of their social, political, and ethical values, whether the probabilities are sufficient to warrant various courses of action. In other words, from a classic decision-theoretic perspective, scientists provide the probabilities and decision-makers provide the utilities.
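This division of labour can be illustrated with a small sketch under invented numbers: the scientist’s contribution ends with a posterior probability obtained via Bayes’ Theorem, and the choice of action turns on utilities that only the decision-maker supplies. The hypothesis, prior, likelihoods, and utilities below are all hypothetical.

```python
# Scientist's side: Bayes' Theorem, P(H | E) = P(E | H) P(H) / P(E),
# for a hypothetical hypothesis H ("the chemical is carcinogenic")
# and body of evidence E. All numbers are invented.
prior = 0.20             # P(H) before the study
likelihood_h = 0.90      # P(E | H)
likelihood_not_h = 0.15  # P(E | not-H)

evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
posterior = likelihood_h * prior / evidence
print(f"Scientist reports: P(H | E) = {posterior:.2f}")

# Decision-maker's side: expected utility of each action, combining the
# reported probability with the decision-maker's own contextual values.
utilities = {
    ("ban", True): -1,     ("ban", False): -5,   # needless ban is costly
    ("allow", True): -100, ("allow", False): 0,  # allowed carcinogen is worse
}
for action in ("ban", "allow"):
    eu = (posterior * utilities[(action, True)]
          + (1 - posterior) * utilities[(action, False)])
    print(f"Expected utility of '{action}': {eu:.1f}")
```

On Jeffrey’s picture, nothing in the first half of this calculation requires contextual values; they enter only with the utilities in the second half.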
A more recent response in a somewhat similar vein is offered by Betz (2013). He argues that rather than appealing to contextual values to make a choice in the face of methodological uncertainty, scientists can acknowledge that the choice is uncertain and attempt to determine the implications of this for the matter under investigation. For instance, different methodological options can be explored. In the example above, this might mean calculating relative cancer risk first under the assumption that ambiguous cases are benign and then under the assumption that they are cancerous, thus arriving at a range of relative risk values bounded by the two results. The ensemble modeling studies of future climate change mentioned previously aim to explore the implications of methodological uncertainty in a similar way: the different climate models used to project future temperature change are intended to reflect different reasonable choices in model construction. Betz argues that, in light of what is found by exploring different methodological choices in this way, scientists can offer hedged conclusions to decision-makers. These conclusions might be probabilistic, as Jeffrey assumed, or, if uncertainty is deeper, they might take other appropriate forms (e.g., outcome X is plausible). Betz thus concludes that contextual value judgments can be avoided in the assessment of hypotheses.6
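A minimal sketch of this bounding strategy, applied to the tissue-specimen example, might look as follows; the groups, counts, and the relative_risk helper are all invented for illustration.

```python
def relative_risk(exposed_cases: int, n_exposed: int,
                  control_cases: int, n_control: int) -> float:
    """Ratio of disease incidence in the exposed group to the control group."""
    return (exposed_cases / n_exposed) / (control_cases / n_control)

# Invented counts of clearly cancerous and ambiguous specimens per group.
exposed = {"clear": 18, "ambiguous": 10, "n": 100}
control = {"clear": 10, "ambiguous": 2, "n": 100}

# Option 1: treat all ambiguous specimens as benign.
rr_benign = relative_risk(exposed["clear"], exposed["n"],
                          control["clear"], control["n"])

# Option 2: treat all ambiguous specimens as cancerous.
rr_cancerous = relative_risk(exposed["clear"] + exposed["ambiguous"], exposed["n"],
                             control["clear"] + control["ambiguous"], control["n"])

# The hedged conclusion reports the range bounded by the two results,
# rather than resolving the methodological uncertainty by a value judgment.
low, high = sorted((rr_benign, rr_cancerous))
print(f"Relative risk lies between {low:.2f} and {high:.2f}")
```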
Challengers of the value-free ideal find this sort of response insufficient. They contend that, even if scientists report uncertainties using probabilities or in some other way, those uncertainty estimates will themselves be somewhat uncertain, and so contextual value considerations will need to resurface as scientists consider the consequences of error or inaccuracy in their uncertainty estimates (Douglas, 2009; see also Rudner, 1953). This second-order uncertainty arises in part because the range of reasonable methodological choices is itself somewhat uncertain: it is not perfectly clear where to draw the line between ambiguous and unambiguous tissue specimens, nor between reasonable and unreasonable assumptions that might be included in climate models.7 In addition, practical realities, such as limited computing power, sometimes limit the extent to which scientists can explore the implications of whatever set of reasonable methodological options is identified.
Betz in turn counters that scientists can simply weaken their conclusions until they are, for all practical purposes, beyond any reasonable doubt, that is, until there is negligible inductive risk (2013: 218). For example, climate scientists may be quite unsure whether a global temperature increase of more than 2°C should be considered “likely” (i.e., ≥0.66 likelihood) or merely “more likely than not” (i.e., ≥0.50 likelihood) under a particular emission scenario, but they might judge it beyond any reasonable doubt that such a temperature increase cannot yet be ruled out. By simply reporting the latter conclusion, they can avoid any significant inductive risk. But while Betz is correct that weak claims of this sort are sometimes of interest to policymakers, in many cases they may want something more. For instance, they might ask scientists to give them a best estimate (plus 95% confidence interval) of the number of deaths per year that result from a given air pollutant. Or they might require scientists to express their confidence in a hypothesis in one of a limited set of ways, for example, “high,”
“medium,” “low” (see also Steele, 2012). In these cases, significant second-order uncertainty, and thus inductive risk, can remain. The defender of the value-free ideal presumably will advise that scientists probe and report this second-order uncertainty as well. Is this then sufficient? The debate surrounding inductive risk is still unfolding.
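A small sketch can make the threshold point concrete. The two thresholds follow the figures given above (“likely” at ≥0.66, “more likely than not” at ≥0.50); the function name and the residual label are invented for illustration.

```python
def likelihood_term(p: float) -> str:
    """Map a probability estimate to a calibrated verbal label (hypothetical scale)."""
    if p >= 0.66:
        return "likely"
    if p >= 0.50:
        return "more likely than not"
    return "about as likely as not, or less"  # invented residual label

# If second-order uncertainty leaves the probability estimate itself
# spanning a range, the verbal label can differ across that range.
for p in (0.72, 0.60, 0.48):
    print(f"P = {p:.2f} -> '{likelihood_term(p)}'")
```

If the scientist’s probability estimate could reasonably fall anywhere between 0.60 and 0.72, for instance, the choice between reporting “likely” and “more likely than not” is not settled by the evidence, which is just the second-order uncertainty at issue.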
The inductive risk debate is just one thread in a wide-ranging discourse concerning the influence of contextual values in science (see, e.g., Intemann, 2005; Douglas, 2011; Elliott, 2011; Brown, 2013). Other important threads focus on the ways contextual value judgments
are encoded in the very terms used in science, for example, in ecosystem “health” or ecosystem “services” (see also Dupré, 2007), and on other subtle ways in which contextual values influence scientific reasoning, even when this is not intended. For instance, Longino (1990) argues that contextual values sometimes shape scientists’ background assumptions, which in turn influence the range of hypotheses that they consider plausible and the extent to which they understand data to provide evidence for a hypothesis (see also Sarewitz, 2004). In this way, the conclusions reached by a scientist may be subtly biased by his or her values. For example, suppose that a scientist values helping people and, in part as a consequence of this, has as an implicit background assumption that most social ills can be mitigated substantially with interventions in the social environment; when analyzing a particular social problem, this scientist may not even consider the hypothesis that a genetic or other non-social cause may be a significant factor.
Longino thus recommends that objectivity be understood as a community-level feature of science: the practices of a scientific community are objective to the extent that they not only allow for criticism of background assumptions, data, and methods from a range of perspectives but also are responsive to such criticism (1990: 76–80; Longino, 2002: 128–135).8 This is not to suggest that individual scientists should not make an honest effort to find out what the world is like (see also Douglas, 2009, ch. 6). The point, rather, is that critical dialogue with others, whether in the published literature or in more informal venues, brings additional opportunities for uncovering and transcending biases at the individual level.
This is a salutary reminder in the face of entrenched, opposing views on environmental issues. Of course, sometimes uncertainty and controversy are manufactured deliberately, with the aim of forestalling undesirable policy action.9 Likewise, sometimes evidence is deliberately presented in a selective way in order to give a misleading impression of what scientific investigation has uncovered. But other times entrenched disagreement may reflect an unwillingness to question preferred background assumptions or to seriously engage with individuals who, despite some shared standards, nevertheless interpret the available data differently. When this unwillingness is a persistent feature of a community, it too can be understood as a failure of objectivity.