
4 Concluding Remarks

In the document The Oxford Handbook of Environmental Ethics (Pages 54–59)

…are encoded in the very terms used in science, for example, in ecosystem “health” or ecosystem “services” (see also Dupré, 2007), and on other subtle ways in which contextual values influence scientific reasoning, even when this is not intended. For instance, Longino (1990) argues that contextual values sometimes shape scientists’ background assumptions, which in turn influence the range of hypotheses that they consider plausible and the extent to which they understand data to provide evidence for a hypothesis (see also Sarewitz, 2004). In this way, the conclusions reached by a scientist may be subtly biased by his or her values. For example, suppose that a scientist values helping people and, in part as a consequence of this, has as an implicit background assumption that most social ills can be mitigated substantially with interventions to the social environment; when analyzing a particular social problem, this scientist may not even consider the hypothesis that a genetic or other non-social cause may be a significant factor.

Longino thus recommends that objectivity be understood as a community-level feature of science: the practices of a scientific community are objective to the extent that they not only allow for criticism of background assumptions, data, and methods from a range of perspectives but also are responsive to such criticism (1990: 76–80; Longino, 2002: 128–135).8 This is not to suggest that individual scientists should not make an honest effort to find out what the world is like (see also Douglas, 2009, ch. 6). The point, rather, is that critical dialogue with others—whether in the published literature or in more informal venues—brings additional opportunities for uncovering and transcending biases at the individual level.

This is a salutary reminder in the face of entrenched, opposing views on environmental issues. Of course, sometimes uncertainty and controversy are manufactured deliberately, with the aim of forestalling undesirable policy action.9 Likewise, sometimes evidence is deliberately presented in a selective way in order to give a misleading impression of what scientific investigation has uncovered. But other times entrenched disagreement may reflect an unwillingness to question preferred background assumptions or to seriously engage with individuals who, despite some shared standards, nevertheless interpret the available data differently. When this unwillingness is a persistent feature of a community, it too can be understood as a failure of objectivity.


General scepticism about results obtained from models is unwarranted; though a model may differ from its target in salient ways, it may still be adequate for the purposes for which it is used. Indeed, there can be good reason to believe that a model is adequate for a particular purpose, even when some of its results fail to match observations of the target system.

When there is uncertainty about how to build an adequate model, investigative strategies involving multiple models can be useful, but interpreting results can be tricky. For instance, just as consensus among scientists takes on special epistemic significance only under certain circumstances, so does agreement among modeling results.

Debate continues over the appropriate roles of social, political, and ethical values in science. Arguments from inductive risk see an actual and/or desirable role for these values when faced with uncertain methodological choices. Opponents contend that appeal to values here can and/or should be avoided. More broadly, it has been argued that contextual values shape background assumptions in subtle ways that influence the way evidence is evaluated. This has led to the proposal that objectivity be understood as a community-level feature of science, which is achieved to the extent that a community allows and is responsive to criticism of background assumptions, data, and methods from a range of perspectives.

Notes

1. According to Bayes’ Theorem: p(H|e) = p(H) × p(e|H) / p(e), where p(H|e) is the probability that the agent should assign to H if e is obtained, i.e. the updated probability for H; p(H) is the probability that the agent assigns to H before obtaining e; p(e|H) is the probability of obtaining e if H is true; and p(e) is the probability of obtaining e whether H is true or false, i.e. the expectedness of e. This simple formulation of Bayes’ Theorem leaves implicit the role of background information, b; including it results in a slightly different formulation: p(H|e&b) = p(H|b) × p(e|H&b) / p(e|b).
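As a minimal numerical sketch of the update rule in note 1 (the hypothesis and all probability values here are hypothetical, chosen only for illustration), the expectedness p(e) can be expanded as p(H)p(e|H) + p(¬H)p(e|¬H):

```python
def bayes_update(prior_h, lik_e_given_h, lik_e_given_not_h):
    """Return p(H|e) via Bayes' Theorem: p(H|e) = p(H) * p(e|H) / p(e)."""
    # p(e): the probability of obtaining e whether H is true or false
    p_e = prior_h * lik_e_given_h + (1 - prior_h) * lik_e_given_not_h
    return prior_h * lik_e_given_h / p_e

# Hypothetical numbers: p(H) = 0.5, p(e|H) = 0.8, p(e|not-H) = 0.2.
posterior = bayes_update(0.5, 0.8, 0.2)
print(round(posterior, 2))  # 0.8
```

Because e is four times as likely under H as under its negation, observing e raises the agent’s probability for H from 0.5 to 0.8.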

2. Such convergence is not guaranteed. For instance, if scientists often disagree on whether p(e|H) > p(e) for new pieces of evidence, their probabilities might not converge.
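By way of contrast, when agents do share their likelihoods and update on the same evidence, sharply different priors tend to be pulled together. A minimal simulation of this (all numbers hypothetical; the evidence stream is a fixed made-up sequence):

```python
def update(prior, e_observed, lik_h=0.8, lik_not_h=0.3):
    """One Bayesian update on a binary observation; likelihoods are hypothetical."""
    # Likelihood of this observation under H and under not-H
    p_obs_h = lik_h if e_observed else 1 - lik_h
    p_obs_not_h = lik_not_h if e_observed else 1 - lik_not_h
    p_obs = prior * p_obs_h + (1 - prior) * p_obs_not_h
    return prior * p_obs_h / p_obs

evidence = [True, True, False, True, True, True, False, True, True, True]
p_a, p_b = 0.2, 0.8  # two agents with sharply different priors for H
for e in evidence:
    p_a, p_b = update(p_a, e), update(p_b, e)
print(abs(p_a - p_b))  # far smaller than the initial gap of 0.6
```

With shared likelihoods the two posteriors end up within a few hundredths of each other; if the agents instead disagreed about the likelihoods (and hence about whether p(e|H) > p(e)), no such narrowing would be guaranteed.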

3. Situations in which probabilities cannot be assigned to hypotheses are also known as situations of Knightian uncertainty (see Knight, 1921).

4. Brysse et al. (2012) suggest that in fact the scientific community has a tendency to “err on the side of least drama”—to “demand greater levels of evidence in support of surprising, dramatic, or alarming conclusions than in support of conclusions that are less surprising, less alarming, or more consistent with the scientific status quo” (pp. 327–328).

5. It is interesting that the EPA guidelines for carcinogen risk assessment (2005), mentioned earlier, provide a number of ‘default options’ that can be employed in situations of methodological uncertainty; these options are said to be “consistent with EPA’s mission to protect human health while adhering to the tenets of sound science” (pp. 1–7).

6. Note that Betz does not argue that the value-free ideal should be adopted; he simply argues that contextual value judgments can be avoided in the face of methodological uncertainty.

7. Biddle and Winsberg (2010) argue that contextual values also influence which climate model variables are prioritized for accurate simulation and that the effects of this prioritization cannot be removed from probabilistic estimates of uncertainty about future climate change (see also Winsberg, 2012; Parker, 2014).

8. Note that in such a community, consensus on policy-relevant scientific questions may indeed be “hard won” (see under Evidence, Uncertainty, and Consensus).

9. For examples in public health (e.g., tobacco, asbestos) and climate change, see Michaels, 2006; Edwards, 2010, ch. 15; Oreskes and Conway, 2011.

References

Beatty, J., and Moore, A. (2010). “Should We Aim for Consensus?” Episteme 7(3): 198–214.

Beck, M. B. (2002). “Model Evaluation and Performance.” In Encyclopedia of Environmetrics, edited by A. H. El-Shaarawi and W. W. Piegorsch, 1275–1279. Chichester: Wiley.

Betz, G. (2010). “What’s the Worst Case? The Methodology of Possibilistic Prediction.” Analyse und Kritik 31(1): 87–106.

Betz, G. (2013). “In Defense of the Value-Free Ideal.” European Journal for Philosophy of Science 3(2): 207–220.

Betz, G. (2015). “Are Climate Models Credible Worlds? Prospects and Limitations of Possibilistic Climate Prediction.” European Journal for Philosophy of Science 5(2): 191–215. doi: 10.1007/s13194-015-0108-y.

Biddle, J., and Winsberg, E. (2010). “Value Judgments and the Estimation of Uncertainty in Climate Modeling.” In New Waves in the Philosophy of Science, edited by P. D. Magnus and J. Busch, 172–197. New York: Palgrave Macmillan.

Brown, M. (2013). “Values in Science beyond Underdetermination and Inductive Risk.” Philosophy of Science 80(5): 829–839.

Brysse, K., Oreskes, N., O’Reilly, J., and Oppenheimer, M. (2012). “Climate Change Prediction: Erring on the Side of Least Drama?” Global Environmental Change 23(1): 327–337.

Caswell, H. (1976). “The Validation Problem.” In Systems Analysis and Simulation in Ecology, vol. 4, edited by B. C. Patten, 313–325. New York: Academic Press.

Collins, M., et al. (2013). “Long-Term Climate Change: Projections, Commitments and Irreversibility.” In Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by T. F. Stocker. New York: Cambridge University Press.

Douglas, H. (2000). “Inductive Risk and Values in Science.” Philosophy of Science 67: 559–579.

Douglas, H. (2009). Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.

Douglas, H. (2011). “Facts, Values, and Objectivity.” In The SAGE Handbook of Philosophy of Social Science, edited by I. Jarvie and J. Zamora-Bonilla, 513–529. London: SAGE Publications.

Douglas, H. (2012). “Weighing Complex Evidence in a Democratic Society.” Kennedy Institute of Ethics Journal 22(2): 139–162.

Dupré, J. (2007). “Fact and Value.” In Value-Free Science? Ideals and Illusions, edited by H. Kincaid, J. Dupré, and A. Wylie, 27–41. New York: Oxford University Press.

Edwards, P. (2010). A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming. Boston: MIT Press.

Elliott, K. C. (2011). Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research. New York: Oxford University Press.

Elliott, K. C., and Resnik, D. B. (2014). “Science, Policy and the Transparency of Values.” Environmental Health Perspectives. http://dx.doi.org/10.1289/ehp.1408107.

Giere, R. (2004). “How Models Are Used to Represent Reality.” Philosophy of Science 71: 742–752.

Hempel, C. G. (1965). “Science and Human Values.” In Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, 81–96. New York: The Free Press.

Howson, C., and Urbach, P. (1993). Scientific Reasoning: The Bayesian Approach, 2nd ed. Chicago: Open Court.

Intemann, K. (2005). “Feminism, Underdetermination, and Values in Science.” Philosophy of Science 72(5): 1001–1012.

Jeffrey, R. (1956). “Valuation and Acceptance of Scientific Hypotheses.” Philosophy of Science 22: 237–246.

Kandlikar, M., Risbey, J., and Dessai, S. (2005). “Representing and Communicating Deep Uncertainty in Climate-Change Assessments.” Comptes Rendus Geoscience 337: 443–455.

Katzav, J. (2014). “The Epistemology of Climate Models and Some of Its Implications for Climate Science and the Philosophy of Science.” Studies in History and Philosophy of Modern Physics 46: 228–238.

Kloprogge, P., van der Sluijs, J. P., and Petersen, A. C. (2011). “A Method for the Analysis of Assumptions in Model-Based Environmental Assessments.” Environmental Modelling and Software 26(3): 289–301.

Knight, F. (1921). Risk, Uncertainty and Profit. Boston: Houghton Mifflin.

Lacey, H. (1999). Is Science Value Free? Values and Scientific Understanding. New York: Routledge.

Levins, R. (1966). “The Strategy of Model Building in Population Biology.” American Scientist 54(4): 421–431.

Lloyd, E. (2010). “Confirmation and Robustness of Climate Models.” Philosophy of Science 77: 971–984.

Longino, H. (1990). Science as Social Knowledge. Princeton: Princeton University Press.

Longino, H. (2002). The Fate of Knowledge. Princeton: Princeton University Press.

Mastrandrea, M. D., et al. (2010). Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties. Intergovernmental Panel on Climate Change (IPCC). Available at <http://www.ipcc.ch>.

Mayo, D. (1996). Error and the Growth of Experimental Knowledge. Chicago: University of Chicago Press.

Michaels, D. (2006). “Manufactured Uncertainty.” Annals of the New York Academy of Sciences 1076: 149–162.

Morgan, M. S., and Morrison, M. (1999). Models as Mediators: Perspectives on Natural and Social Science. New York: Cambridge University Press.

National Research Council (NRC) (2007). Models in Environmental Regulatory Decision Making. Washington, DC: The National Academies Press.

Odenbaugh, J. (2005). “Idealized, Inaccurate but Successful: A Pragmatic Approach to Evaluating Models in Theoretical Biology.” Biology and Philosophy 20: 231–255.

Odenbaugh, J. (2010). “Philosophy of the Environmental Sciences.” In New Waves in Philosophy of Science, edited by P. D. Magnus and J. Busch, 155–171. New York: Palgrave Macmillan.

Odenbaugh, J. (2012). “Consensus, Climate, and Contrarians.” In The Environment: Philosophy, Science, and Ethics, edited by W. P. Kabasenche, M. O’Rourke, and M. H. Slater, 137–150. Boston: MIT Press.

Oreskes, N., Shrader-Frechette, K., and Belitz, K. (1994). “Verification, Validation and Confirmation of Numerical Models in the Earth Sciences.” Science 263(5147): 641–646.

Oreskes, N. (2004). “Science and Public Policy: What’s Proof Got to Do with It?” Environmental Science and Policy 7: 369–383.

Oreskes, N. (2007). “The Scientific Consensus on Climate Change: How Do We Know We’re Not Wrong?” In Climate Change: What It Means for Us, Our Children, and Our Grandchildren, edited by J. F. C. DiMento and P. Doughman, 65–99. Boston: MIT Press.

Oreskes, N., and Conway, E. (2011). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury Press.

Parker, W. S. (2011a). “Scientific Models and Adequacy-for-Purpose.” Modern Schoolman: A Quarterly Journal of Philosophy 87(3–4): 285–293.

Parker, W. S. (2011b). “When Climate Models Agree: The Significance of Robust Model Predictions.” Philosophy of Science 78(4): 579–600.

Parker, W. S. (2013). “Ensemble Modeling, Uncertainty and Robust Predictions.” Wiley Interdisciplinary Reviews (WIREs) Climate Change 4: 213–223.

Parker, W. S. (2014). “Values and Uncertainties in Climate Prediction, Revisited.” Studies in History and Philosophy of Science 46: 24–30.

Pirtle, Z., Meyer, R., and Hamilton, A. (2010). “What Does It Mean when Climate Models Agree? A Case for Assessing Independence among General Circulation Models.” Environmental Science and Policy 13: 351–361.

Ranalli, B. (2012). “Climate Science, Character, and the ‘Hard Won’ Consensus.” Kennedy Institute of Ethics Journal 22(2): 183–210.

Rudner, R. (1953). “The Scientist qua Scientist Makes Value Judgments.” Philosophy of Science 20: 1–6.

Sarewitz, D. (2004). “How Science Makes Environmental Controversies Worse.” Environmental Science and Policy 7: 385–403.

Shrader-Frechette, K. S., and McCoy, E. D. (1993). Method in Ecology: Strategies for Conservation. Cambridge: Cambridge University Press.

Stainforth, D. A., et al. (2007). “Confidence, Uncertainty and Decision-Support Relevance in Climate Predictions.” Philosophical Transactions of the Royal Society A 365: 2145–2161.

Steel, D. (2007). Across the Boundaries: Extrapolation in Biology and Social Science. New York: Oxford University Press.

Steele, K. (2012). “The Scientist qua Policy Advisor Makes Value Judgments.” Philosophy of Science 79: 893–904.

Stegenga, J. (2013). “An Impossibility Theorem for Amalgamating Evidence.” Synthese 190(12): 2391–2411.

US EPA (2005). “Guidelines for Carcinogen Risk Assessment.” Washington, DC: US Environmental Protection Agency, EPA/630/P-03/001F.

van der Sluijs, J., et al. (2008). “Exploring the Quality of Evidence for Complex and Contested Policy Decisions.” Environmental Research Letters 3: 024008. doi:10.1088/1748-9326/3/2/024008.

Weisberg, M. (2006). “Robustness Analysis.” Philosophy of Science 73(5): 730–742.

Weisberg, M. (2013). Simulation and Similarity: Using Models to Understand the World. New York: Oxford University Press.

Winsberg, E. (2012). “Values and Uncertainties in the Predictions of Global Climate Models.” Kennedy Institute of Ethics Journal 22(2): 111–137.

Chapter 4

Markets, Ethics, and Environment

John O’Neill
