
their observed covariation is not the result of an accidental connection with some associated (third) variable.

For example, just about everyone knows someone who claims to predict the weather—usually rain—based on “a feeling in their bones.” Interestingly enough, these predictions may turn out to be correct more often than not. Thus, one would observe a covariation between the prediction and the weather. However, a moment’s reflection will show that there is no inherent link between these two phenomena.

Instead, changes in pressure and moisture in the atmosphere cause both changes in the state of the human body (such as a feeling in the bones) and changes in the weather (such as rain). The original covariation between “feeling in the bones” and “rain” is observed not because the variables are causally related, but because both of them are associated with a third factor, changes in the atmosphere. Hence, the original relationship is spurious: It can be explained by the effects of a third variable.

Although it may be relatively easy to think of examples of likely nonspurious relationships (fertilizer use and crop yield, exposure to sunlight and tan skin), it is far more difficult to demonstrate that an observed covariation is nonspurious. Establishing a relationship as nonspurious is an inductive process that requires that the researcher take into account all possible sources of an observed covariation between two variables. If, after the effects of all third factors are eliminated, the original covariation remains, then the relationship is nonspurious.
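The weather example above can be sketched numerically. In this illustrative simulation (all variable names and effect sizes are invented for the example), a single third variable, atmospheric conditions, drives both “feeling in the bones” and rain. The two outcome variables covary strongly, yet once the influence of the third variable is removed by residualization, a simple form of statistical control, the covariation essentially disappears:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: atmospheric change causes BOTH outcomes;
# neither outcome causes the other.
atmosphere = rng.normal(size=n)
bones = atmosphere + rng.normal(size=n)   # "feeling in the bones"
rain = atmosphere + rng.normal(size=n)    # rainfall signal

# Raw covariation: bones and rain correlate despite no causal link.
raw_r = np.corrcoef(bones, rain)[0, 1]

def residualize(y, x):
    """Remove from y the part linearly explained by x."""
    slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    return y - slope * x

# Controlling for the third variable: correlate the residuals.
partial_r = np.corrcoef(residualize(bones, atmosphere),
                        residualize(rain, atmosphere))[0, 1]

print(f"raw r = {raw_r:.2f}, partial r = {partial_r:.2f}")
# raw r is approximately 0.5; partial r is approximately 0.0
```

The raw correlation is sizable, but the partial correlation, after the third variable is taken into account, is near zero: the original relationship is spurious in exactly the sense described above.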

Because the condition requiring the elimination of all possible third variables is based on logical grounds (one would be incorrect in claiming a causal relationship between two variables if it could be explained by the effects of any other factor), nonspuriousness cannot be proved by data analysis alone. However, to the extent that a researcher explicitly takes into account possible sources of an observed covariation between two variables and is able to eliminate them as alternative explanations for the covariation, the validity of the causal inference is enhanced. In other words, the greater the number of relevant third variables considered and eliminated, the greater is the confidence that the original relationship is causal. For example, the researcher who finds that a covariation between interest in public affairs and performance in an MPA program persists after the effects of prior undergraduate education, occupation, amount of time available for study, and so on have been taken into account would have more confidence in the causal inference than if only one of these third factors had been considered.

The final criterion for establishing a relationship as causal is theory. As discussed earlier in the chapter, theories are meant to explain important phenomena.

To establish a causal relationship, not only must the conditions of time order, covariation, and nonspuriousness be satisfied, but also a theoretical or substantive justification or explanation for the relationship must be provided. Theory interprets the observed covariation; it addresses the issue of how and why the relationship occurs. In one sense, theory serves as an additional check on nonspuriousness. It lends further support to the argument that the link between two phenomena is inherent rather than the artifact of an associated third factor.

Many of the techniques used to assess causal relationships can be performed on computers using statistical software packages. Although computers can quickly generate sophisticated statistical analyses, they do not have the capability to judge whether results are plausible or meaningful in real-world settings. Statistical software packages see numbers for what they are—numbers. They cannot tell you whether a relationship between two variables is actually plausible. This is why theory is such an important component of causality.

Parts IV through VII of this book deal with statistical techniques used to test hypotheses. As you learn about these techniques, you should remember that even if a relationship is statistically significant, that does not necessarily mean it is significant in the sense of being meaningful or important. The job of a data analyst is to explain the substantive meaning of statistical relationships; computers alone are not capable of making such judgments.

As this discussion has illustrated, the criteria necessary for proof of a causal relationship are demanding. It is essential to balance this view with the idea that the determination of causation is not a yes-no question but a matter of degree. Satisfaction of each criterion lends further support to the causal inference. Additionally, for each criterion, there are varying levels of substantiation. Confidence in the time order of phenomena is variable, observed covariations can assume a range of magnitude from small to large, demonstrating nonspuriousness is nearly always problematic, and the validity of a given theory is usually the subject of debate. Evidence for a causal relationship is based on the extent to which all of these criteria are satisfied.

In this context, two final points merit consideration. First, social scientists accept the concept of multiple causation. It is widely acknowledged that an event or phenomenon may have several causes, all of them contributing to the result.

For example, explanations of why a person continues to volunteer for an agency are complicated, based on a variety of organizational, social, and personal factors.

Consequently, social researchers rarely speak of the cause; instead, they seek the causes, or determinants, of a phenomenon. Thus, it can and does happen that an observed covariation between an independent variable and a dependent variable turns out to be neither totally nonspurious nor totally spurious, but partially spurious. Data analysis frequently suggests that both the independent variable and a third variable (as well as other explanatory variables) are potential causes of the dependent variable. The chapters on multivariate statistical techniques (Chapters 17–23) explain how to assess such effects.
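Partial spuriousness can also be illustrated with simulated data. In this hypothetical setup (the variable names and coefficients are invented for the sketch), a third variable z influences both x and y, but x also exerts a genuine direct effect on y. A simple bivariate slope overstates the effect of x, while a multivariate fit that controls for z recovers both contributions, showing the x–y covariation to be partially, not totally, spurious:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical structure: z causes x AND y; x also directly causes y.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 0.5 * x + 0.7 * z + rng.normal(size=n)

# Bivariate slope of y on x mixes the direct effect of x
# with the covariation both variables share through z.
b_xy = np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Multivariate least-squares fit separates the two effects.
X = np.column_stack([np.ones(n), x, z])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"bivariate slope = {b_xy:.2f}")
print(f"controlled effect of x = {coef[1]:.2f}, effect of z = {coef[2]:.2f}")
# The bivariate slope exceeds the true direct effect of 0.5;
# the controlled estimates fall near 0.5 (x) and 0.7 (z).
```

Both x and z emerge as contributors to y, which is exactly the multiple-causation situation the multivariate techniques in later chapters are designed to assess.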

Second, in developing and evaluating causal explanations, it can be extremely useful to construct an arrow diagram like the one in Figure 3.1. Depicting graphically a system of proposed causal relationships helps to uncover possible interrelationships among expected causes, sources of spuriousness, and omitted variables whose effects should not be ignored. It is also a quick and easy procedure. For all these reasons, the arrow diagram is a highly recommended technique, no matter how simple or complicated the model may be.

In normal conversation and managerial practice, people speak easily, sometimes too easily, about causation. These statements are interesting as conjectures or hypotheses about what leads to (“causes”) what. If you intend to use or apply these statements in public and nonprofit management, be sure to evaluate them against the four criteria discussed above. Much of this book teaches you how to do so.