Constructing Causal Explanations

To construct explanations of events or relationships, concepts must be defined. Two types of definitions are necessary for empirical or data-based research. The first is called a nominal or conceptual definition. It is the standard dictionary definition that defines the concept in terms of other concepts. For example, patriotism may be defined as love for one's country; intelligence may be defined as mental capacity; occupation may be defined as the type of work that an individual primarily performs; volunteering may be defined as donating one's time without pay.

In social research, nominal definitions must satisfy a set of conditions. Concepts should be defined as clearly and precisely as possible. A concept must not be defined in terms of itself. For example, to define happiness as the state of being happy is meaningless. Also, the definition should say what the concept is, rather than what it is not. For example, to define duress as the absence of freedom does not distinguish it from several other concepts (coercion, imprisonment, compulsion, and so on). Inevitably many other concepts will fit the description of what the concept is not. Furthermore, unless there is good reason, the definition should not constitute a marked departure from what has generally been accepted in the past. A good reason might be that you feel that previous definitions have misled research in the area. Whereas nominal definitions are arbitrary (they are neither right nor wrong), in order to advance knowledge, researchers attempt to make definitions as realistic and sensible as possible. Although it could not be proved incorrect to define a cat as a dog, it would be counterproductive.

The second type of definition is called an operational definition (sometimes called a working definition). The operational definition translates the nominal definition into a form in which the concept can be measured empirically, that is, with data. As discussed in Chapter 2, the process of operationalizing a concept results in indicators, or variables designed to measure the concept.

By converting abstract ideas (concepts) into a form in which the presence or absence, or the degree of presence or absence, of a concept can be measured for every individual or case in a sample of data, operational definitions play a vital role in research. They allow researchers to assess the extent of empirical support for their theoretical ideas. For example, civic involvement might be defined as the number of clubs, groups, or associations to which a person belongs.
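To make this concrete, here is a minimal sketch in Python (the respondents and group memberships are invented for illustration) of how that operational definition of civic involvement could be applied to data:

```python
# Hypothetical survey responses: each respondent lists the clubs,
# groups, or associations to which he or she belongs.
respondents = {
    "Respondent A": ["Rotary Club", "PTA", "Neighborhood Watch"],
    "Respondent B": [],
    "Respondent C": ["Bowling League"],
}

def civic_involvement(memberships):
    """Operational definition: civic involvement = the number of
    clubs, groups, or associations to which a person belongs."""
    return len(memberships)

for name, groups in respondents.items():
    print(name, civic_involvement(groups))
# Respondent A 3
# Respondent B 0
# Respondent C 1
```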

In the prior example, Dick Murray felt that the degree of success attained by federal programs is determined by the level of funding rather than by local involvement. He operationalized these concepts as the size of the federal allotment to each city in the program, and the level of interest and training of the local mayor and staff in the program, respectively. Program success is a difficult concept to operationalize, but Murray might have used a variety of indicators: the perceived satisfaction with the program (assessed in a survey) felt by public housing residents and by the larger community, the number of new housing units constructed, the cost per square foot, the degree to which the housing met federal standards, and so forth. He would be using a multiple indicator strategy, as explained (and recommended) in Chapter 2. By comparing the cities of Bison and Virtuous, he was then able to demonstrate that although local involvement seemed to make no difference in the success of the senior citizens program (because the cities rated about the same on this dimension), the level of funding (on this factor the cities diverged) did make a difference.
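As an illustration only (the indicator names, values, and equal-weight averaging below are our assumptions, not Murray's actual measures), a multiple indicator strategy might standardize several indicators of program success and combine them into a single index:

```python
import pandas as pd

# Hypothetical indicators of program success for two cities.
cities = pd.DataFrame(
    {
        "resident_satisfaction": [3.1, 4.2],    # mean survey rating, 1-5
        "new_units_built": [40, 95],            # count of housing units
        "pct_meeting_fed_standards": [72, 94],  # percentage of units
    },
    index=["Bison", "Virtuous"],
)

# Standardize each indicator (z-scores), then average across
# indicators to form a crude multiple-indicator success index.
z = (cities - cities.mean()) / cities.std()
cities["success_index"] = z.mean(axis=1)
print(cities["success_index"])
```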

Once a concept has been operationalized and measured in a sample of data, it is called a variable. A variable assigns numerical scores or category labels (such as female, male; married, single) to each case in the sample on a given characteristic (see Chapter 2 for a discussion of measurement). For example, in a study examining the job-related behavior of bureaucrats, a researcher may obtain data for five bureaucrats on the variables "race," "sex," "time wasted on the job per day" (minutes), and "attitude toward the job" (like, neutral, dislike). These data are displayed in Table 3.1, with each column representing a variable.

Often in research, variables are classified into two major types: dependent and independent. It is the goal of most research to explain or account for changes or variation in the dependent variable. For example, Dick Murray sought to explain the failure of the senior citizens program in Bison (and, implicitly, to understand how the program might be made a success). What factors could account for the different outcomes of the programs in Bison and Virtuous? A variable thought to lead to or produce changes in the dependent variable is called independent. An independent variable is thought to affect, or have an impact on, the dependent variable. Murray examined the effects on program success of two independent variables: "interest and training of mayor and staff" and "federal funding." For terminological convenience, independent variables are sometimes referred to as explanatory, predictor, or causal variables, and the dependent variable as the criterion.

To account for changes in the dependent variable, researchers link the criterion explicitly to independent variables in a statement called a hypothesis. A hypothesis formally proposes an expected relationship between an independent variable and a dependent variable.

The primary value of hypotheses is that they allow theoretical ideas and explanations to be tested against actual data. To facilitate this goal, hypotheses must meet two requirements. First, the concepts and the variables that they relate must be measurable.

Table 3.1 Data for Five Bureaucrats

Case          Race              Sex     Time Wasted on Job (minutes)  Attitude toward Job
Bureaucrat 1  White             Female  62                            Dislike
Bureaucrat 2  White             Male    43                            Neutral
Bureaucrat 3  African American  Male    91                            Like
Bureaucrat 4  Hispanic          Male    107                           Like
Bureaucrat 5  White             Female  20                            Dislike
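For readers who like to see data in code, one way to represent Table 3.1 is as a small data set in which each column is a variable and each row is a case (a sketch using the pandas library; the variable names are ours):

```python
import pandas as pd

# Each column is a variable; each row is a case (see Table 3.1).
bureaucrats = pd.DataFrame(
    {
        "race": ["White", "White", "African American", "Hispanic", "White"],
        "sex": ["Female", "Male", "Male", "Male", "Female"],
        "time_wasted_min": [62, 43, 91, 107, 20],  # minutes per day
        "attitude": ["Dislike", "Neutral", "Like", "Like", "Dislike"],
    },
    index=[f"Bureaucrat {i}" for i in range(1, 6)],
)

print(bureaucrats)
print(bureaucrats["time_wasted_min"].mean())  # average minutes wasted
```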

Although propositions connecting unmeasurable concepts (or those for which data are currently unavailable) are important in research, they cannot be evaluated empirically and must be treated as assumptions. Second, hypotheses must state in precise language the relationship expected between the independent and dependent variables. For example, two hypotheses guided the research of Dick Murray:

1. The greater the local involvement in a program, the greater is the chance of program success.

2. The greater the level of federal funding for a program, the greater is the chance of program success.

In both hypotheses, the expected relationship is called positive, because increases (decreases) in the independent variable are thought to lead to increases (decreases) in the dependent variable. Note that in a positive relationship, the variables move in the same direction: More (less) of one is thought to lead to more (less) of the other. In contrast, a negative or inverse relationship proposes that increases in the independent variable will result in decreases in the dependent variable, or vice versa. In a negative relationship, the variables move in opposite directions: More (less) of one leads to less (more) of the other. The following hypothesis is an example of a negative relationship:

3. The higher the degree of federal restrictions on administering a program, the less is the chance of program success.

Once the concepts specified by these hypotheses have been operationalized (measured), data can be brought to bear on them to evaluate the degree of empirical support for the anticipated relationships.
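As a rough sketch of what bringing data to bear can look like (all numbers here are invented for illustration), the sign of a correlation coefficient offers a simple check on whether an observed relationship is positive or negative:

```python
import numpy as np

# Hypothetical measurements for ten programs.
federal_funding = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5])  # $ millions
program_success = np.array([2.1, 2.4, 3.0, 2.8, 3.5, 3.9, 4.2, 4.0, 4.8, 5.1])  # success score
federal_restrictions = np.array([9, 8, 8, 7, 6, 6, 5, 4, 3, 2])                  # count of rules

# A positive coefficient is consistent with hypothesis 2;
# a negative coefficient is consistent with hypothesis 3.
print(np.corrcoef(federal_funding, program_success)[0, 1])       # near +1: positive
print(np.corrcoef(federal_restrictions, program_success)[0, 1])  # near -1: negative
```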

Hypotheses intended to provide an explanation for a phenomenon of interest (for example, success or failure of federal programs) most often propose a positive or a negative relationship between an independent and a dependent variable. One merit of this procedure is that because the direction of the relationship (positive or negative) is made explicit, it is relatively easy to determine the degree of confirmation for (or refutation of) the hypothesis.

However, researchers are not always this circumspect in stating hypotheses, and it is not unusual to find in the literature examples in which the independent variable is said to "affect," "influence," or "impact" the dependent variable, without regard for direction. This practice not only condones imprecision in thinking and hypothesis formulation but also creates difficulties in assessing empirical support for a hypothesis. For these reasons, this practice should be avoided.

An integrated set of propositions intended to explain or account for a given phenomenon is called a theory. The propositions link the important concepts together in anticipated relationships so that the causal mechanisms underlying the phenomenon are elucidated. In some of these propositions, it will be possible to operationalize the concepts and collect the data necessary to evaluate the hypothesized relationships empirically. However, as a consequence of difficulties in measuring some concepts or lack of available data, it may not be possible to test all propositions. Untested propositions constitute assumptions. Although assumptions lie outside the boundaries of empirical testing, they should not be accepted uncritically. Evidence from past research, as well as logical reasoning, can be used to assess their validity. For example, budget projections based on an assumption that all Americans will give contributions to charitable organizations in the next year are unreasonable. Also untenable is an assumption that the number of volunteer hours will quadruple next year, even if a nonprofit needs them.

A full-blown theory to explain an important phenomenon, such as representativeness in public bureaucracy, relationships between political appointees and career civil servants, or the effect of volunteering on building social capital, can be highly abstract and comprehensive. It may contain numerous assumptions not only about how concepts are related but also about how they are measured.

Because of difficulties in specifying and operationalizing all relevant elements and collecting all necessary data, theories are rarely, if ever, tested directly against actual data or observations. Instead, a simplified version of the theory called a model is developed and put to an empirical test. Usually the relationships proposed by the theory are depicted as a set of equations or, equivalently, as an arrow diagram; each arrow represents a hypothesis (see Figure 3.1). Regardless of the form, the model is a compact version of the theory intended for an empirical test. It identifies the essential concepts proposed by the theory and the interrelationships among them, and it presumes the existence of adequate measurement and relevant data. Ultimately, it is the model that is tested statistically, but the results will be brought to bear indirectly on the validity and utility of the theory.

As an example of a model, consider again the question of the determinants of success of social programs. Perhaps the activity of clientele groups leads to high levels of federal funding, which, in turn, increase the likelihood of program success. In addition, the activity of clientele groups may help to achieve program success directly through support at the local level. Federal funding, however, may be a mixed blessing. Although high levels of funding may enhance the prospects of success, they may also bring increased federal restrictions on administration of the program. These restrictions may reduce the chance of program success. This heuristic theory is by no means complete. To yield a more satisfactory explanation, we might want to add several components, such as the interest of elected officials in the program (recall the example of Bison, Kansas, and Dick Murray with which this chapter began), the organizational structure of the program, and how well the means of achieving program success are understood (for example, issuing checks for unemployment compensation versus improving mental health).

The relationships with program success that have been proposed formally here can be displayed schematically in an arrow diagram, in which each arrow represents an expected causal linkage between a pair of concepts. A negative sign above an arrow indicates an inverse relationship, and an unmarked arrow corresponds to a positive relationship. Figure 3.1 presents the resulting heuristic model of the success of social programs.
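To suggest what an empirical test of such a model might look like (a sketch with simulated data; the variable names, coefficients, and the use of ordinary least squares via the statsmodels library are our assumptions, not prescribed by the text), each arrow in Figure 3.1 can be written as a term in a regression equation:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Simulate data consistent with the arrow diagram in Figure 3.1.
clientele_activity = rng.normal(size=n)
funding = 0.6 * clientele_activity + rng.normal(size=n)  # activity -> funding
restrictions = 0.5 * funding + rng.normal(size=n)        # funding -> restrictions
success = (0.4 * clientele_activity + 0.5 * funding
           - 0.3 * restrictions + rng.normal(size=n))    # three arrows into success

df = pd.DataFrame({
    "clientele_activity": clientele_activity,
    "funding": funding,
    "restrictions": restrictions,
    "success": success,
})

# One equation per concept that arrows point into; each right-hand-side
# term corresponds to an arrow pointing at that concept.
m1 = smf.ols("funding ~ clientele_activity", data=df).fit()
m2 = smf.ols("restrictions ~ funding", data=df).fit()
m3 = smf.ols("success ~ clientele_activity + funding + restrictions", data=df).fit()
print(m3.params)  # expect positive, positive, and negative signs
```

In practice, the data would come from measurements of actual programs, and the estimated signs would be compared with those hypothesized in the arrow diagram.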

To the extent that data confirm the relationships proposed by a theory and tested in the model, support is obtained for the theory, and its validity is enhanced. Analogously, the failure to find empirical support for a theory detracts from its validity and suggests the need for revisions. However, researchers usually are more willing to subscribe to the first of these principles than to the second. Whereas they rarely question data that offer support for their theoretical ideas, in the face of apparent empirical disconfirmation, revision of the theory is seldom their immediate response. Instead, operational definitions are critically evaluated; possible problems in the selection, collection, and recording of the sample of data are considered; numerical calculations are checked; and so on. Given the state of the art of social science research, these are reasonable steps to take before revising the theory. However, if these measures fail to lead to alternative explanations for the lack of empirical support obtained (such as a biased sample of data) or possible procedures to amend this situation (for instance, by devising new operational definitions for some concepts), the researcher has no choice but to reconsider and revise the theory.

The revision usually leads to new concepts, definitions, operationalizations, and empirical testing, so that the process begins all over again. Generally, it is through a series of repeated, careful studies, each building on the results of earlier ones, that researchers come to understand important subject areas.

In public and nonprofit management, you are unlikely to develop a theory. Often, however, you will propose a model, obtain the necessary data, and test it using the statistical methods you will learn in this book.

Figure 3.1 Arrow Diagram and Model

[Arrow diagram relating four concepts: Activity of Clientele Groups → Level of Federal Funding; Activity of Clientele Groups → Program Success; Level of Federal Funding → Program Success; Level of Federal Funding → Federal Restrictions on Administration of a Program; Federal Restrictions on Administration of a Program → Program Success (negative)]