For a start, two groups are chosen such that they do not differ from each other in significant respects (which might affect the hypothesized relationship among variables) except by chance.
Any one of these groups may be chosen as the 'experimental' group and the other as the 'control' group. The 'experimental' group is exposed to the assumed causal variable while the 'control' group is
not. The two groups are then compared in terms of the assumed effect (dependent variable). This structure makes possible the securing of three types of evidence required for testing causal hypotheses.
Evidence of the first type, i.e., of concomitant variation, is provided very simply in an experiment since the investigator can easily see or determine whether the assumed effect occurs more frequently among subjects who have been exposed to the assumed cause than among those who have not been exposed to it.
Evidence of the second type, i.e., that (Y) did not occur before (X), is also easily obtained because the experimental and control groups are chosen in such a way that there is reason to believe that the two groups do not differ from each other in terms of a referent of (Y) before exposure to (X). Alternatively, the actual 'before' measurements of these two groups afford the basis for saying that the groups did not differ in respect of (Y) before exposure to (X). This being the case, the difference between these two groups after exposure of one of them to (X) can be said to have decisively followed the experimental variable, i.e., (X).
Evidence of the third type, that which rules out other factors as the possible determining conditions of (Y), may be secured in several ways in an experiment. Such possible causal conditions may be:
(a) factors that have occurred in the past or are more or less enduring characteristics of the subjects;
(b) contemporaneous events other than the exposure to the experimental variable;
(c) maturational or developmental changes (or changes due to inertia, as in physics) in the subjects of the experiment during the period of the experiment; and
(d) the influence of the measurement process itself.
Different procedures have been evolved to eliminate each of the above types of factors as the possible determining conditions of the effect (Y). These shall be discussed at length when we consider the basic types of experimental design and their ramifications.
The entire design of an experiment has the function of providing for the collection of evidence in such a way that inferences of a causal relationship between the independent and the dependent variables can be legitimately drawn. However, certain aspects are especially important in this regard, viz., the method of selecting experimental and control groups, the points in time when the dependent variable is measured, the pattern of control groups used and the number of possible causal variables systematically included in the study.
Let us turn to consider the issues relating to the selection of experimental and control groups when designing an experiment.
In any research design that involves comparison of two or more groups of subjects who have been exposed to different experimental treatments, there is an underlying assumption that the groups being compared were equivalent before the introduction of the experimental treatment. Clearly, the goal of creating groups that are equivalent in all respects is impossible of attainment. Before considering how this problem can be tackled in a practical and satisfactory way, we should be able to appreciate the rationale behind having such equivalent groups. The first reason is to provide a basis for inferring that the differences which may be found on the dependent variable (assumed effect) do not result from or reflect initial differences between the two groups. The second reason is that of increasing the sensitivity of the experiment by making it possible that small effects of the experimental variable are registered which might otherwise be dimmed by the effect of other factors.
The goal of protecting the validity of the experiment by ensuring that the experimental and control groups differ initially only by chance is achieved by a procedure termed 'randomization'. The other goal, i.e., increasing the sensitivity of the experiment so that the effects of the assumed causal variable will be clearly discernible even if they are relatively small and when there are relatively few subjects, is achieved by 'matching' procedures.
RANDOMIZATION
Randomization involves the random assignment of the members of a group of subjects to the experimental and control groups. The assignment procedure must give each subject the same chance of being assigned to any of the alternative groups. This can be achieved by flipping a coin for each subject; e.g., if it falls 'heads' the subject may be assigned to the control group, and if 'tails', to the experimental group. Thus, the procedure is such that in the assignment of any subject, the researcher's personal judgement is inconsequential.
This does not, of course, mean that the experimental and control groups will be exactly alike, but that whatever differences exist between them prior to exposure to the experimental variable are solely due to chance or the operation of the probability principle. If, after one of the groups is exposed to the experimental treatment, the two groups are found to differ by more than what could be expected by chance, the researcher may infer that the experimental variable has led to this difference. It must be remembered, however, that this inference can be made only tentatively, subject to the possibility that some other factor may have led to it.
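The coin-flip procedure described above can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and the use of a seed (for reproducibility) are our own assumptions, not part of the text:

```python
import random

def randomize(subjects, seed=None):
    """Randomly assign each subject to the experimental or control group.

    Every subject has the same chance of landing in either group, so any
    pre-existing differences between the groups are due to chance alone.
    """
    rng = random.Random(seed)
    experimental, control = [], []
    for subject in subjects:
        # The coin flip: 'heads' -> experimental, 'tails' -> control.
        (experimental if rng.random() < 0.5 else control).append(subject)
    return experimental, control

# Assign twenty hypothetical subjects (numbered 0-19) at random.
exp, ctl = randomize(list(range(20)), seed=42)
```

Note that no property of any subject enters the assignment decision, which is precisely what makes the researcher's personal judgement inconsequential.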
MATCHING
Matching, simply stated, involves pairing the subjects for assignment to the experimental or control group in such a manner that a particular type of subject assigned to, say, the experimental group is balanced by assigning its exact counterpart to the control group.
For example, a male, 25 years old, living in a city and of average intelligence, placed in the experimental group is paired with a city-dwelling male of the same age and intelligence in the control group. Matching is typically effective in increasing the sensitivity of the experiment.
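Case-by-case (precision) matching of this kind can be sketched as follows. The representation of subjects as dictionaries and the function name are hypothetical; the sketch also surfaces the unmatched cases, a cost of precise matching discussed later in the text:

```python
def match_pairs(pool, criteria):
    """Pair subjects that are identical on all matching criteria.

    Returns (experimental, control, unmatched).  Subjects for whom no
    exact counterpart exists in the pool are left unmatched.
    """
    experimental, control, unmatched = [], [], []
    remaining = list(pool)
    while remaining:
        subject = remaining.pop(0)
        key = tuple(subject[c] for c in criteria)
        # Look for an exact counterpart on every criterion.
        partner = next((s for s in remaining
                        if tuple(s[c] for c in criteria) == key), None)
        if partner is None:
            unmatched.append(subject)
        else:
            remaining.remove(partner)
            experimental.append(subject)
            control.append(partner)
    return experimental, control, unmatched

pool = [
    {"sex": "M", "age": 25, "residence": "city"},
    {"sex": "M", "age": 25, "residence": "city"},
    {"sex": "F", "age": 30, "residence": "village"},
]
exp_g, ctl_g, left = match_pairs(pool, ["sex", "age", "residence"])
```

In this invented pool, the two 25-year-old city-dwelling males form a pair, while the third subject has no counterpart and must be discarded.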
Matching ensures that the experiment will reveal true differences brought about by the experimental variable, even though these differences may be small as compared to those produced by other variables. The matching of individuals is understandably a very difficult task for the following reasons:
1. If matching is to be precise and subjects are to be matched on several criteria, such as age, sex, nativity and educational status, there must be a large number of cases to select from in order to achieve an adequate pairing. All of these cases will have to be measured in the relevant respects, but only a few can be used. The more precise the matching, the greater the number of cases for which no match may be available.
2. It is often quite difficult to know exactly which factors are the most important ones for purposes of matching. Matching with some degree of precision on more than two or three factors is hardly possible.
3. It is often difficult to obtain adequate measures of the covert and intangible factors on which matching is necessary, e.g., attitudes, intelligence, aspirations, morale, etc. It is obvious that if adequate measures of the assumed relevant factors are not available, matching is likely to be inaccurate.
It is worthy of note that matching is not a substitute for randomization; it is rather a supplement to it. Matching can take account of only a few variables; therefore, those that are unaccounted for but nevertheless make up the complexion of a group should be randomly distributed between the experimental and control groups.
Should the research interest be in a functioning collectivity (family, clique, classroom, etc.), it is appropriate to match group with group rather than individual with individual.
One method of matching, technically called frequency distribution control, is an attempt to match an experimental with a control group in terms of the overall distribution of a given factor or factors within the two groups. For example, if age were relevant to the effects being studied, frequency distribution control would ensure that the average ages in the two groups (i.e., experimental and control) are alike and the distribution of ages in the two groups is similar. This method is thus an attempt to get some of the advantages of matching case by case without having to incur the cost of losing many (unmatched) cases as we do in precision control matching.
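The idea of frequency distribution control, comparing a factor's overall distribution rather than pairing individual cases, can be sketched roughly as follows (the function, the tolerance parameter and the sample ages are all invented for illustration):

```python
from collections import Counter

def distributions_similar(experimental_ages, control_ages, mean_tolerance=1.0):
    """Check whether two groups are alike on a single factor (here, age),
    both in their averages and in their frequency distributions."""
    mean_exp = sum(experimental_ages) / len(experimental_ages)
    mean_ctl = sum(control_ages) / len(control_ages)
    means_alike = abs(mean_exp - mean_ctl) <= mean_tolerance
    # Compare the frequency distributions themselves, not just the means.
    same_shape = Counter(experimental_ages) == Counter(control_ages)
    return means_alike, same_shape

# Both groups average 30 years, yet their age distributions differ.
means_alike, same_shape = distributions_similar([20, 30, 40], [25, 30, 35])
```

The invented example makes the point numerically: equal averages alone do not guarantee similar distributions, which is one of the pitfalls of this method noted in the text.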
The frequency distribution control method, however, is not without its disadvantages. Although distributions on single factors are equated, the groups may actually be badly mismatched on a constellation of these factors. It would be wrong to assume that the distributions in the two groups are similar simply because their averages are similar. Secondly, even though a statistical test indicates that the distributions in the two groups do not differ significantly, the researcher is not justified in concluding that they are equivalent.
It will now be appropriate to consider the different types of experimental design.
If the researcher wishes to test the hypothesis that X is the cause of Y by comparing a group that has been exposed to X with one that has not been so exposed, it is obviously essential to measure the two groups with respect to Y, either during or after their exposure to X. Sometimes it is desirable, or even necessary, to have in addition measures of their position with respect to Y before they have been exposed to X. The point of time at which the dependent variable, i.e., the assumed effect, is measured provides a basis for classifying experiments into two main groupings.
The control groups instituted provide a basis for further sub-classifications.
Experiments

THE 'AFTER-ONLY' TYPE
(Measurement of the dependent variable only after the group is exposed to the experimental/independent variable)

THE 'BEFORE-AFTER' TYPE
(Measurement of the dependent variable before and after the group is exposed to the experimental/independent variable)
    - Before-After (single group)
    - Before-After (with one control group)
    - Before-After (with two control groups)
    - Before-After (with three control groups)
The 'After-Only' Experimental Design:
The After-only experiment in its basic outlines may be represented by the following procedure:
Condition                              Experimental Group    Control Group
Before Measurement                     No                    No
Exposure to Experimental Variable      Yes                   No
Exposure to uncontrolled factors       Yes                   Yes
After Measurement                      Yes (Y2)              Yes (Y'2)

Change = Y2 - Y'2

The procedure characteristic of the After-only experiment may be described as follows:
(1) Two equivalent groups are selected. Any one may be used as the experimental group and the other as the control group.
As said earlier, the two groups are selected by randomization procedures with or without supplementary 'matching'.
(2) Neither of the two groups is measured in respect of the characteristic which is likely to register change consequent to the effect of the experimental variable. The two groups are assumed to be equal in respect of this characteristic.
(3) The experimental group is exposed to the experimental variable (X) for a specified period of time.
(4) There are certain events or factors whose effects on the dependent variable are beyond the control of the experimenter. Try as hard as he might, he cannot control them, so these factors may be called uncontrolled events. Needless to say, both the experimental and the control groups are equally subject to their influence.
(5) The experimental and control groups are observed or measured with respect to the dependent variable (Y) after (sometimes, during) the exposure of the experimental group to the assumed causal variable (X).
(6) The conclusion whether the hypothesis 'X produces Y' is tenable is arrived at simply by comparing the occurrence of Y (or its extent or nature) in the experimental group after exposure to variable X with the occurrence of Y in the control group which has not been exposed to X.
In the tabular representation above, Y2 and Y'2 (after measures) are compared to ascertain whether X and Y vary concomitantly.
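The comparison of the after-measures can be sketched numerically. The measurement values below are invented purely for illustration:

```python
def after_only_effect(y2_experimental, y2_control):
    """Estimate the effect of X in an After-only design.

    The control group's after-measure (Y'2) absorbs the uncontrolled
    events and developmental changes that affected both groups, so the
    difference Y2 - Y'2 is attributed to X -- tentatively, since the
    groups' initial equivalence is only assumed, never measured.
    """
    return y2_experimental - y2_control

# Hypothetical after-measures for the two groups.
change = after_only_effect(74.0, 68.5)  # Y2 = 74.0, Y'2 = 68.5
```

The single subtraction is the whole of the analysis; everything else in the design rests on how the two groups were set up in the first place.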
The evidence that X preceded Y in time, is acquired from the very method of setting up the two groups. The two groups are selected
in such a manner that there is reason to assume that they do not differ from each other except by chance in respect of the dependent variable Y.
The final problem, that of eliminating the effect of other factors such as contemporaneous events or maturational processes, is dealt with on the basis of the assumption that both groups are exposed to the same external events and hence undergo similar maturational or natural developmental changes between the time of selection and the time at which Y is measured. If this assumption is justified, the position of the control group on the dependent variable (Y'2) at the close of the experiment includes the influence of the external uncontrolled events and natural developmental processes that have affected both groups. Thus, the difference between Y2 and Y'2 may be taken as an indication of the effect of the experimental variable. It must be borne in mind, however, that the external events and the developmental processes may interact with the experimental variable to change what would otherwise have been its effect operating singly. For example, the effect of a medicine M may be different when the atmospheric conditions or climate interact with the medicine. Thus, babies may register a greater increase of weight when the medicine and climate interact with each other as compared to the increase that may be attributed to medicine (M) and climatic conditions (A) operating on the babies independently.
The major weakness of the After-only experimental design is, obviously, that the 'before' measurements are not taken. Both groups are assumed to be similar in respect of the 'before' measure on the dependent variable. Unless the selection of the experimental and control groups is done in such a meticulous manner that it warrants such an assumption, it is quite likely that the effect the researcher attributes to the experimental variable may really be due to the initial differences between the two groups. Again, as we shall shortly see, 'before' measurements are desirable or advisable for a variety of reasons. This facility is lacking in the After-only design.
We cannot afford to overlook the possibility that in certain experimental situations 'before' measurements are not feasible owing to certain practical difficulties. Again, in certain situations, as we shall have occasion to appreciate, 'before' measurements may not be advisable and the safeguards may be quite prohibitive in cost.
Under such circumstances the After-only design may be a reasonably good choice provided, of course, that meticulous care is exercised in selecting the groups as equivalents.