
Individuation and Decategorization

What happens when we acquire individuating information about a category member? A number of thoughtful models have been proposed to account for the complex relation between individuating and categorical information (Brewer, 1988; Fiske & Neuberg, 1990; Locksley, Hepburn, & Ortiz, 1982; Nisbett, Zukier, & Lemley, 1981; Rothbart & John, 1985). This relation is important because one putative effect of sustained contact with a category member is the acquisition of information that is either irrelevant to or incongruent with the category. This newly acquired information about a category member should result, through generalization, in stereotype change. Indeed, at least two models hold individuation/decategorization to be an important component of stereotype change (Brewer & Miller, 1984; Pettigrew, 1998).

According to Rothbart and John, individuation and generalization may work against each other to produce stereotype change. The same counter-stereotypic behavior that disconfirms the stereotype also reduces the goodness of fit between category and category member, making generalization less likely. Qualms about the efficacy of personalization and decategorization in producing stereotype change are shared by Brown and Turner (1981), Hewstone and Brown (1986), and Brown et al. (1999), albeit for somewhat different theoretical reasons. Change at the interpersonal level is not necessarily mirrored by change at the intergroup level, and indeed there is reason to think that decategorizing an outgroup member may inhibit stereotype change.

Two recent sets of studies examine the effect of individuating information on categorical judgments. The first re-examines the interesting work by Nisbett et al. (1981) on the dilution effect. Nisbett et al. argued that information nondiagnostic of a target behavior could none the less dilute the predictive power of diagnostic information. For example, consider a target behavior “child abuse,” which is predicted by “having a drinking problem,” but not by “managing a hardware store.” The category “having a drinking problem” is diagnostic of child abuse, but the behavior “managing a hardware store” is not. Nisbett et al. found that the addition of nondiagnostic information systematically decreased (diluted) the predictive strength of diagnostic information. Tversky’s (1977) features of similarity model, which assumes similarity to be a function of common minus distinctive features, was invoked to account for dilution. The addition of nondiagnostic information increases the number of distinctive features, and thus reduces perceived similarity.
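Tversky’s contrast model can be sketched as a toy computation: similarity is a weighted count of common features minus weighted counts of features distinctive to each object. The feature sets, weights, and function name below are illustrative assumptions, not taken from the original studies.

```python
def tversky_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Tversky's (1977) contrast model: similarity rises with common
    features and falls with features distinctive to either object."""
    common = len(a & b)
    a_only = len(a - b)   # features of a not shared by b
    b_only = len(b - a)   # features of b not shared by a
    return theta * common - alpha * a_only - beta * b_only

# Hypothetical feature sets, invented for illustration only.
category = {"drinking problem", "loss of control", "health risk"}
diagnostic_only = {"drinking problem", "loss of control", "health risk"}
diluted = diagnostic_only | {"manages hardware store", "owns a dog"}

# Adding nondiagnostic features leaves the common count unchanged but
# raises the distinctive count, so perceived similarity drops.
print(tversky_similarity(diagnostic_only, category))  # 3.0
print(tversky_similarity(diluted, category))          # 2.0
```

On this account, dilution falls out of the arithmetic alone: every nondiagnostic feature is a distinctive feature, so it can only subtract from the similarity between target and category.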

Peters and Rothbart (2000) argued that the dilution effect may be intimately related to category–exemplar dynamics, rather than to Tversky’s features of similarity. Specifically, they argued that the nondiagnostic information may have influenced the perceived strength of the diagnostic information, which in turn influenced prediction. Thus, a person who “has a drinking problem and who manages a hardware store” may be viewed as less of an alcoholic than a person who is simply described as “having a drinking problem.” Peters and Rothbart argued that the specific nondiagnostic behaviors used may have inadvertently reduced the strength of the diagnostic category by making the exemplar less typical of the category. If so, then it ought to be possible to create nondiagnostic behaviors that increase, decrease, or do not change the typicality of the stimulus person vis-à-vis the diagnostic category. Peters and Rothbart provided subjects with either 1, 3, or 5 pieces of nondiagnostic, individuating information that were either typical, atypical, or unrelated to the diagnostic category. As the amount of atypical information increased, the impact of the diagnostic information decreased, entirely consistent with the dilution effect found by Nisbett et al. However, contrary to the dilution effect, as typical nondiagnostic information increased, enhanced categorical prediction – the opposite of dilution – was obtained. With irrelevant nondiagnostic information, there was no change in prediction as the amount of that information increased. Thus the critical variable – the typicality of the categorical information – seemed to be an essential ingredient in determining whether dilution would or would not occur.

In general, however, it was easier to dilute than to enhance the diagnostic category.

Although there were a number of possible reasons for this finding, one explanation is based on the nature of categorical representation, as discussed earlier (Lakoff, 1987). A pure category alone may already contain a large number of typical elements, making it difficult, for example, to make an “alcoholic” even more alcoholic, although it is relatively easy to reduce the degree of alcoholism by including incongruent elements. This means that dilution may be the most frequently observed effect – as originally argued by Nisbett et al. – but for reasons related to the dynamics of stereotyping rather than to Tversky’s model of similarity.

Are mothers women?

Given the idealized nature of category labels, in which features consistent with the category predominate, it is probable that the addition of individuating information is more likely to weaken than strengthen the link between category and exemplar. Although at first blush the ease of “decategorizing” a category member may seem desirable, since it “releases” the individual from the tyranny of the category, it may be undesirable from the point of view of generalization.

There is a story told by an eminent developmentalist studying the gender role expectations of young girls whose mothers were employed in highly atypical occupations. In one case, where the mother’s job was driving a tractor-trailer cross country for a large trucking company, the daughter provided a list of the usual occupations appropriate for women: Secretaries, nurses, librarians, etc. When asked whether women could be truck drivers, the daughter said no. When the interviewer called attention to her mother’s own occupation, the daughter commented that “that is my mother, that is not women.” This anecdote illustrates the inverse relation between the amount of individuating information and the amount of generalization. Those category members about whom we have the greatest amount of individuating information, and who could potentially exert the greatest force for humanizing our abstract impressions of social categories, may well be those individuals whose attributes are least likely to generalize to the category as a whole.

An interesting experiment that examines this question directly was conducted by Scarberry, Ratcliff, Lord, Lanicek, and Desforges (1997). They created an experience with a category member (a male homosexual) under conditions that were most likely to show generalization to the category as a whole. The factors identified by Allport to facilitate contact-induced stereotype change were included in their experiment. Subjects interacted cooperatively with a confederate under equal-status conditions to work interdependently on a task to achieve a mutually desired goal, and under conditions in which the confederate was viewed in a highly favorable way. On the cooperative task, the homosexual confederate provided help to the subject through the use of analogies, but the nature of the analogies was varied experimentally. For some subjects the analogies were given in the abstract, such as “like when someone squeezes every bit of toothpaste out of the tube”; in the other condition the analogies were given as self-examples, such as “like when I squeeze every bit of toothpaste out of the tube.” Subjects’ attitudes toward homosexuals and toward three other stigmatized groups were assessed both pre- and post-contact, and attitudes toward the homosexual confederate were also assessed. Confederates who used abstract or self-based analogies were both highly liked and, most importantly, equally well liked. However, favorable attitudes toward homosexuals in general were greater in the condition where the confederate used abstract, rather than personal, analogies. A nice feature of this experiment is the careful control of the nature of the information presented to subjects. The informativeness of the analogies was virtually the same across conditions, and only the referent (abstract vs. self) varied.
When the analogies referred to the personal behavior of the homosexual confederate, there was less generalization than when the analogies were abstract and unrelated to the confederate. It appears that the individuating information, seemingly quite unrelated to the stereotype of homosexuals, succeeded in isolating the individual from the category.

There may be at least two ways to interpret the findings of Scarberry et al. One possibility is that some of the analogies used (e.g., woodworking) may actually be slightly disconfirming of the stereotype of homosexuals, and those analogies when personally associated with the confederate made him less typical of the category. This interpretation would be consistent with Peters and Rothbart’s interpretation of Nisbett et al.’s dilution findings. It is also possible, however, that even mundane, truly stereotype-irrelevant information can reduce the goodness of fit between category and exemplar, particularly for “strong” categories, that is, categories which carry strong implication or have high “inductive potential” (Rothbart & Taylor, 1992). Strong categories, such as homosexual, often connote a highly limited and evaluatively potent set of behaviors, and it is possible that even common, mundane, nondiagnostic behaviors may function to reduce the goodness of fit between category and exemplars for such strong categories.

When Allport (1954) wrote that some labels are “. . . exceedingly salient and powerful. They tend to prevent alternative classification, or even cross-classification. . . . ‘labels of primary potency’ . . . act like shrieking sirens, deafening us to all finer discriminations that we might otherwise perceive,” he was referring to such strong categories, and his intuition is compelling: Once these categories are applied to individuals they may inhibit the application of other, even independent or neutral categories. Saltz and Medow (1971) found that young children have difficulty in applying more than one social category to an individual, and the same may be true for adults when the categories are strong (that is, have high inductive potential). For very strong categories, any individuating information that is not directly implied by the category label may serve to decategorize the individual by making the exemplar a less good fit to the category. If so, the fact that almost any information can liberate the individual from the stigma of category membership may be received as good news, but again the downside is that the unrealistically extreme image of such categories may remain unperturbed by the reality of the members who make up that category.

To summarize the argument thus far, there are powerful categorization processes that work against contact-induced stereotype change. The same processes that individuate the category member – that is, that distance the member from the category – work against generalization from the individual to the category. The limitations of contact-induced stereotype change are not in the author’s view a moral imperative, but a theoretically derived prediction with considerable empirical support. Indeed, the arguments made can be used to inform research on contact to increase the likelihood of stereotype change, as shown by Hewstone and Lord (1998). Most generally, any techniques that remind an observer that atypical group members are none the less category members should increase the likelihood of generalization. The problem of intimately known outgroup members is particularly vexing, however, since high levels of individuating information about outgroup members may lead them to be only weakly associated with the category label.

Emphasizing their category membership, and particularly their category attributes that are typical, may be of value, but the power to compartmentalize poor-fitting group members should not be underestimated (cf. Kelman’s (1992) work on small-group interactions between Jews and Arabs in Israel).

Our discussion thus far has been focused on the complex relation between a category and its members, and we now wish to turn to the basic question of how we define, explicitly or implicitly, the nature of category membership through the placement of category boundaries.

The nature of group boundaries

Any discussion of ingroup–outgroup relations accepts as a basic premise the importance of a boundary dividing one’s own group from others. There is little need to remind social psychologists raised in the tradition of Asch (1948) and Lewin and Grabbe (1945) that the importance of such boundary markers lies, not necessarily in the reality of the external world, but in the phenomenal representation of that world. Two studies dealing with issues of boundary markers, one explicitly and the other implicitly, will be summarized and used to speculate on the relation of such boundary markers to the modification of outgroup stereotypes.

A study by Rothbart, Davis-Stitt, and Hill (1997) examined the impact of category labels and visual boundary markers on similarity judgments. In one experiment, they presented subjects with pairs of names of male actors located along a continuous, percentile scale of political liberalism (e.g., Alan Alda was located at a point making him more liberal than 85% of the population). Each subject judged the similarity between a number of pairs, some placed 10 units apart and others placed 15 units apart. The presence of boundary markers at the quartiles of the scale was systematically varied across subjects.

Boundary markers were either verbal labels (e.g., “moderately conservative,” “moderately liberal”) or visual markers (solid vertical “tick” marks). One group received a continuum with no markers of any kind (a baseline control), another received a continuum with both verbal and visual boundary markers, and in the two other conditions there was the presence of one type of boundary marker but not the other. The results were clear: A given pair of actors was judged most similar to each other when no boundaries existed between them, and perceived similarity decreased as a function of the number of interposed boundary markers. The effects of the verbal and visual boundary markers were each significant and additive. Thus, even though the verbal and visual markers added nothing to the underlying reality of the scale, both types of markers – when interposed between stimulus persons – decreased the perceived similarity between them. A comparison between the baseline control condition and the condition with two boundary markers present indicates that there were two effects of category boundaries: 1) To increase the perceived similarity within categories, and 2) to decrease the perceived similarity between categories. These two effects constitute the phenomenon Turner, Hogg, Oakes, Reicher, and Wetherell (1987) have called “metacontrast” (cf. also Tajfel & Wilkes, 1963).

The design of the experiment also allowed a comparison between “reality” and cate- gorization. Consider two different types of pairs: Those separated by 10 units and those separated by 15 units. The unit distance between the pairs represents the “reality” of the scale, and the judgments in the control baseline condition, where no boundaries are present, appropriately show greater similarity for the 10 unit than for the 15 unit pairs.

Now consider the case in which the actors separated by 10 units are in different categories, and those separated by 15 units are present in the same category: Now the 15 unit pairs are judged more similar than are the 10 unit pairs. At least for this range of scale values, categorization is able to override the effects of reality. Category boundaries appear to be treated as informative even when they add little or no information to what is already known about the objects subsumed by the categories. If the opening example from that article is paraphrased, it is as if a farmer living at the boundary of Poland and Russia, after learning that his house is just inside the Polish border, exclaims with relief, “Thank God, no more Russian winters!” Similar findings have been obtained by Allison and Messick (1985) and Mackie and Allison (1987) in the context of group attributions, where the criterion for electoral success is varied.
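One way to express this pattern is a toy similarity function in which judged similarity falls with scale distance but takes an extra penalty for each boundary marker interposed between the two positions. The baseline of 100, the distance weighting, and the size of the boundary penalty are all invented for illustration; the point is only that a sufficiently large penalty lets categorization override raw distance, as in the 10-unit versus 15-unit comparison above.

```python
def judged_similarity(pos_a, pos_b, boundaries, penalty=8.0):
    """Toy model: similarity starts at 100, drops one point per scale
    unit of distance, and drops a further fixed penalty for every
    boundary marker lying strictly between the two positions."""
    lo, hi = sorted((pos_a, pos_b))
    crossed = sum(1 for b in boundaries if lo < b < hi)
    return 100 - (hi - lo) - penalty * crossed

quartiles = [25, 50, 75]  # boundary markers at the scale quartiles

# 15 units apart, but both positions fall within one category...
same_cat = judged_similarity(30, 45, quartiles)   # no boundary crossed
# ...versus 10 units apart, straddling the quartile boundary at 50.
diff_cat = judged_similarity(45, 55, quartiles)   # one boundary crossed

print(same_cat > diff_cat)  # True: categorization overrides distance
```

With these invented numbers, the within-category pair scores 85 and the closer but cross-boundary pair scores 82, reproducing the metacontrast pattern: interposed boundaries shrink between-category similarity even when they carry no information about the underlying scale.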

Rothbart et al. explicitly manipulated the presence of boundary markers and examined the effects on perceived similarity. A study by Maurer, Park, and Rothbart (1995), although not explicitly designed to examine the effects of category boundaries, may usefully be interpreted in this way. Maurer et al. tested some important ideas put forth earlier by Park, Ryan, and Judd (1992) about subgrouping and perceived intragroup variability, and considered how subtyping may differ from subgrouping. Earlier work by Park et al. showed that subjects who first thought about the characteristics of the subordinate groups that make up the larger superordinate group then rated the superordinate group as less homogeneous. This phenomenon, referred to as subgrouping, has important implications for modifying the perceived complexity/heterogeneity of outgroups. How, then, does subgrouping differ from subtyping, where poor-fitting members of a category are also relegated to a subordinate category, resulting in a more homogeneous view of the superordinate group (e.g., Johnston & Hewstone, 1992; Weber & Crocker, 1983)? The goal of the Maurer et al. research was to determine whether the creation of subgroups and subtypes – both of which involve the establishment of subordinate categories – leads to different effects on the perceived strength of the stereotype.

Subjects were given information about a group of 16 stimulus persons involved in a Big Brother program, where the group members varied in typicality. One group was given this information without any instructions (the control condition), and was then asked to make typicality judgments as well as judgments about the group as a whole (including judgments of intragroup variability). A second group was given “subtyping” instructions to sort the group members into those who fit and those who do not fit the image of the group before making the same judgments as did the control condition. A third group was given “subgrouping” instructions to sort the members into as many piles as they wished, trying to minimize differences within subgroups while maximizing differences between subgroups before making their judgments. Compared to the control condition, the subtyping condition had a more stereotyped and homogeneous image of the target group, while the subgrouping condition had a less stereotyped and more heterogeneous image. We believe the reason for this was that the subjects were implicitly drawing the group boundaries differently for the different conditions. In the subtyping condition, atypical members were functionally excluded from group membership, a conclusion which was supported by a mediational analysis based on typicality ratings; in the subgrouping condition, the boundaries enclosed all of the subgroups, and the heterogeneity of the included groups was apparent. Stated differently, the same atypical stimulus persons were functionally excluded from the group representation in the subtyping condition, while they were included in the subgrouping condition.

There are two potentially important implications of this research. One is that the perception of typicality is influenced not only by the “computation” of matching features between category and exemplar, but also by the processing goals of the subjects. Context can determine whether subjects do or do not include the atypical members in their implicit calculations of group impressions, and this is an important phenomenon that needs to be better understood. In this research, the subtyping and subgrouping instructions had very different effects on how subjects thought about the relation of group members to the group as a whole. The relatively simple dichotomous judgments (good fit vs. poor fit) required by the subtyping instructions may have led subjects to view group members in a unidimensional way, promoting an exaggerated difference between the attributes of good- and poor-fitting members (in comparison to the control condition).

In contrast, the more complex subgrouping instructions, which encouraged subjects to examine similarities and differences between and among individual stimulus persons and between and among subgroups, may have had at least two important consequences. First, subjects may have been led to realize that each stimulus person was a complex set of attributes, some of which fit and some of which did not fit the stereotype. In contrast to the subtyping instructions, the multiattribute nature of the stimulus persons was emphasized,