A Review and an Empirical Investigation of the Evaluation Formation Process
Gina L. Miller, Naresh K. Malhotra, and Tracey M. King

Abstract
There is a great need for a comprehensive, theoretically sound model of the product perception or evaluation formation process in the discipline of marketing. From a review of the categorization literature, a categorization-based model of the product evaluation formation process was developed, which assists in the prediction of set membership (i.e., evoked, inert, or inept). A summary of the theoretical background, as well as a description of the model, is presented, and a research design is used to empirically test the category-based processing and piecemeal processing branches of the model. Through a survey of undergraduates, various methodologies were tested with regard to their abilities to predict set membership. The findings suggest that powerful tools exist for the prediction of set membership.
Conclusions regarding the theoretical and methodological contributions of this study are drawn, and managerial implications of the findings, as well as recommendations for future research, are discussed.
A firm will gain a distinct advantage in the market if it is able to determine whether its product is included in a consumer’s evoked set, and if not, where its product stands in relation to the evoked set. To assist in this task, researchers need to develop a model of the product evaluation formation process, because once this process is understood, it can be used to predict set membership. This model development and the demonstration of its subsequent power for set prediction (i.e., prediction of whether a product is contained in a consumer’s evoked, inert, or inept set) are the main contributions of this chapter.
These goals are accomplished by first giving a brief overview of the evoked set concept and its relationship to categorization theory. Then, the literature on categorization theory is reviewed, which provides a theoretical framework for the product evaluation formation (EF) model. An empirical research design for testing the set prediction ability of the model is laid out in two parts:
category-based processing and piecemeal processing. The procedures for carrying out the analyses of these two parts of the empirical investigation are outlined, and the results are discussed.
Finally, conclusions are drawn, contributions made by the study are highlighted, and recommen- dations for future research are identified.
Categorization Literature Review
The categorization approach to information processing posits that consumers store information in memory around a set of category expectations (Rosch, 1975; Rosch and Mervis, 1975; Rosch et al., 1976). Categorization itself involves the comparison of a marketing stimulus (e.g., brand or product) with knowledge stored in memory (see Basu, 1993). A consumer’s knowledge store is comprised of a variety of groups, or “sets,” of brands. The evoked set can be thought of as a subset of the awareness set—the brands a consumer is aware of or expects to be a member of a certain category. Once a consumer is aware of a brand, it can fall into one of three subsets: the inert set, the inept set, or the evoked set. The inert set is comprised of those brands for which the buyer has a neutral evaluation (i.e., he does not think positively or negatively about them). This may be because he does not have sufficient information to make such a judgment or because he has no reason to try them. The brands that have been rejected by the consumer, owing to such factors as a bad experience with the brand or exposure to negative information, make up the inept set. The evoked set is then the set of brands that the consumer evaluates positively and considers for purchase. In order to simplify their decision-making processes, consumers categorize the brands available in the marketplace into one of these three sets (Narayana and Markin, 1975; see also Brown and Wildt, 1991; Nedungadi, 1990; Turley and LeBlanc, 1995).
When faced with a choice, consumers recall from memory the products that would fulfill their needs in a positive manner (the evoked set) and make their final choice from this set (Ratneshwar and Shocker, 1991). If a product is not included in the consumer’s evoked set, then it will not be chosen.
This realization holds considerable implications for the application of categorization theory to product evaluations, since understanding how consumers categorize or make product evaluations would be invaluable to marketers. In other words, “Categorization theory is particularly relevant to understanding consumer behavior because consumers face a complex choice environment replete with brands having both shared and unique features; consumers may use categorization to simplify and structure their environment” (Sujan and Tybout, 1988, p. 50). In fact, in recent years consumer researchers have investigated a variety of issues in product categorization—for example, consumers’ mental representations of product categories (e.g., Loken and Ward, 1990;
Ratneshwar and Shocker, 1991; Sujan and Bettman, 1989), the relation between categorization and consideration sets (e.g., Hutchinson, Raman, and Mantrala, 1994; Nedungadi, 1990;
Ratneshwar, Pechmann, and Shocker, 1996; Ratneshwar and Shocker, 1991), and how categorization affects consumers’ preferences and choices (e.g., Goodstein, 1993; Meyers-Levy and Tybout, 1989; Stayman, Alden, and Smith, 1992; Sujan, 1985). Basically, a strong understanding of categorization theory assists in making accurate predictions regarding how consumers interpret and respond to marketing stimuli (Sujan and Dekleva, 1987; Sujan and Tybout, 1988).
It is essential to review the categorization literature when conducting any type of information processing research, since the grouping of similar concepts (i.e., categorization) results in enhanced information processing efficiency and cognitive stability (Cohen and Basu, 1987; Corter and Gluck, 1992; Ozanne, Brucks, and Grewal, 1992). For the purposes of this chapter, two main classes of categorization models are examined: similarity-based models and theory-based models. Then a third type, known as mixed models, is discussed in an effort to integrate the two previously mentioned classes of models (see Figure 4.1).
Similarity-Based Categorization Models
Most theoretical accounts of categorization are built around the assumption that similar objects are grouped together; thus, similarity is often assumed to be the primary determinant of category
representations (Murphy and Medin, 1985; Rosch and Mervis, 1975; Smith and Medin, 1981).
Similarity-based models take a bottom-up or stimulus-based view of categorization where the perception of salient features of stimuli is critical in determining category membership (Ratneshwar et al., 2001). The three most commonly employed similarity-based models of the categorization process are the classical view, the prototype view, and the exemplar view (Medin and Smith, 1984; Smith and Medin, 1981).
Classical View
The classical view is based on two assumptions. The first is that “the representation of a concept is a summary description of an entire class, rather than a set of descriptions of various subsets or exemplars of that class.” Second, “the features that represent a concept are singly necessary and jointly sufficient to define that concept. For a feature to be singly necessary, every instance of the concept must have it;
for a set of features to be jointly sufficient, every entity having that set must be an instance of the concept” (Smith and Medin, 1981, p. 23). These assumptions imply that all category members are equally representative of the category, and that category membership is an “all or nothing” prospect. In other words, if a given concept lacks even one of the critical features of a category, it cannot be a member of that category. If this nonconforming situation exists, one of two things must occur: the set of critical features for the category must be modified, or the concept must be placed in another, possibly new, category. Since many thought this view too restrictive, the idea that concept representations are based on properties that are only characteristic or typical of category exemplars evolved; this is known as the probabilistic view. This shift is considered to be the first of two major shifts that have occurred in categorization theory and is illustrated by the prototype view (Medin and Coley, 1998).
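The “singly necessary and jointly sufficient” membership test described above can be sketched in a few lines of code (a minimal illustration; the category and its defining features are hypothetical examples, not drawn from the source):

```python
# Classical view: membership is "all or nothing" -- a concept belongs to a
# category only if it possesses every one of the category's critical features.

def is_member(concept_features: set, critical_features: set) -> bool:
    # Singly necessary: every critical feature must be present in the concept.
    # Jointly sufficient: possessing all of them guarantees membership.
    return critical_features <= concept_features

# Hypothetical defining features for the concept "bachelor."
BACHELOR = {"human", "male", "adult", "unmarried"}

is_member({"human", "male", "adult", "unmarried", "tall"}, BACHELOR)  # True
is_member({"human", "male", "adult"}, BACHELOR)  # False: lacks "unmarried"
```

Note that under this view the extra feature “tall” is irrelevant; only the absence of a critical feature can block membership.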
Prototype View
The prototype view may be described by the following: “The representation of a concept is a summary description of an entire class; and the representation of a concept can not be restricted to a set of necessary and sufficient conditions; rather, it is some sort of measure of central tendency of the instances’ properties or patterns” (Smith and Medin, 1981, p. 61). In other words, prototype models assume that the stored category representation is a summary of the most typical feature values for members of a category, and that new stimuli are classified on the basis of their similarities to these prototypes (Hampton, 1995). The relevant literature encompasses three different representations of a prototype: featural, dimensional, and holistic.

Figure 4.1 Categorization Models
[Figure 4.1 is a tree of categorization models: similarity-based models (the classical, prototype, and exemplar views, with featural, dimensional, and holistic prototype approaches and a summed similarity extension of the exemplar view); theory-based models (empirical evidence: conceptual coherence, goal-derived categories, concept instability, and frame or schema models); and mixed models (surface similarity, deep similarity, and context effects).]
Featural Approach. The critical assumption of the featural approach is that “the features that represent a concept are salient ones that have a substantial probability of occurring in instances of the concept” (Smith and Medin, 1981, p. 62). Under this approach, each feature associated with a category is accompanied by a weight that reflects the saliency of the feature, thus making this an analytical approach. For example, features such as “feathered” and “winged” would have high weights for the concept of bird, since they always occur on category members. Less prevalent, or salient, features such as “flies” and “sings” would have lower weights.
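The weighting scheme of the featural approach can be illustrated with a short sketch (the weights below are hypothetical and simply echo the bird example; the source does not specify numeric values):

```python
# Featural prototype: each salient feature carries a weight reflecting its
# probability of occurring in category instances; a stimulus's match to the
# prototype is the sum of the weights of the features it shares.

BIRD_PROTOTYPE = {"feathered": 1.0, "winged": 1.0, "flies": 0.8, "sings": 0.5}

def featural_similarity(features: set, prototype: dict) -> float:
    # Sum the weights of the prototype features present in the stimulus.
    return sum(w for feature, w in prototype.items() if feature in features)

robin = {"feathered", "winged", "flies", "sings"}
penguin = {"feathered", "winged", "swims"}
featural_similarity(robin, BIRD_PROTOTYPE)    # sums all four weights
featural_similarity(penguin, BIRD_PROTOTYPE)  # sums only "feathered" and "winged"
```

The robin, sharing every weighted feature, scores higher than the penguin, which lacks the lower-weighted “flies” and “sings.”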
Dimensional Approach. The dimensional approach is also analytical, and its critical assumption is that “any dimension used to represent a concept must be a salient one, some of whose values have a substantial probability of occurring in instances of the concept; and the value of a dimension represented in a concept is the (subjective) average of the values of the concept’s subsets or instances on this dimension” (Smith and Medin, 1981, p. 103). Thus, dimensions are assigned weights, as in the featural approach. The main distinction between this approach and the featural approach is that the dimensions can be physically or psychologically continuous (e.g., size); they do not have to be discrete (e.g., winged) as in the featural approach.
Holistic Approach. The third and final approach is the holistic one, which is nonanalytic. In this case, a prototypical concept is represented by a single overall representation of the concept.
Smith and Medin (1981) treat the holistic approach as a template theory. A template is isomorphic—it looks like the object it represents (i.e., it is similar to a picture of the object). For example, if the concept of cup was represented in template form, the feature “has a handle” could not be singled out and analyzed alone, since the handle is incorporated into the isomorphic representation of the cup; however, the relationship between the handle and the cup would be apparent.
In the featural or dimensional approach, “has a handle” would be separable from the overall concept of cup, but the relationship between the handle and the cup would not be inherently apparent (Smith and Medin, 1981).
In spite of, or perhaps because of, the three approaches to prototype theory, the nature of a prototype is vague. Furthermore, the prototype view has a number of limitations. One flaw involves prototype theory’s requirement of an abstract, summary representation. It does not explicitly provide for the incorporation of new information about a concept. In other words, it is unclear
“how the new information is to be combined with it [the prototype] (e.g., some complex type of weighting scheme) depending on various characteristics of the already existing prototype” (Cohen and Basu, 1987, p. 460). Also, central tendency notions claiming the prototype as an “average”
do not explain the systematic biasing effects that have been empirically demonstrated in the categorization literature (e.g., Cohen and Basu, 1987). Furthermore, Medin (1989) notes that prototype theory treats concepts as context-independent entities and does not account for goal-derived categories, which are based on similarity to an ideal rather than to a prototype.
Extension of Prototype View. Rule-bound (decision-bound) models are extensions of the prototype model of categorization. Rule-based models posit that decisions are based on abstracted rules. The rule is derived using a decision bound (e.g., a line or curve) to divide the relevant psychological space, and the resulting regions are each assigned to a specific category (e.g., Ashby and Gott, 1988; Ashby and Maddox, 1990, 1992, 1993). For example, a person might be categorized as “tall” if he or she is perceived as exceeding 6 feet (Rouder and Ratcliff, 2004). In recent years, rule-based theories have been integrated with the exemplar view to produce dual-process or dual-system categorization models (e.g., RULEX; Nosofsky et al., 1994; Nosofsky and Palmeri, 1998), which are discussed after an introduction to the exemplar view of categorization.
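In one dimension, the decision-bound idea reduces to a simple threshold rule; the “tall” example can be sketched as follows (the 6-foot bound comes from the text, while the function name and everything else is illustrative):

```python
# Rule-based (decision-bound) categorization in one dimension: a bound
# partitions psychological space, and each region maps to a category.
# The rule operates on *perceived* height, not physical height.

def categorize_height(perceived_height_ft: float, bound: float = 6.0) -> str:
    return "tall" if perceived_height_ft > bound else "not tall"

categorize_height(6.3)  # "tall"
categorize_height(5.9)  # "not tall"
```

With more dimensions, the bound becomes a line, curve, or surface, but the principle, assigning each region of the space to a category, is unchanged.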
Exemplar View
In the exemplar view, concepts are represented by specific category exemplars rather than by an abstract summary (Kruschke, 1992; Nosofsky, 1986; Palmeri and Nosofsky, 1995; Smith and Medin, 1981). Thus, “the more similar the target instance is to concrete exemplars of a category, the more likely it will be placed in that category” (Cohen and Basu, 1987, p. 460). The exemplars retrieved from memory for comparison to the target, having been previously encountered and stored, will be ones that are contextually relevant (Yang and Lewandowsky, 2003). This indicates that in categorization instances where the concepts are influenced by the context, the exemplar model will predict categorization results more accurately than prototype models. An exception would occur if one assumes that for each specific situation, separate categories with individual prototypes are created. However, this exception is not very likely, because it would result in a large number of categories and undermine the efficiency of the cognitive system. In addition, the presence of a large number of categories would violate Rosch’s (1975) two competing principles for determining the optimal level of categorization: minimization of the number of categories and maximization of informativeness.
The basic difference between exemplar and prototype theories may be explained by the nature of representational points in the category. Exemplar theory predicts that humans store a variety of exemplars in memory that are spread out in psychological space and act as the standards of comparison for new stimuli. Taking this perspective, each stimulus that is categorized will be like some exemplars and unlike other exemplars. On the other hand, prototype theory predicts that humans store one representational point at the center of the psychological space and that this prototype becomes the standard of comparison for new stimuli. Thus, stimuli that are categorized using the prototype model will be either like or unlike the one prototype. Smith (2002) conducted a study to test the appropriateness of exemplar theory using the predictions described above (see also Nosofsky and Johansen, 2000; Smith and Minda, 2000, 2002). Specifically, he determined whether or not there was a steep typicality gradient leading up to the prototype, as predicted by prototype theory (i.e., members of categories represented by prototypes will be either marginal or exceptional examples), or if there was not a steep typicality gradient, as predicted by exemplar theory (i.e., members of categories represented by exemplars will all be about equally good examples). The results show that while prototype theory’s prediction was clearly supported, exemplar theory’s prediction was not. However, exemplar models have been found to outperform prototype models in other empirical comparisons (Ashby and Maddox, 1998; Niedrich, Sharma, and Wedell, 2001). As discussed in a recent article by Minda and Smith (2001), individuals may be more likely to follow the prototype model of categorization when they are not very familiar with specific exemplars or when the stimuli are complex.
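The representational difference described above can be made concrete with a small sketch (the coordinates, the exponential similarity function, and the scaling constant are all illustrative assumptions, not taken from the studies cited):

```python
import math

def similarity(a, b, c=1.0):
    # A common assumption: similarity decays exponentially with Euclidean
    # distance in psychological space (c is an arbitrary scaling constant).
    distance = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return math.exp(-c * distance)

# Hypothetical stored category members in a two-dimensional space.
exemplars = [(1.0, 1.0), (1.2, 0.8), (0.8, 1.4), (3.0, 3.0)]
stimulus = (1.1, 1.0)

# Prototype model: one representational point (here, the centroid) serves
# as the sole standard of comparison.
prototype = tuple(sum(dim) / len(exemplars) for dim in zip(*exemplars))
proto_score = similarity(stimulus, prototype)

# Exemplar model: every stored instance serves as a standard of comparison,
# and the evidence is the summed similarity across all of them.
exemplar_score = sum(similarity(stimulus, e) for e in exemplars)
```

Note how the outlying exemplar (3.0, 3.0) pulls the prototype away from the cluster of typical members but contributes almost nothing to the summed similarity; this is one way the two accounts can diverge empirically.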
These results support a mixed models approach, which is similar to the convergence of rule-based and exemplar-based strategies as discussed in the following section.
Extension of Exemplar View. A stream of literature by Nosofsky (Nosofsky, 1986, 1988a, 1989; Nosofsky, Shin, and Clark, 1989; based on the groundwork previously laid by Estes, 1986;
Medin and Schaffer, 1978) attempts to justify the exemplar-based model empirically via a “summed similarity” approach. Nosofsky’s work is primarily aimed at refuting the claims that the exemplar model is not a valid one, since high recognition does not necessarily lead to high categorization.
He argues that the exemplar model consistently fits empirical data when a summed similarity approach is used.
The key assumption of the summed similarity approach is that categorization is determined by the relative degree of target-category to contrast-category similarity, whereas recognition, or familiarity, may be determined by the overall summed similarity of a probe to all exemplars stored in memory. For this reason, categorization and recognition may often be based on common representational substrates, but different decision rules may underlie performance in each task:
Categorization probability:

$$P(R_j \mid S_i) = \frac{\sum_{j \in C_j} N_j \, s_{ij}}{\sum_{K} \sum_{k \in C_K} N_k \, s_{ik}}$$

Recognition:

$$F_i = \sum_{k} N_k \, s_{ik}$$
where the probability that stimulus i is categorized in category j is found by summing the similarity of stimulus i to all exemplars j belonging to category j and then dividing by the summed similarity of stimulus i to all exemplars of all categories, k. P(Rj|Si) is a measure of the “strength”
of making a Category J response (Rj) given the presentation of Stimulus i (Si). N represents the relative frequency of exemplar j, and s represents the similarity between exemplars i and j or k.
Recognition or familiarity is given by F (Nosofsky, 1988a).
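The two expressions can be sketched directly in code (the category structure, similarity values s, and frequencies N below are illustrative, not drawn from Nosofsky’s data):

```python
# Summed similarity approach: categorization probability is the relative
# summed similarity to one category's exemplars; recognition (familiarity)
# is the overall summed similarity to every stored exemplar.

def categorization_prob(target, categories, s, N):
    # P(R_j | S_i): summed similarity to the target category's exemplars,
    # divided by the summed similarity over the exemplars of all categories.
    def summed(cat):
        return sum(N[e] * s[e] for e in categories[cat])
    return summed(target) / sum(summed(k) for k in categories)

def familiarity(categories, s, N):
    # F_i: overall summed similarity of the probe to all stored exemplars.
    return sum(N[e] * s[e] for members in categories.values() for e in members)

# Hypothetical example: similarity of stimulus i to each stored exemplar,
# and each exemplar's relative frequency.
categories = {"A": ["a1", "a2"], "B": ["b1"]}
s = {"a1": 0.9, "a2": 0.7, "b1": 0.2}
N = {"a1": 1, "a2": 2, "b1": 1}

p_A = categorization_prob("A", categories, s, N)  # (0.9 + 1.4) / 2.5 = 0.92
F_i = familiarity(categories, s, N)               # 0.9 + 1.4 + 0.2 = 2.5
```

Because P is a ratio while F is an absolute sum, a probe can be highly familiar yet ambiguous between categories, which is the dissociation between recognition and categorization that Nosofsky’s argument turns on.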
The above model is expanded on in a paper by Nosofsky, Shin, and Clark (1989), which employs the assumption that similarities among exemplars are modifiable by selective attention.
Selective attention is represented in terms of stretching and shrinking distances in psychological space. Each exemplar is thought of as a point in psychological space, and the difference between the coordinates of the points of two exemplars is used to calculate psychological distance. This distance is then converted to a similarity measure, so that the similarity between two exemplars can be determined. The selective attention exemplar model may “represent a compromise, in which specific experiences are stored; but with attention focused on those aspects of exemplars that appear most relevant to categorization” (Nosofsky, Shin, and Clark, 1989, p. 300).
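This stretching and shrinking of psychological space can be sketched by weighting each dimension before the distance computation (the weights, coordinates, and exponential decay function are illustrative assumptions):

```python
import math

def attention_weighted_similarity(x, y, weights, c=1.0):
    # Attention weights stretch (large w) or shrink (small w) each dimension
    # of psychological space; the weighted distance between two exemplar
    # points is then converted to a similarity via exponential decay.
    distance = math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)))
    return math.exp(-c * distance)

# Two exemplars that match on dimension 1 but differ on dimension 2.
x, y = (1.0, 5.0), (1.0, 2.0)

s_even = attention_weighted_similarity(x, y, (0.5, 0.5))  # both dimensions count
s_dim1 = attention_weighted_similarity(x, y, (1.0, 0.0))  # attend only to dim 1
```

Shifting all attention to the dimension on which the exemplars agree makes them maximally similar (s_dim1 = 1.0), illustrating how attention can reorganize similarity, and hence category, structure.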
Recently, a number of researchers have proposed multiple-process or multiple-system frameworks, where both exemplar- and rule-based processing are assumed to occur under the same model (e.g., Ashby and Ell, 2002; Erickson and Kruschke, 1998; Nosofsky, Palmeri, and McKinley, 1994; Vandierendonck, 1995). According to these models, people identify rules that accurately predict the classification of most objects; however, these rules are augmented by the storage of a set of exceptions or unique feature combinations. Thus, if the use of a rule-based strategy fails or produces an unacceptable number of errors, then people may retrieve one of a number of exceptions stored in memory. Nosofsky and colleagues’ rule-plus-exception model (RULEX) is one example of this type of model and accounts for a categorization process that uses both rules and stored exemplars (Nosofsky, Palmeri, and McKinley, 1994; Nosofsky and Palmeri, 1998). The model is supported by Rouder and Ratcliff (2004) in a study that shows how both rule-based and exemplar-based processing may be used in the same categorization task. More specifically, ex-