
Theoretical approaches

In the document Professor Trevor Harley (pages 110-114)

Several theories and models of face processing and recognition have been proposed. We will focus on Bruce and Young’s (1986) model, because it has been easily the most influential theoretical approach to face recognition.

Bruce and Young’s model consists of eight components (see Figure 3.14):

1 Structural encoding: This produces various descriptions or representations of faces.

2 Expression analysis: People’s emotional states are inferred from their facial expression.

3 Facial speech analysis: Speech perception is assisted by observing a speaker’s lip movements (lip-reading; see Chapter 9).

4 Direct visual processing: Specific facial information is processed selectively.

5 Face recognition units: These contain structural information about known faces; this structural information emphasises the less changeable aspects of the face and is at a fairly abstract level.

6 Person identity nodes: These provide information about individuals (e.g., occupation, interests).

7 Name generation: A person’s name is stored separately.

8 Cognitive system: This contains additional information (e.g., most actors have attractive faces); it influences which other components receive attention.

What predictions follow from this model? First, there should be major differences in the processing of familiar and unfamiliar faces. More specifically, various components (face recognition units, person identity nodes, name generation) are involved only in processing familiar faces. As a result, it is much easier to recognise familiar than unfamiliar faces. This is especially the case when, for example, a face is seen from an unusual angle or under unusual lighting conditions.

Second, consider the processing of facial identity (who is the person?) and facial expression (what is he/she feeling?). Separate processing routes are involved with the crucial component for processing facial expression being expression analysis. The key idea is that one processing route (perception of facial identity) is concerned with the relatively unchanging aspects of faces, whereas the other route (facial expression) deals with the more changeable aspects.

Third, when we look at a familiar face, familiarity information from the face recognition unit should be accessed first. This is followed by information about that person (e.g., occupation) from the person identity node and then that person’s name from the name generation component. As a result, it is possible to find a face familiar without being able to recall anything else about the person, or to recall personal information about the person without being able to recall their name. However, a face should never lead to recall of the person’s name in the absence of other information.
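The predicted retrieval order can be sketched as a simple sequential pipeline. This is an illustrative sketch only; the function and field names are hypothetical, not terminology from Bruce and Young (1986).

```python
# Sketch of the sequential access order predicted by Bruce and Young
# (1986): familiarity first, then personal information, then the name.
# All names here are hypothetical illustrations.

def recognise(face_recognition_unit, person_identity_node, name_store):
    """Return whatever information becomes available, in the predicted
    order. Access can fail at any stage, but only in this order."""
    result = {"familiar": False, "personal_info": None, "name": None}
    if not face_recognition_unit:       # no stored structural match:
        return result                   # the face seems unfamiliar
    result["familiar"] = True
    if person_identity_node is None:    # access can fail here...
        return result                   # familiar, but nothing else known
    result["personal_info"] = person_identity_node
    result["name"] = name_store         # ...or here (name unavailable)
    return result

# Familiar face, personal information retrieved, but the name is lost:
print(recognise(True, "actor", None))
# {'familiar': True, 'personal_info': 'actor', 'name': None}
```

Because the name is only ever filled in after the person identity node has been accessed, the sketch can never produce a name without other personal information, which is exactly the model's third prediction.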

Figure 3.15

Simplified version of the Bruce and Young (1986) model of face recognition. Face detection is followed by processing of the face’s structure, which is then matched to a memory representation (face memory). The perceptual representation of the face can also be used for recognition of facial expression and gender discrimination.

Reprinted from Duchaine and Nakayama (2006). Reprinted with permission from Elsevier.

If you struggled with the complexities of the Bruce and Young (1986) model, help is at hand. Duchaine and Nakayama (2006) produced a simplified version including an additional face detection stage (see Figure 3.15). At this initial stage, observers decide whether the stimulus is a face. The importance of this stage can be seen with reference to a prosopagnosic called Edward with extremely poor face recognition. In spite of his problems with later stages of face recognition, he detected faces as rapidly as healthy individuals (Duchaine & Nakayama, 2006).

Findings

According to the model, there are various reasons why it is easier to recognise familiar faces than unfamiliar ones. However, what is especially important is that we possess much more structural information about familiar faces. This structural information (associated with face recognition units) relates to relatively unchanging aspects of faces and gradually accumulates with increasing familiarity with any given face. As we will see, the differences between familiar and unfamiliar face recognition are perhaps even greater than assumed by Bruce and Young (1986).

IN THE REAL WORLD: RECOGNISING UNFAMILIAR FACES

Have a look at the 40 faces displayed in Figure 3.16. How many different individuals do you think are shown? Produce your answer before reading on.

Figure 3.16

Forty face photographs to be sorted into piles for each of the individuals shown in the photographs.

From Jenkins et al. (2011). Reproduced with permission from Elsevier.

In a study by Jenkins et al. (2011; discussed shortly) using a similar stimulus array, participants on average thought 7.5 different individuals were shown. In fact, the number for the array used by Jenkins et al. and for the one shown in Figure 3.16 is only two.

The two individuals (A and B) are arranged as shown below in the array:

A B A A A B A B A B A A A A A B B B A B B B B A A A B B A A B A B A A B B B B B

In the study by Jenkins et al. (2011) referred to above, British participants were presented with 40 face photographs. They sorted the photographs into a separate pile for each person shown in the photographs. There were 20 photographs of each of two Dutch celebrities virtually unknown in Britain. On average, participants thought the number of different individuals represented was almost four times the actual number.

In another experiment, Jenkins et al. (2011) asked Dutch participants to carry out exactly the same task. For these participants, the faces were familiar. The findings were dramatically different from those of the previous experiment. Nearly all the participants performed perfectly: they sorted the photographs into two piles with 20 photographs of one celebrity in each pile.

In a third experiment, Jenkins et al. (2011) presented 400 photographs consisting of 20 photographs each of 20 unfamiliar individuals. Participants rated the attractiveness of each face. There was more variability in attractiveness ratings within than between individuals. In other words, there were often large differences in rated attractiveness of two photographs of the same person. As you may have noticed, beautiful celebrities often look surprisingly unattractive when photographed unexpectedly in everyday life.
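The within- versus between-person comparison can be illustrated with a toy calculation. The ratings below are invented for demonstration; Jenkins et al. (2011) collected real attractiveness ratings of 400 photographs.

```python
# Illustrative sketch of comparing within-person and between-person
# variability in attractiveness ratings. The numbers are hypothetical.
from statistics import mean, pvariance

# Invented ratings (1-7 scale): two photographs each of three people.
ratings = {
    "person_A": [2.0, 6.0],   # same person, very different ratings
    "person_B": [3.0, 6.5],
    "person_C": [2.5, 5.5],
}

# Within-person variability: average variance across each person's photos.
within = mean(pvariance(r) for r in ratings.values())

# Between-person variability: variance of the per-person mean ratings.
between = pvariance([mean(r) for r in ratings.values()])

print(within > between)  # True: photos of one person vary more than
                         # average ratings vary between people
```

With numbers like these, the within-person variance exceeds the between-person variance, mirroring Jenkins et al.'s finding that two photographs of the same unfamiliar person can differ more than photographs of different people.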

What do the above findings mean? First, there is considerable within-person variability in facial images, which explains why different photographs of the same unfamiliar individual look as if they come from different individuals. Second, the findings also mean passport photographs have limited value. Third, the findings help to explain the difficulties that eyewitnesses have in identifying the person responsible for a crime (see Chapter 8). Fourth, we are much better at recognising that different photographs of a familiar individual are of the same person because we possess much relevant information about that individual.

What can be done to improve identification of unfamiliar faces? We can use image averaging across several photographs of the same individual (Jenkins & Burton, 2011). This reduces the impact of those aspects of the image varying across photographs of a given individual. As predicted, Jenkins and Burton found identification based on average images became increasingly accurate as the number of photographs contributing to those images increased.
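The averaging technique can be sketched in a few lines, assuming each photograph is already aligned and represented as a flat list of grey-level pixel values (real implementations, such as Jenkins and Burton's, work on aligned colour images).

```python
# Minimal sketch of image averaging: idiosyncratic lighting and pose
# differences tend to cancel out, leaving the stable facial structure.
# Photographs are assumed pre-aligned; each is a flat list of pixels.

def average_image(photos):
    """Pixel-wise mean of several photographs of the same person."""
    n = len(photos)
    return [sum(pixels) / n for pixels in zip(*photos)]

# Three hypothetical 4-pixel "photographs" of one person:
photos = [
    [100, 120, 80, 200],
    [110, 118, 90, 190],
    [ 90, 122, 70, 210],
]
print(average_image(photos))  # [100.0, 120.0, 80.0, 200.0]
```

Each output pixel is the mean of the corresponding pixels across photographs, so adding more photographs of the same person pulls the average closer to that person's stable appearance, which is why identification accuracy rises with the number of contributing images.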

The second prediction is that different routes are involved in the processing of facial identity and facial expression. Haxby et al. (2000) agreed. They argued that the processing of changeable aspects of faces (especially expressions) occurs mainly in the superior temporal sulcus. There is some support for this prediction. Fox et al. (2011) found patients with damage to the face-recognition network had impaired identity perception but not expression perception. In contrast, a patient with damage to the superior temporal sulcus had impaired expression perception but reasonably intact identity perception.

However, the two routes are not entirely independent. Judgements of facial expression are strongly influenced by irrelevant identity information, indicating a measure of interdependence between the two routes (Schweinberger & Soukup, 1998). In contrast, judgements of facial identity were not influenced by the emotion expressed. Fitousi and Wenger (2013) found that whether the two routes were independent depended on the precise task. There were two tasks: (A) respond positively if the face has a given identity and emotion (e.g., a happy face belonging to Keira Knightley); (B) respond positively if the face has a given identity or a given emotion (e.g., a happy face, Keira Knightley’s face, or both). There was evidence of independent processing of identity and expression with task (B) but not task (A).

The facial-expression route is more complex than envisaged by Bruce and Young (1986). Damage to certain brain regions can affect recognition of some emotions more than others. For example, damage to the amygdala produces greater deficits in recognising fear and anger than other emotions (Calder & Young, 2005). Similar emotion-specific patterns were obtained when brain-damaged patients tried to recognise the same emotions from voices (Calder & Young, 2005). Young and Bruce (2011) admitted they had not expected deficits in emotion recognition to be specific to certain emotions.

The third prediction is that we always retrieve personal information (e.g., occupation) about a person before recalling their name. Young et al. (1985) asked people to record the problems they experienced in face recognition. There were 1,008 such incidents, but people never reported putting a name to a face while knowing nothing else about that person. In contrast, there were 190 occasions on which someone remembered a reasonable amount of information about a person but not their name, which is also as predicted by the model. Finally, also as predicted, there were 233 occasions on which a face produced a feeling of familiarity but no other relevant information about the person could be recalled.

In spite of the above findings, the notion that names are always recalled after personal information is probably too rigid. Calderwood and Burton (2006) asked fans of the television series Friends to recall the name or occupation of the main characters when shown their faces. Names were recalled faster than occupations, suggesting names can sometimes be recalled before personal information. However, it is possible that other personal information (e.g., character in Friends) might have been recalled faster than name information.

Evaluation

Young and Bruce (2011) provided a very useful evaluation of their own model. On the positive side, the model adopted a broad perspective emphasising the wide range of information that can be extracted from faces. In addition, it was ahead of its time in identifying the major processes and structures involved in face processing and recognition. Finally, Bruce and Young (1986) made an excellent attempt to indicate the main differences in the processing of familiar and unfamiliar faces.

The model has various limitations. First, the notion that facial expression is processed separately from facial identity is oversimplified and often incorrect. Second, expression analysis is much more complex than assumed in the model. Third, as Young and Bruce (2011) admitted, they were wrong to exclude gaze perception from their model. Gaze signals are very valuable to us in various ways, including providing useful information about what the other person is attending to. Fourth, the assumption that name information is always accessed after personal information about faces may be too rigid.

Individual differences

Bruce and Young (1986) focused on general factors involved in face recognition. However, much can be learned about face recognition by focusing on individual differences, as we have already seen in our discussion of prosopagnosics. Russell et al. (2009) focused on four “super-recognisers” who had exceptionally good face-recognition ability. They performed at a very high level on several tasks involving face recognition (e.g., identifying famous people from photographs taken many years before they became famous).

Russell et al. (2012) pointed out that, compared with object recognition, face recognition depends more on surface reflectance information (the way an object’s surface reflects and transmits light) and less on shape information. This suggests that super-recognisers might be especially proficient at using surface reflectance information. In fact, however, they were simply better than other people at using both surface reflectance and shape information.

Genetic factors probably help to explain the existence of super-recognisers. Wilmer et al. (2010) studied face recognition in monozygotic or identical twins (sharing 100% of their genes) and dizygotic twins (sharing only 50%). The face-recognition performance of identical twins was much more similar than that of fraternal twins, indicating face-recognition ability is influenced in part by genetic factors.
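One common way of quantifying such twin comparisons, not described in the text, is Falconer's formula, which estimates heritability as twice the difference between the identical-twin (MZ) and fraternal-twin (DZ) correlations. The correlations below are invented for illustration and are not Wilmer et al.'s (2010) values.

```python
# Falconer's formula: a rough heritability estimate from twin data.
# MZ twins share ~100% of their genes, DZ twins ~50%, so doubling the
# correlation gap approximates the genetic contribution to variance.

def falconer_heritability(r_mz, r_dz):
    """Estimate heritability h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# Hypothetical correlations: identical twins 0.70, fraternal twins 0.30.
print(round(falconer_heritability(0.70, 0.30), 2))  # 0.8
```

On this logic, the larger the gap between identical-twin and fraternal-twin similarity, the greater the estimated genetic influence, which is the pattern Wilmer et al. reported for face recognition.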

VISUAL IMAGERY

Close your eyes and imagine the face of someone very important in your life. What did you experience? Many people claim forming visual images is like “seeing with the mind’s eye”, suggesting there are important similarities between imagery and perception. Mental imagery is typically thought of as a form of experience, implying that it has strong links to consciousness. However, it is also possible to regard imagery as a form of mental representation (an internal cognitive symbol representing some aspects of external reality) (e.g., Pylyshyn, 2002). We would not necessarily be consciously aware of images in the form of mental representations. In spite of its importance, the issue of whether imagery necessarily involves consciousness has attracted relatively little direct research interest (Thomas, 2009).

If (as is often assumed) visual imagery and perception are similar, why don’t we confuse them? In fact, some people suffer from hallucinations, in which what they believe to be visual perception occurs in the absence of the appropriate environmental stimulus. In Anton’s syndrome (“blindness denial”), blind people are unaware they are blind and may mistake imagery for actual perception. Goldenberg et al. (1995) described a patient nearly all of whose primary visual cortex had been destroyed. In spite of that, the patient generated visual images so vivid they were mistaken for genuine visual perception.

Bridge et al. (2012) studied a young man, SBR, who also had virtually no primary visual cortex but was not suffering from Anton’s syndrome. He had vivid visual imagery and his pattern of cortical activation when engaged in visual imagery was very similar to that of healthy controls.

Some other patients have Charles Bonnet syndrome, defined as “consistent or periodic complex visual hallucinations that occur in visually impaired individuals with intact cognitive ability” (Yacoub & Ferrucci, 2011, p. 421). One sufferer reported the following hallucination: “There’s heads of 17th century men and women, with nice heads of hair. Wigs, I should think. Very disapproving, all of them” (Santhouse et al., 2000, p. 2057). Note, however, that patients are generally (but not always) aware the hallucinations are not real so they are actually pseudo-hallucinations.

KEY TERMS

Anton’s syndrome

A condition found in some blind people in which they misinterpret their visual imagery as visual perception.

Charles Bonnet syndrome

A condition in which individuals with eye disease form vivid and detailed visual hallucinations sometimes mistaken for visual perception.

Patients with Charles Bonnet syndrome have increased activity in brain areas specialised for visual processing when hallucinating (ffytche et al., 1998). In addition, hallucinations in colour were associated with increased activity in areas specialised for colour processing.

In visual perception, bottom-up processes inhibit activation in parts of the visual cortex (e.g., BA37). The impoverished bottom-up processes in Charles Bonnet syndrome permit spontaneous activation in areas associated with the production of hallucinations (Kazui et al., 2009).

Anyone (other than those with eye disease) suffering from visual hallucinations is unlikely to remain at liberty for long. How do we avoid confusing images and perceptions? One reason is that we generally know we are deliberately constructing images, which is not the case with perception. Another reason is that images contain much less detail than perception. Harvey (1986) found people rated their visual images of faces as similar to photographs from which the sharpness of the edges and borders had been removed.
