
Temporal ventriloquism

From the document Professor Trevor Harley (pages 172-176)

KEY TERM

Cross-modal attention

The coordination of attention across two or more modalities (e.g., vision and audition).

Suppose we present participants with two streams of lights (as was done by Eimer and Schröger (1998)), with one stream of lights being presented to the left and the other to the right. At the same time, we also present participants with two streams of sounds (one to each side). In one condition, participants detect deviant visual events (e.g., longer than usual stimuli) presented to one side only. In the other condition, participants detect deviant auditory events in only one stream.

Event-related potentials (ERPs; see Glossary) were recorded to obtain information about the allocation of attention. Unsurprisingly, Eimer and Schröger (1998) found ERPs to deviant stimuli in the relevant modality were greater to stimuli presented on the to-be-attended side than the to-be-ignored side. Thus, participants allocated attention as instructed. Of more interest is what happened to the allocation of attention in the irrelevant modality. Suppose participants detected visual targets on the left side. In that case, ERPs to deviant auditory stimuli were greater on the left side than the right. This is a cross-modal effect in which the voluntary or endogenous allocation of visual attention also affected the allocation of auditory attention. In similar fashion, when participants detected auditory targets on one side, ERPs to deviant visual stimuli on the same side were greater than ERPs to those on the opposite side.

Thus, the allocation of auditory attention influenced the allocation of visual attention as well.

Figure 5.14

An example of temporal ventriloquism in which the apparent time of onset of a flash is shifted towards that of a sound presented at a slightly different time from the flash.

From Chen and Vroomen (2013). Reprinted with permission from Springer.

Chen and Vroomen (2013) review several studies providing evidence of temporal ventriloquism. A simple example is when the apparent onset of a flash is shifted towards an abrupt sound presented slightly asynchronously (see Figure 5.14). Other research has found that the apparent duration of visual stimuli can be distorted by asynchronous auditory stimuli.

IN THE REAL WORLD: USING WARNING SIGNALS TO PROMOTE SAFE DRIVING

Front-to-rear-end collisions cause 25% of road accidents, and driver inattention is the most common cause of such collisions (Spence, 2012). Thus, it is important to devise effective warning signals to enhance driver attention and reduce collisions. Warning signals (e.g., car horn) might be useful in alerting the driver to potential danger. Warning signals might be especially useful if they were informative because there was a relationship between the signal and the nature of the danger. However, it would probably be counterproductive if informative warning signals required time-consuming cognitive processing.

Ho and Spence (2005) studied the effects of an auditory warning signal (car horn) on drivers’ reaction times when braking to avoid a car in front or accelerating to avoid a speeding car behind. The auditory signals were presented to the front or rear of the driver. In one experiment, the sound came from the same direction as the critical visual event on 80% of trials. In another experiment, the direction of the sound did not predict the direction of the critical visual event (i.e., it was from the same direction on only 50% of trials).

What did Ho and Spence (2005) find? First, reaction times were faster in both experiments when the sound and critical visual event were from the same direction (i.e., the sound cues were valid).

Second, these beneficial effects were greater when sound and visual event came from the same direction on 80% rather than 50% of trials.

What do the above findings mean? First, auditory stimuli influence visual attention. Second, the finding that the auditory signal influenced visual attention even when non-predictive probably depended on exogenous spatial attention (“automatic” allocation of attention). Third, the finding that the beneficial effects of auditory signals were greater when they were predictive than when they were non-predictive suggests the additional involvement of endogenous spatial attention (controlled by the individual’s intentions). Thus, the informativeness of the warning signal is important.

KEY TERMS

Exogenous spatial attention

Attention to a given spatial location determined by “automatic” processes.

Endogenous spatial attention

Attention to a stimulus controlled by intentions or goal-directed mechanisms.

Subtle effects of warning informativeness were studied by Gray (2011). Drivers had to brake to avoid a collision with the car in front. Auditory warning signals increased in intensity as the time to collision decreased, and the rate of increase varied across conditions. In another condition, a car horn sounded. All the auditory signals speeded up brake reaction time compared to a no-warning control condition. The most effective condition was the one in which the auditory signal's intensity increased fastest, because this implied the time to collision was shortest.

Vibrotactile signals produce the perception of vibration through touch. Gray et al. (2014) studied the effects of such signals on speed of braking to avoid a collision. Signals were presented at three different sites on the abdomen arranged vertically. In the most effective condition, successive signals moved towards the driver's head at an increasing rate that reflected the speed at which the driver was approaching the car in front. Braking time was 250 ms less in this condition than in a no-warning control condition. This condition was effective because it was highly informative.

In sum, research in cross-modal attention shows great promise for reducing road accidents. One limitation is that warning signals occur much more frequently in the laboratory than in real driving.

Another limitation is that it is sometimes unclear why some warning signals are more effective than others. For example, Gray et al. (2014) found that upwards moving vibrotactile stimulation was more effective than the same stimulation moving downwards.

Overall evaluation

What are the limitations of research on cross-modal effects? First, our theoretical understanding has lagged behind the accumulation of empirical findings as we saw in the discussion of the effects of warning signals on driver performance. Second, much of the research has involved complex, artificial tasks and it would be useful to investigate cross-modal effects in more naturalistic conditions. Third, individual differences have been ignored in most research.

However, there is accumulating evidence that individual differences (e.g., in preference for auditory or visual stimuli) influence cross-modal effects (see van Atteveldt et al. (2014) for a review).

DIVIDED ATTENTION: DUAL-TASK PERFORMANCE

Your life is probably becoming busier and busier. In our hectic 24/7 lives, people increasingly try to do two things at once (multitasking). For example, you may send text messages to friends while watching television or walking down the street. Ophir et al. (2009) used a questionnaire (the Media Multitasking Index) to identify individuals who engage in high and low levels of multitasking. They argued that there are disadvantages associated with being a high multitasker. More specifically, they found high multitaskers were more susceptible to distraction than low multitaskers.

Ophir et al. (2009) concluded that those attending to several media simultaneously develop “breadth-based cognitive control”, meaning they are not selective or discriminating in their allocation of attention. In contrast, low multitaskers are more likely to have top-down attentional control. These conclusions were supported by Cain and Mitroff (2011). Only low multitaskers made effective use of top-down instructions to reduce distraction and enhance performance.

In some ways, Ophir et al.’s (2009) findings are surprising. We might expect that prolonged practice at multitasking would have various beneficial effects on attentional processes. For example, we might expect that high multitaskers would be better able than low multitaskers to split attention between two non-adjacent visual locations at the same time (split attention is discussed earlier in the chapter). Evidence supporting that expectation was reported by Yap and Lim (2013).

Alzahabi and Becker (2013) investigated task switching. A digit and a letter were presented on each trial and a task (digit: odd or even? letter: vowel or consonant?) had to be performed on one of the stimuli. On 50% of trials, the type of stimulus to be classified switched from the previous trial, whereas it remained the same on the other (repeat) trials. The key finding was that the high multitaskers showed more efficient task switching than low multitaskers (see Figure 5.15). Thus, high multitasking is associated with beneficial effects on some aspects of attentional control.
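The efficiency measure behind this finding is the switch cost: mean reaction time on switch trials minus mean reaction time on repeat trials (as in Figure 5.15). A minimal sketch of that computation follows; the reaction times used are hypothetical, purely to illustrate the measure, not data from the study.

```python
def switch_cost(switch_rts, repeat_rts):
    """Mean switch-trial RT minus mean repeat-trial RT, in ms.

    Smaller values indicate more efficient task switching.
    """
    return sum(switch_rts) / len(switch_rts) - sum(repeat_rts) / len(repeat_rts)

# Hypothetical reaction times (ms) for one participant
repeat_rts = [620, 640, 610, 630]   # stimulus type same as previous trial
switch_rts = [780, 760, 800, 770]   # stimulus type changed from previous trial

print(switch_cost(switch_rts, repeat_rts))  # → 152.5
```

On this measure, a high multitasker with faster switch trials (relative to their repeat trials) would show a smaller cost than a low multitasker, which is the pattern Alzahabi and Becker reported.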

Care needs to be taken in interpreting the above findings because all that has been found is an association between multitasking and measures of attention. That means we do not know whether high levels of multitasking influence attentional processing or whether individuals with certain patterns of attention choose to engage in extensive multitasking.

What determines how well we can perform two tasks at the same time? The degree of similarity of the two tasks is one important factor. Two tasks can be similar in stimulus modality. Treisman and Davies (1973) found two monitoring tasks interfered with each other much more when the stimuli on both tasks were in the same modality (visual or auditory).

Two tasks can also be similar in response modality. McLeod (1977) showed the importance of this factor. His participants performed a continuous tracking task with manual responding together with a tone-identification task. Some participants responded vocally to the tones whereas others responded with the hand not involved in tracking. Tracking performance was worse with high response similarity (manual responses on both tasks) than with low response similarity.

Probably the most important factor in determining how well two tasks can be performed together is practice. We all know the saying, “Practice makes perfect”, and evidence apparently supporting it was reported by Spelke et al. (1976). Two students (Diane and John) received five hours’ training a week for four months on various tasks. Their first task was to read short stories for comprehension while writing down words to dictation, which they initially found very hard. After six weeks of training, however, they could read as rapidly and with as much comprehension when taking dictation as when only reading. After further training, Diane and John learned to write down the names of the categories to which the dictated words belonged while maintaining normal reading speed and comprehension.

Figure 5.15

(a) Relationship between amount of multitasking (measured by the Media Multitasking Index) and switch cost in ms (switch reaction time – repeat reaction time). (b) Mean reaction times for low and high multitaskers on repeat and switch trials.

From Alzahabi and Becker (2013). © American Psychological Association.

IN THE REAL WORLD: CAN WE THINK AND DRIVE?

Driving a car is the riskiest activity engaged in by tens of millions of adults. More than 40 countries have passed laws restricting the use of handheld mobile or cell phones by drivers to increase car safety. Are such restrictions really necessary? Strayer et al. (2011) reviewed the evidence. The likelihood of drivers being involved in a car accident is four times greater when using a mobile phone (whether handheld or hands-free). Overall, 28% of car crashes in the United States are caused by drivers using mobile phones.

Numerous studies have considered the effects of using mobile phones on simulated driving tasks. Caird et al. (2008) reviewed the findings from 33 studies. Reaction times to events (e.g., onset of brake lights on the car in front) increased by 250 ms compared to no-phone control conditions. The figure was similar whether drivers used handheld or hands-free phones and was larger when they were talking rather than listening. Caird et al. found drivers had very limited awareness of the negative impact of using mobile phones – they did not slow down or keep a greater distance behind the car in front.

The 250 ms slowing reported by Caird et al. (2008) may sound trivial. However, it translates into travelling an extra 18 feet (5.5 m) before stopping for a motorist doing 50 mph (80 kph). This could be the difference between stopping just short of a child in the road or killing that child.

It could be argued that laboratory findings do not apply to real-life driving situations. However, Strayer et al. (2011) discussed a study in which drivers in naturalistic conditions were observed to see whether they obeyed a law requiring them to stop at a road junction. Of drivers not using a mobile phone, 21% failed to stop completely compared to 75% of mobile-phone users.

Theoretical considerations

Why does using a mobile phone impair driving ability? One possibility is that the two activities both require some of the same specific processes. Bergen et al. (2013) asked drivers performing a simulated driving task to decide whether statements were true or false. Some statements had motor (e.g., “To use scissors, you have to use both hands”) or visual (e.g., “A camel has fur on the top of his humps”) content, whereas others were more abstract (e.g., “There are 12 wonders of the ancient world”). Only motor and visual statements interfered with driving performance as assessed by the distance from the vehicle in front. These findings suggest that language involving specific processes (e.g., visual or motor) in common with driving can have a disruptive effect.

In spite of the above findings, most theorists have emphasised that the adverse effects of mobile-phone use on driving depend primarily on rather general attentional and other cognitive processes.

Strayer et al. (2011) identified two relevant attentional processes.

First, driving can cause inattentional blindness, in which an unexpected object is not perceived (see Chapter 4). In a study by Strayer and Drews (2007), 30 objects (e.g., pedestrians, advertising hoardings) were clearly in view as participants performed a simulated driving task. This was followed by an unexpected test of recognition memory for the objects. Those who had used a mobile phone on the driving task recognised far fewer of the objects they had fixated than did those who had not used a phone (under 25% vs. 50%, respectively).

Strayer and Drews (2007) obtained stronger evidence that using mobile phones impairs attentional processes in another experiment. Participants responded as rapidly as possible to the onset of the brake lights on the car in front and event-related potentials (ERPs; see Glossary) were recorded. The magnitude of the P300 (a positive wave associated with attention) was reduced by 50% in mobile-phone users.

Second, Strayer et al. (2011) discussed an unpublished study in which drivers’ eye movements on a simulated driving task were recorded. Drivers using hands-free phones were more likely than non-phone users to focus almost exclusively on the road ahead and so were less likely to see peripheral objects. This reduced attentional flexibility of phone users can be very dangerous if, for example, a young child is by the side of the road.

Spelke et al. (1976) found practice can dramatically improve people’s ability to perform two tasks together. However, their findings are hard to interpret for various reasons. First, they focused on accuracy measures, which can be less sensitive to dual-task interference than speed measures. Second, the reading task gave Diane and John flexibility in terms of when they attended to the reading matter, and so they may have alternated attention between tasks. More controlled research on the effects of practice on dual-task performance is discussed later.

When people perform two tasks during the same time period, they might do so by using serial or parallel processing. Serial processing involves switching attention backwards and forwards between the two tasks with only one task being attended to and processed at any given moment. In contrast, parallel processing involves attending to (and processing) both tasks at the same time.

There has been much theoretical controversy on the issue of serial vs. parallel processing. What has been insufficiently emphasised is that processing is often relatively flexible. Lehle et al. (2009) trained people to engage in serial or parallel processing when performing two tasks together. Those using serial processing performed better but found the tasks more effortful. Serial processing was effortful because it required inhibiting the processing of one task while the other was performed.

Lehle and Hübner (2009) also instructed people to perform two tasks together in a serial or parallel fashion, and found they obeyed instructions. Those instructed to use parallel processing performed much worse than those using serial processing. However, most participants receiving no specific instructions tended to favour parallel processing.

Lehle and Hübner used simple tasks (both involved deciding whether digits were odd or even). Han and Marois (2013) used two tasks, one of which (pressing different keys to each of eight different sounds) was much harder than Lehle and Hübner’s tasks. The findings were very different. Even when parallel processing was more efficient and was encouraged by financial rewards, participants engaged in serial processing. The difference in findings between the two studies probably reflects the problems in using parallel processing with difficult tasks. There is a more detailed discussion of the role of parallel and serial processing in dual-task performance later in the chapter.
