Survey of Language Activation Protocols

This section describes some of the experimental designs that have been used in language activation studies and the results that can be expected. It may be helpful to identify from the outset a few myths about language activation studies that are, in the author’s view, somewhat prevalent.

Myth 1

Language-related brain activation is difficult to detect and not as robust as motor and primary sensory activation.

In fact, activation magnitude is primarily determined by the type of contrast being performed, that is, how similar or different are the conditions being contrasted. Very robust signals in heteromodal cognitive areas can be readily detected, even in individual brains, given an appropriate task contrast.

Myth 2

The pattern of language-related brain activation observed by fMRI depends mainly on the type of language task employed.

In fact, the control task is equally important in determining the pattern of activation. Extremely different patterns can result from the same language task when contrasted with different control or baseline conditions. Conversely, very similar patterns can result from very different language tasks when these are contrasted with different control conditions.

Myth 3

An effective language mapping protocol should detect all critical language areas.

It is very unlikely that any single protocol could detect all critical language areas. This is because language is not a single homogeneous process, but rather the product of many interacting neural systems that are engaged to varying degrees depending on task requirements.

Myth 4

The main language zones in the brain are Broca’s Area (left posterior inferior frontal gyrus) and Wernicke’s Area (left posterior superior temporal gyrus).

In fact, these brain areas appear to have very specific rather than general language functions. As such, they are engaged during some language tasks but not others, and their damage results in specific deficits that are often relatively minor. Both traditional lesion studies and language imaging studies have identified a host of other, larger, more general language zones in the prefrontal, lateral temporal, ventral temporal, and posterior parietal cortex of the dominant hemisphere (see References 59, 93, 94 for reviews). The approximate location of some of these zones is shown in Figure 8.1.

The variety of possible stimuli and tasks that could be used to induce language processing is vast, and a coherent, concise discussion is difficult.

Figure 8.1. Schematic drawing of some putative language areas in the dominant hemisphere. Yellow = phoneme and auditory word form perception area. Red = semantic storage and retrieval systems. Blue = phonological access and phonological output systems. Green = general verbal retrieval, selection, and working memory functions.

Table 8.1 lists some of the broad categories of stimuli that have been used and some of the brain systems they tend to engage. Auditory nonspeech refers to noises or tones that are not perceived as speech. Such stimuli can be variably complex in their temporal or spectral features and possess to varying degrees the acoustic properties of speech (see References 95–97). They activate early (primary and association) auditory cortex to varying degrees depending on their precise acoustic characteristics. Auditory phonemes are speech sounds that do not comprise words in the listener’s language; these may be simple consonant–vowel monosyllables or longer sequences. In addition to early auditory cortex, speech phonemes activate auditory wordform systems that are relatively specialized for the perception of speech sounds, whether presented in the form of nonwords, words, or sentences.95–98

Visual nonletter here refers to any visual stimulus not recognized by the subject. Examples include characters from unfamiliar alphabets, nonsense signs, and false font. Such stimuli can be variably complex and possess to varying degrees the visual properties of familiar letters. They activate early (primary and association) visual cortex to varying degrees depending on their visual characteristics. Visual letterstrings are random strings of letters that do not form familiar or easily pronounceable letter combinations (e.g., FCJVB). Visual pseudowords are letterstrings that are not words, but possess the orthographic and phonological characteristics of real words (e.g., SNADE). Letterstrings are claimed to activate a visual wordform area located in the left mid-fusiform gyrus; this area responds more strongly to pseudowords and words than to random letterstrings.99

Table 8.1. Effects of Stimuli on Sensory and Linguistic Processing Systems

Stimuli                 Early      Auditory    Visual      Object        Syntax
                        sensory    wordform    wordform    recognition
Auditory Nonspeech      Aud
Auditory Phonemes       Aud        +
Auditory Words          Aud        +
Auditory Sentences      Aud        +                                     +
Visual Nonletters       Vis
Visual Letterstrings    Vis                    +/–
Visual Pseudowords      Vis                    +
Visual Words            Vis                    +
Visual Sentences        Vis                    +                         +
Visual Objects          Vis                                +

The degree to which these stimuli engage the processes listed in Table 8.1 may depend partly on the task that the subject is asked to perform, although the processes in Table 8.1 seem to be activated relatively automatically, even when subjects are given no explicit task. This is less true for the processing systems listed in Table 8.2, which seem to be strongly task-dependent. The semantic system appears to be partly active even during rest or when stimuli are presented passively to the subject.87–91 Other tasks seem to suppress semantic processing by requiring a focusing of attention on perceptual, orthographic, or phonological properties of stimuli. Examples include Sensory Discrimination tasks (e.g., intensity, size, color, frequency, or more complex feature-based discriminations), Phonetic Decision tasks in which the subject must detect a target phoneme or phonemes, Phonological Decision tasks requiring a decision based on the phonological structure of a stimulus (e.g., detection of rhymes, judgment of syllable number), and Orthographic Decision tasks requiring a decision based on the letters in the stimulus (e.g., case matching, letter identification). Other tasks, such as reading and repeating, make no overt demands on semantic systems, but probably elicit automatic semantic processing. The extent to which this occurs may depend on how meaningful the stimulus is: sentences likely elicit more semantic processing than isolated words, which in turn elicit more than pseudowords. Finally, many tasks make overt demands on retrieval and use of semantic knowledge. These include Semantic Decision tasks requiring a decision based on the meaning of the stimulus (e.g., “Is it living or non-living?”), Word Generation tasks requiring retrieval of a word or series of words related in meaning to a cue word, and Naming tasks requiring retrieval of a verbal label for an object or object description.

Output Phonology refers to the processes engaged in retrieving a phonological (sound-based) representation of a word. These processes are required for both overt and covert reading, repeating, naming, and word generation. In addition, any task that engages reading, such as an orthographic or semantic decision on printed words or pseudowords, will automatically engage output phonological processes to some degree.60,67,73 In contrast, Speech Articulation processes are engaged fully only when an overt spoken response is produced.100 Verbal Working Memory is required whenever a written or spoken stimulus must be held in memory. This applies to repetition tasks if the stimulus to be repeated is relatively long, to auditory decision tasks if the auditory stimulus must be remembered while the decision is being made, and to most word-generation tasks, because the cue must be maintained in memory while the response is retrieved. Finally, semantic decision, word-generation, and naming tasks make strong demands on frontal mechanisms involved in searching for and retrieving information associated with a stimulus.

Table 8.2. Effects of Task States on Some Linguistic Processing Systems

Tasks                     Semantics   Output      Speech         Working   Other
                                      phonology   articulation   memory    language
Rest or Passive           +
Sensory Discrimination    +/–
Read or Repeat Covert     +           +                          +/–
Read or Repeat Overt      +           +           +              +/–
Phonetic Decision                     +                          +
Phonological Decision                 +                          +
Orthographic Decision                 +/–
Semantic Decision         +           +/–                        +         Semantic search
Word Generation Covert    +           +                          +         Lexical search
Word Generation Overt     +           +           +              +         Lexical search
Naming Covert             +           +                                    Lexical search
Naming Overt              +           +           +                        Lexical search

With these somewhat oversimplified stimulus and task characterizations, it is possible to make some general predictions about the processing systems in which the level of activation will differ when two task conditions are contrasted, and thus the likely pattern of brain activation that will be observed in a subtraction analysis. Some commonly encountered examples are listed below and in Table 8.3.
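The subtraction logic described above can be sketched in code. The following is a purely illustrative sketch, not from the chapter: each condition is represented as the set of processing systems it engages (loosely following Tables 8.1 and 8.2; the condition and system labels are invented for the example), and the predicted activation pattern for a contrast is simply the set difference between task and control.

```python
# Illustrative sketch: predicting which processing systems should differ
# when two conditions are contrasted in a subtraction analysis.
# Condition names and system labels are hypothetical, not from the chapter.

CONDITIONS = {
    "semantic_decision_auditory_words": {
        "early_auditory", "auditory_wordform", "semantics",
        "working_memory", "semantic_search",
    },
    "tone_discrimination": {
        "early_auditory", "working_memory",
    },
    # The semantic system is partly active even at rest (see text).
    "rest": {
        "semantics",
    },
}


def predicted_activation(task: str, control: str) -> set[str]:
    """Systems expected to be MORE active in the task than in the control."""
    return CONDITIONS[task] - CONDITIONS[control]


# The same language task contrasted with different controls yields very
# different predicted maps (cf. Myth 2): contrasting with rest cancels the
# semantic system, while contrasting with tone discrimination cancels early
# auditory and working-memory activity instead.
print(predicted_activation("semantic_decision_auditory_words", "rest"))
print(predicted_activation("semantic_decision_auditory_words", "tone_discrimination"))
```

Note that this captures only the qualitative logic of subtraction; graded engagement (the +/– entries in the tables) would require weights rather than set membership.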