Rather, like readers of printed symbols or listeners to the spoken word, they first discern the overall thrust of the message—its general gestalt. Then they select a limited number of suitable details that capture the particular thrust of the message in light of their stored knowledge. As Salvaggio (1980) put it, “. . . the spectator’s attention in any given scene is not necessarily focused on what is visually signified, but is concerned with placing the signified into a gestalt—the entire area suggested by what is seen” (p. 41).
Detecting the gestalt for coding usually depends heavily on a television story’s verbal lead-in or on pictures clearly identifying a specific person, location, or situation. These cues tell coders the overall meaning and significance of the message elements that they are about to witness. For instance, a group of children receiving food packets could be verbally identified as participants in a school lunch program or orphans receiving food in a refugee camp. Coders, just like ordinary viewers, can then structure the audiovisuals into a meaningful story within the gestalt provided for them by the messages in question and the flow of prior messages.
STUDYING CHANNELS THROUGH NETWORK ANALYSIS
they may miss important nonroutine relationships that may constitute exceptionally valuable new information links precisely because they tap unfamiliar sources.
Network studies may focus on the channels of communications within large political units like a state, or on a single public agency, or on a number of groups that have created a network of lobby organizations to promote passage of a specific law. Alternatively, researchers can focus on the personal networks of particular individuals, noting the people with whom they interact most often. Network analysts interested in interactions within small groups often concentrate on cliques of 5 to 25 people. Because many of the most important interactions in organizations, such as planning and decision making, generally take place in small groups, it is not surprising that specialized tools, such as Robert Bales’ (1950) Interaction Process Analysis, have been developed to analyze the interplay of messages exchanged among group members. The Bales technique classifies verbal interactions into 12 distinct categories and predicts success rates of dealings within small groups based on these data (Bales, 1950).
Deciding at which level of analysis the research should be focused is a major design conundrum that network analysts must face because there are no clear-cut limits to social networks (Baybeck & Huckfeldt, 2000; Knoke, 1990; Knoke & Kuklinski, 1987). Analysts may base this decision on the perceptions of the network members or on the purposes of the particular investigation. For example, in a study of communication breakdowns in the U.S. State Department during a crisis, the analysis could be limited to networks of civil servants at a particular employment grade. If investigators use a snowball-sampling approach to network analysis, in which they trace with whom a particular respondent communicates and then trace the contacts of these contacts, they must decide at what level to stop their inquiry.
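The logic of a snowball sample with a stopping level can be sketched in a few lines of code. The sketch below is purely illustrative; the names and contact lists are hypothetical, and the wave limit stands in for the researcher’s decision about where to stop the inquiry:

```python
from collections import deque

def snowball_sample(contacts, seeds, max_wave=2):
    """Trace contacts of contacts outward from seed respondents,
    stopping after max_wave waves (the stopping level the researcher chooses)."""
    sampled = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        person, wave = frontier.popleft()
        if wave >= max_wave:
            continue  # do not trace this person's contacts any further
        for contact in contacts.get(person, []):
            if contact not in sampled:
                sampled.add(contact)
                frontier.append((contact, wave + 1))
    return sampled

# hypothetical contact lists reported by respondents
contacts = {
    "Ann": ["Bob", "Cho"],
    "Bob": ["Dee"],
    "Cho": ["Eli"],
    "Dee": ["Fay"],
}
print(sorted(snowball_sample(contacts, ["Ann"], max_wave=2)))
# ['Ann', 'Bob', 'Cho', 'Dee', 'Eli'] -- Fay, a third-wave contact, is excluded
```

Raising `max_wave` widens the focus, which is exactly the trade-off the text describes: each additional wave can multiply the number of people, and hence linkages, that must be processed.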
Depending on available resources, a wider focus is generally better because an overly narrow focus may capture only truncated network structures. On the other hand, if the focus is too wide, it may be impossible to collect and process all of the data. For example, a network that includes 5,000 people encompasses roughly 25 million linkage possibilities. Despite great strides in computing capacity, handling such massive data sets effectively remains difficult (Scott, 2000).
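The arithmetic behind the 25 million figure is simply the number of ordered sender-receiver pairs among 5,000 members:

```python
n = 5000
directed_pairs = n * (n - 1)          # every ordered sender-receiver pair
undirected_pairs = n * (n - 1) // 2   # if direction is ignored
print(f"{directed_pairs:,}")          # 24,995,000 -- the "25 million" cited
```

Because the count grows quadratically, doubling the network to 10,000 members quadruples the linkage possibilities, which is why an overly wide focus quickly becomes unmanageable.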
Data about network structures are usually derived from questionnaires, interviews, diaries, focus groups, and on-site observations of the routes that messages take in linking message senders and receivers. These data are normally gathered from each member of an organization, not merely from a random sample. Researchers ascertain how often network members communicate with specific individuals and what type of information is conveyed. It may be useful to distinguish between work-related and social messages and among transmissions of facts, judgments, and opinions. Researchers may also inquire about who ordinarily initiates the information exchange. To make recall easier, investigators may give respondents name lists of each respondent’s likely contact partners within the network.
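Questionnaire responses of this kind are typically assembled into a weighted adjacency matrix before analysis. A minimal sketch, using hypothetical contact-frequency reports (times per week):

```python
# hypothetical questionnaire responses: how often each member reports
# contacting each listed colleague (times per week)
reports = {
    "Ann": {"Bob": 5, "Cho": 1},
    "Bob": {"Ann": 4},
    "Cho": {},
}

members = sorted(reports)
# rows = senders, columns = receivers; 0 means no reported contact
matrix = [[reports[row].get(col, 0) for col in members] for row in members]
for name, row in zip(members, matrix):
    print(name, row)
# Ann [0, 5, 1]
# Bob [4, 0, 0]
# Cho [0, 0, 0]
```

Note that Ann reports five weekly contacts with Bob while Bob reports only four with Ann; such asymmetries in self-reports are one reason some investigators distrust them, as the next paragraph observes.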
In place of self-reports by network members, which some investigators distrust as potentially unreliable, networking can be discovered through direct observations or through scanning records of past communications. Archival data may have the added advantage of covering long time periods. Researchers may be able to use them for insights into networking in diverse settings and in diverse external circumstances, like periods of economic or political growth or decline.
After the data about links in the chains of communications have been collected, computer programs can identify formal and informal networks. A variety of analyses is possible, including sociometric analysis, spatial analysis, matrix analysis, factor analysis, block-modeling techniques, multidimensional scaling techniques, and cluster analysis (Scott, 2000). Conventional statistical methods can be used to describe the properties of various aspects of the network structures. However, since network data violate the random-sampling assumptions that underlie statistical inference, conventional statistical analyses may be problematic (Knoke, 1990).
Network analysis can reveal network characteristics that are invaluable for tracing relationships of power and influence within political organizations and systems. For example, it can show how centrally located various individuals, groups, or organizations are within a particular communication network and who the gatekeepers are who control access to various network members. It can also reveal the strength, frequency, and speed of interactions. Network analysis can measure network cohesion to determine the proportion of network relations that are reciprocated. It can measure what kinds of information circulate and who is included in or excluded from the flow of communication. Tracing these patterns permits analysts to determine how well coordinated a communication network is overall and within its subordinate units and how adequately various communication roles are handled. Of course, the standards used to judge adequacy depend on the perspectives from which organizations are viewed. Since networks involve multiple constituencies, multiple perspectives may have to be considered (Provan & Milward, 1995). Supervisors may consider the channels in an employee complaint system adequate, whereas employees may deem them woefully inefficient.
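Two of the measures mentioned above, reciprocity (the cohesion measure) and in-degree (a crude indicator of centrality and gatekeeping), are simple to compute from a list of directed ties. The ties below are hypothetical:

```python
from collections import Counter

def reciprocity(edges):
    """Proportion of directed ties that are returned by the receiver."""
    ties = set(edges)
    mutual = sum(1 for a, b in ties if (b, a) in ties)
    return mutual / len(ties)

# hypothetical directed contact ties among four network members
edges = [("A", "B"), ("B", "A"), ("A", "C"), ("C", "D")]
print(reciprocity(edges))   # 0.5 -- two of the four ties are reciprocated

# in-degree: how many members direct messages to each person
in_degree = Counter(receiver for _, receiver in edges)
print(dict(in_degree))
```

Specialized packages (e.g., NetworkX) provide these and far more sophisticated measures, such as betweenness centrality for identifying gatekeepers; the sketch merely shows what the quantities mean.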
In recent years, researchers have moved beyond merely tracing the channels through which messages travel and their general thrust and have begun to focus more closely on their content as indicators of linkages. James Danowski (1991) has pioneered a technique called Word Network Analysis that uses computerized content analysis to detect shared words and concepts among message senders. This approach can provide fresh insights into the ideas circulating within an organization and the manner in which organizations develop common concepts and reach consensus.
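The core idea of linking senders by shared vocabulary can be illustrated with a toy sketch. This is not Danowski’s implementation, only a hypothetical illustration of the principle, with invented senders and messages:

```python
from itertools import combinations

def word_overlap_network(messages):
    """Link pairs of senders by the vocabulary their messages share --
    a toy version of content-based linkage detection."""
    vocab = {sender: set(text.lower().split()) for sender, text in messages.items()}
    links = {}
    for a, b in combinations(vocab, 2):
        shared = vocab[a] & vocab[b]
        if shared:
            links[(a, b)] = shared
    return links

# hypothetical one-line messages from three organizational units
messages = {
    "press_office": "budget reform consensus",
    "field_staff": "budget complaints overtime",
    "director": "reform timeline",
}
print(word_overlap_network(messages))
# links press_office to field_staff via 'budget' and to director via 'reform'
```

A real word-network analysis would work with word co-occurrence across large message corpora rather than simple set overlap, but the output has the same structure: a network whose links are defined by shared concepts rather than by observed contact.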
EXPERIMENTS AND SIMULATIONS
One of the surprises in our scrutiny of recently published political communication studies was the high incidence of experimental research, which constituted 16% of our sample. Considering that the use of experiments in political communication studies began in earnest only in the 1980s, that is an unexpectedly high share.
The principal advantage of experimental studies is the researcher’s ability to control the stimuli to which research subjects are exposed. Conducting research under rigorously monitored laboratory or field conditions avoids the stimulus adulteration that occurs in natural settings where multiple stimuli are present and interact with the messages whose impact the researcher wants to test. Accordingly, compared to nonexperimental methods, experiments permit researchers to draw much more reliable causal inferences—linking stimuli to particular effects—even when experimental settings are quite unnatural. In laboratory studies of the effects of election campaign commercials, for example, researchers can expose subjects to a particular advertisement and know that the subjects have actually seen it as well as knowing the precise content of the stimulus commercial. As part of the experiment, researchers can vary details in the ad to isolate which of its features produced
the detected impact. Such precision and such manipulations are unattainable when survey research is the method of choice.
The chief problem posed by experimental research is that the settings in which most experiments are conducted are artificial. Message impact may be quite different in laboratory settings where subjects are exposed to a single stimulus at a time than in natural situations that abound in multiple interacting stimuli. For instance, a finding that a report about a president’s marital infidelity reduced his appeal to voters by 30% may have little validity in the real world. Competing news about the president—purposely omitted from the laboratory tests—may wipe out the impact of the infidelity story. The fact that college undergraduates are often used as experimental subjects also impairs the validity of experimental studies. College students are a unique population, quite unlike average adult Americans. Therefore, the findings yielded by college student samples cannot be generalized to larger populations without further testing. However, fairness requires mentioning that widely accepted methods, such as well-conducted survey research, also have chinks in their methodological armor: the problems with sampling, question wording, and flawed memories discussed earlier are examples.
Experimental studies serve many important functions in political communication research. Scholars are using them increasingly as a more finely honed tool to probe media impact. Mostly such studies have involved exposing small groups of people to carefully selected information stimuli, often in multiple versions. Investigators have then measured their subjects’ retention of the information or assessed attitudes and opinions, or changes in them, generated by the messages (Iyengar, 2001; Leshner, 2001; McDevitt & Chaffee, 2000; Neuman et al., 1992). Some of these tests have involved measuring physiologic reactions like heart rates and skin conductance and even blood flow to brain cells (Grabe, Lang, Zhou, & Bolls, 2000).
Such research has helped substantially in discovering message and context factors that aid or deter learning, including assessing the impact of priming and framing phenomena whereby prior messages or associations suggested by messages become potent interpretation contexts (Scheufele, Shanahan, & Lee, 2001).
Experiments have also been used to compare the qualities of various research tools (Wright et al., 1998). For instance, researchers have experimented with diverse rating schemes in surveys to ascertain which produces the highest response rates or the finest discriminations by respondents. Question wording, which is always a major challenge, has benefited from numerous experiments that determined which of several versions worked best.
Internet experiments are the newest additions to the experimental research toolkit. Shanto Iyengar (2001), who has engaged in such experiments, contends that
traditional experimental methods can be rigorously and far more efficiently replicated using on-line strategies. . . . researchers have the ability to reach diverse populations without geographic limitations. The rapid development of multimedia-friendly Web browsers makes it possible to bring text or audiovisual presentations to the computer screen. Indeed, the technology is so accessible that subjects can easily ‘self-administer’ experimental manipulations. (p. 227)
Iyengar’s research shows that demographically sound samples can be easily and cheaply recruited on the Web, provided that adjustments are made for the lingering
digital divide (Iyengar, Hahn, & Prior, 2001). Administering the experiment on the Web saves costs related to site rentals and travel expenses. It vastly expands the pool from which subjects can be recruited.
Simulations are another method used to test interactive communication behaviors under controlled contingencies. For example, one may want to simulate a negotiation session between warring factions to test which approach might produce the most satisfactory settlement of the dispute. Many simulations are now performed as computerized exercises. Government agencies and think tanks are heavy users of these technologies.
If successful, experimental studies and small-group intensive research often serve as pilots that pretest hypotheses for studies done on a larger scale. Findings from experimental research and depth interviews may also fill gaps and broaden the insights gained from larger surveys. For example, extensive probing of the thinking behind a respondent’s answers to survey questions, which is possible in small-scale research, can help in interpreting survey responses. A combination of these complementary research approaches, though expensive, is therefore ideal.