
CONTENT ANALYSIS TECHNIQUES

When research focuses on political messages, some form of content analysis is in order. It can be applied to all types of message content, including written or printed documents, recorded messages, films, and audio tapes. Messages may even be analyzed instantaneously if observers are present when they are uttered initially.

Using trained human coders to identify textual elements is still the most common content analysis approach. When voluminous data sets need to be analyzed, "manual" content analysis is a very time-consuming, tedious, and costly technique.

Computer content analysis has become a good alternative, although preparation of data for computer analysis may also be costly and time-consuming, especially when the data are not initially in machine-readable form.

Refinements in computer coding constitute the most important methodological developments in content analysis research. They include the development of numerous software packages for various kinds of analyses of computer-readable texts. Some analysis programs provide researchers with dictionaries that have been pretested for their usefulness in detecting specified textual content, such as the ideological orientation of a message or the cognitive sophistication of political speakers. Researchers can also construct their own search dictionaries with the aid of these programs.
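
To make the idea concrete, the sketch below (in Python) shows how a search dictionary can drive automated coding. The categories and terms are invented for illustration; they are not drawn from any pretested research dictionary.

```python
# A minimal sketch of dictionary-based coding. The category terms here
# are hypothetical stand-ins, not a validated research dictionary.
import re
from collections import Counter

SEARCH_DICTIONARY = {
    "economy": {"inflation", "unemployment", "taxes", "deficit"},
    "security": {"defense", "terrorism", "military", "border"},
}

def code_text(text: str) -> Counter:
    """Count how many words in the text fall into each dictionary category."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for category, terms in SEARCH_DICTIONARY.items():
            if word in terms:
                counts[category] += 1
    return counts

speech = "Rising inflation and unemployment dominated the debate on taxes."
print(code_text(speech))  # Counter({'economy': 3})
```

A pretested dictionary would replace the invented term lists, but the counting logic would remain the same.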

Tasks handled by computers range from simple word frequency counts to comparisons of entire texts to identify their similarities and differences. Many programs can detect textual themes as well as specific words and strings of words. They can also identify the context in which selected words and phrases are embedded. With such "key word in context" (KWIC) programs, investigators can choose to identify key words in a specified context rather than identifying all key words regardless of context. Key words, as well as all contextual words that appear at a specified distance before and after a key word, can be printed out.
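
The following sketch, again using invented text, shows the core of a KWIC routine: every occurrence of a key word is printed together with a fixed window of surrounding words, which a researcher can then scan or filter by context.

```python
# A minimal key-word-in-context (KWIC) sketch: report each hit for a key
# word together with a window of n context words on either side.
import re

def kwic(text: str, key: str, window: int = 4) -> list[str]:
    tokens = re.findall(r"\w+", text)
    hits = []
    for i, tok in enumerate(tokens):
        if tok.lower() == key.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append(f"{left} [{tok}] {right}")
    return hits

editorial = ("The budget crisis deepened while officials denied that "
             "the diplomatic crisis had any bearing on the talks.")
for line in kwic(editorial, "crisis", window=2):
    print(line)
# The budget [crisis] deepened while
# the diplomatic [crisis] had any
```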

Content analysis can be either quantitative or qualitative. Often, it is a mixture of both designed to take advantage of the strengths of each approach. Qualitative analysis of the content of reports, interviews, focus group transcripts, and other messages is usually based on techniques for interpreting messages that humans learn from interacting with others throughout their lives. This type of qualitative analysis, if done systematically based on well-defined criteria, can be very useful and accurate. It has the advantage of allowing researchers to employ many of the intuitive skills for message interpretation that humans possess. These skills include understanding the connotations that are attached to messages, sensing their emotional impact, and spinning out widely believed implications and their consequences. Although such judgments are inevitably colored by individual attitudes and opinions, they also reflect the collective memories of the cultural communities in which political messages circulate.

Quantitative analysis, as distinguished from qualitative analysis, involves establishing readily measurable, minimally judgmental criteria for defining the message elements to be detected and the indicators that signal the presence or absence of these elements. Selection criteria are then used for systematic examination of the chosen content. For example, a study of how often Commerce Department employees refer to congressional rules might be based on scrutiny of all annual reports of the department. Every reference to congressional rule making might be noted and scored on a scale rating the length of the reference.
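
As a rough illustration of such a coding rule, the sketch below finds each sentence mentioning congressional rule making and assigns a length score. The regular expression, the sentence as the scoring unit, and the scale cutoffs are all assumptions made for the example, not part of any actual study.

```python
# A hedged sketch of the annual-report example: locate each reference to
# a target phrase and score it on a simple three-point length scale.
import re

def score_references(report: str,
                     pattern: str = r"congressional rule[- ]?making") -> list[int]:
    """Return a length score (1 = brief, 2 = moderate, 3 = extended)
    for each sentence that mentions the target phrase."""
    scores = []
    for sentence in re.split(r"(?<=[.!?])\s+", report):
        if re.search(pattern, sentence, flags=re.IGNORECASE):
            words = len(sentence.split())
            scores.append(1 if words < 15 else 2 if words < 30 else 3)
    return scores

report = ("Congressional rulemaking shaped our budget. "
          "The department also pursued other priorities this year.")
print(score_references(report))  # [1]
```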

Systematic, quantitative recording of data guards against errors that may arise from more casual procedures that often fail to stick to a chosen protocol. Quantitative methods, using computers, make it possible to subject large databases to rigorous analyses. That task would be beyond the capacity of painstaking qualitative analysis. Rigorous quantitative procedures also make it feasible to apply complex mathematical tests to the findings. For example, factor analysis may reveal clusters of concepts that would escape the intuitive analyst. Multiple regression analysis may permit predictions about changes in communication variables that can be expected when the communication situation changes. Mathematical models can facilitate sophisticated analyses that may aid in restructuring ineffective communication networks. The price to be paid for increased capacity to handle large data sets rigorously, however, may be an overly mechanical analysis that distorts the meanings that the content is likely to convey to live audiences (Kaufman, Dykers, & Caldwell, 1994; Popping, 2000; Rosenberg, Schnurr, & Oxman, 1990).

Just as qualitative analysis has quantitative aspects because investigators count the presence or absence of specified content characteristics, the reverse is also true: deciding whether certain criteria are present or absent often involves subjective considerations. For example, an automated search of newspaper editorials during an international crisis may still require a detailed examination by human coders to check that each mention of the word crisis does, indeed, signify attention to the international event that is under scrutiny.

If researchers want to evaluate messages, subjective judgments may be unavoidable. If they use a scale assessing urgency, for example, it may require human judgments to determine at what point "great" urgency becomes "somewhat great" or "neutral." Such distinctions are still difficult to incorporate into computer programs despite great progress in artificial intelligence studies. However, some software packages, like newer versions of the venerable General Inquirer system, do incorporate artificial intelligence systems that make some of the finer distinctions.

All types of content analysis, irrespective of methodological advances, entail a series of important steps, beginning with selection of the body of data to be examined to shed light on the research hypotheses. When researchers use bodies of data from archives, like those mentioned earlier, they must be sensitive to the archive's data collection methods. For example, the Vanderbilt Television News Archive continuously collects the early evening national television news broadcast by the major networks, but it samples other types of political messages, such as presidential addresses. Other archives, like the local broadcasts collected by Chicago's Museum of Broadcast Communications, rotate through the news broadcasts aired by major local stations throughout each week.

After the text to be scrutinized has been chosen, the unit of analysis must be determined. That means that investigators must decide how large or small a portion of the research material they want to code for the desired information. For example, if the research purpose is to look for the frequency of mention of global warming in the Denver Post, and the unit of analysis is the front page of the paper, then a single observation would be recorded for each front page that mentions the topic. If, however, the unit of analysis is every paragraph on the front page and global warming is mentioned in seven paragraphs, there would be seven recordable observations.
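
The effect of this choice is easy to demonstrate in code. In the sketch below, the same invented front page yields one observation when the page is the unit and two when the paragraph is the unit.

```python
# Illustrating how the choice of unit of analysis changes the observation
# count, using the global-warming example (the text is invented).
front_page = [
    "Global warming summit opens amid protests over emissions targets.",
    "Local election results surprise party strategists.",
    "Scientists link the heat wave to global warming trends.",
]

# Unit = the whole front page: one observation if the topic appears at all.
page_level = int(any("global warming" in p.lower() for p in front_page))

# Unit = the paragraph: one observation per paragraph that mentions it.
paragraph_level = sum("global warming" in p.lower() for p in front_page)

print(page_level, paragraph_level)  # 1 2
```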

Generally, the smaller the unit of analysis, the larger will be the total number of recorded observations for a particular sample of content. Qualitative analysts tend to use larger units compared to quantitative analysts, who normally prefer smaller ones.

After the unit of analysis has been chosen, codes and indexes must be developed to guide investigators in detecting the chosen content elements. Preparing a codebook that describes in detail how the research must be executed is a crucial aspect of content analysis because the ultimate value of most studies hinges on the insight and skill with which variables that are important for the investigation have been identified and defined. Variables must be mutually exclusive so that coders can distinguish them from each other.
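
When coding is supported by software, a codebook can be mirrored as a data structure that enforces the mutually exclusive categories. The sketch below shows one hypothetical way to do so; the variables and category labels are invented for the example.

```python
# A sketch of how a codebook entry might be represented and enforced in
# software. The variables and categories here are hypothetical.
CODEBOOK = {
    "tone": {
        "description": "Overall evaluative tone of the story toward the candidate.",
        "categories": {"positive", "negative", "neutral"},  # mutually exclusive
    },
    "topic": {
        "description": "Dominant policy topic of the story.",
        "categories": {"economy", "security", "health", "other"},
    },
}

def validate(coding: dict[str, str]) -> None:
    """Reject codes outside the codebook, so coders cannot blur categories."""
    for variable, value in coding.items():
        allowed = CODEBOOK[variable]["categories"]
        if value not in allowed:
            raise ValueError(f"{value!r} is not a valid code for {variable!r}")

validate({"tone": "negative", "topic": "economy"})  # passes silently
```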

It is often difficult to identify all of the elements of content that should be recorded. For example, a researcher may want to investigate how frequently race identification is included in an agency's personnel files. Should mention of the client's address be considered racial identification when it refers to a section of town that is known to be overwhelmingly populated by a single race? Should identification of the client as a member of a racially oriented organization be deemed racial identification? Such questions illustrate the delicacy of making coding decisions and the significance of choice alternatives.

The effectiveness of codebooks and the success of coders in following the directions accurately can be checked through various reliability tests that assess intercoder reliability: the extent of agreement among coders about code choices. Checks entail calculating the percentage of coder agreement based on the total number of coding decisions. Some of the widely used formulas, such as Scott's pi, adjust the calculations to reflect the amount of coder agreement likely to occur by chance (Holsti, 1969; Neuendorf, 2001).
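
Both checks are straightforward to compute. The sketch below calculates raw percent agreement and Scott's pi for two hypothetical coders; note how pi, which discounts agreement expected by chance, falls below the raw percentage.

```python
# A minimal sketch of the two reliability checks named in the text: raw
# percent agreement and Scott's pi, which corrects for chance agreement
# using the coders' pooled category proportions.
from collections import Counter

def percent_agreement(a: list[str], b: list[str]) -> float:
    return sum(x == y for x, y in zip(a, b)) / len(a)

def scotts_pi(a: list[str], b: list[str]) -> float:
    p_o = percent_agreement(a, b)
    pooled = Counter(a) + Counter(b)   # both coders' choices combined
    total = sum(pooled.values())
    p_e = sum((n / total) ** 2 for n in pooled.values())
    return (p_o - p_e) / (1 - p_e)

coder1 = ["pos", "pos", "neg", "neu", "pos", "neg"]
coder2 = ["pos", "neg", "neg", "neu", "pos", "neg"]
print(round(percent_agreement(coder1, coder2), 2))  # 0.83
print(round(scotts_pi(coder1, coder2), 2))          # 0.73
```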

Although perfect agreement on coding decisions is rare because it is difficult to specify all contingencies and subtleties, one would normally expect coders to agree on about 80% or more of their decisions. Intracoder reliability refers to the ability of coders to replicate their own decisions after a period of time. For well-trained coders, it should be even higher than intercoder agreement. When agreements among skilled coders drop below acceptable levels, categories may have to be redefined or even redesigned.

Most researchers customize their codebooks to best suit their individual research purposes. This has obvious advantages but also makes comparisons of various studies far more difficult. Despite valiant efforts, scholars have been unable to agree on uniform coding categories for comparable studies. For a while, there was hope that wide use of the coding dictionaries in computer coding programs might lead to greater uniformity of coding and hence greater comparability of studies. But programs using diverse approaches have mushroomed, so that wide diversity of coding schemes still reigns.

The goals of content analysis are also very diverse, and research methods will vary accordingly (Riffe, Lacy, & Fico, 1998). For example, scholars interested in political symbolism may use dramaturgical analysis and fantasy theme analysis to explore the dramatic and fantasy themes in political life. Some scholars use the principles of hermeneutics to study the verbal construction of social meanings. Others use ethnomethodology, which is the study of explanations people give about their daily experiences, and symbolic interactionism, which assesses how people use symbols to communicate with one another. Nimmo and Combs (1990), for example, have used a dramatist perspective to examine election communication. They conceptualize elections as dramatic rituals played for audiences of prospective voters to show the candidates locked in a heroic struggle. Nimmo and Combs contend that audiences perceive these rhetorical visions as reality and gear their expectations of future performance to these visions.

Researchers frequently examine messages as clues to underlying political, social, and economic conditions, such as international tensions, confidence in government, and fear about economic declines. In the 1950s and 1960s, for example, a group of prominent social scientists collaborated in the Hoover Institution's RADIR studies. The acronym stands for Revolution And Development of International Relations. The researchers believed that they could infer the knowledge and value structures and priorities prevalent in various nations by studying images disseminated through their mass media. If these political climates were known, one could forecast political developments in these nations (de Sola Pool, 1959; Lasswell & de Sola Pool, 1952).

More recently, researchers have looked at coverage of the 1996 Telecommunications Act to detect whether the interests of the corporate owners of the news medium impacted coverage (Gilens & Hertzman, 2000). They have examined the perspectives from which news stories, such as the advent of the euro as Europe's new currency or public journalism in New Zealand, were written to determine the nature of the "framing" and infer the consequences (McGregor, Fountaine, & Comry, 2000; Semetko & Valkenburg, 2000). And they have analyzed election campaign speeches to detect the presence of specific themes and rhetorical patterns (Benoit, Blaney, & Pier, 2000; Hershey & Holian, 2000).

Message content can also be used to infer the psychological characteristics, beliefs, motivations, and strategies of political leaders (DeMause, 1986; Winter & Carlson, 1988). Even when the psychological characteristics remain obscure, valuable inferences can be drawn about power configurations by knowing which political personalities are cited and in what connections their messages are reported.