Thorne (2000) explains that each approach to qualitative data analysis has a different purpose for, and a different process used in, drawing conclusions. In grounded theory, the process for analyzing data is labeled the constant comparative method. Using this process, the researcher compares each new piece of data with data previously analyzed. Each time, questions are asked about the similarities and differences between the compared pieces of data. The ultimate goal is the development of a theory about why a particular phenomenon exists as it does: what is the basic social-psychological process that is occurring? For a full description of the process, the reader is directed to Glaser and Strauss (1967).
In phenomenology, the process of interpretation may vary based on the philosophic tradition used. Regardless of the specific tradition, all support “immersing oneself in data, engaging with data reflectively, and generating a rich description that will enlighten a reader as to the deeper essential structures underlying the human experience” (Thorne, 2000, p. 69).
For ethnographers, the focus of data analysis is to offer a description of a culture based on participant observation, interviews, and artifacts.
“Ethnographic analysis uses an iterative process in which cultural ideas that arise during active involvement ‘in the field’ are transformed, translated or represented in written document” (Thorne, 2000, p. 69). The researcher asks questions, analyzes the answers, develops more questions, and analyzes those answers in a repeating pattern until a full picture of the culture emerges.
Regardless of the methodological approach used, the goal of data analysis is to illuminate the experiences of those who have lived them by sharing the richness of lived experiences and cultures. The researcher has the responsibility of describing and analyzing what is presented in the raw data to bring to life particular phenomena. It is only through rich description that we will come to know the experiences of others. As Krasner (2001) states, “stories illuminate meaning, meaning stimulates interpretation, and interpretation can change outcome” (p. 72).
‘evidence.’” (Nelson, 2008, p. 316). Equally important is the need to constantly question the predominant paradigm’s structure and function. Dualist thinking does not advance nursing knowledge, nor does it add substantially to what we know about the people we care for or the lives they lead.
Advocacy for being open to alternative ways of knowing is essential. Emden and Sandelowski (1999) offer a “criterion of uncertainty” (p. 5) as an important criterion for addressing rigor in qualitative research. This criterion provides for “an open acknowledgement that claims about research outcomes are at best tentative and that there may indeed be no way of showing otherwise” (p. 5).
At the outset, it is important to state that “no one set of criteria can be expected to ‘fit the bill’ for every research study” (Emden & Sandelowski, 1999, p. 6). Further, it is important to recognize that, ultimately, our decisions regarding the rigor in a research study amount to a judgment call (p. 6). With these two assumptions in mind, rigor in qualitative research is demonstrated through researchers’ attention to and confirmation of information discovery. The goal of rigor in qualitative research is to accurately represent study participants’ experiences. There are different terms to describe the processes that contribute to rigor in qualitative research. Guba (1981) and Guba and Lincoln (1994) have identified the following terms that describe operational techniques supporting the rigor of the work: credibility, dependability, confirmability, and transferability.
Credibility includes activities that increase the probability that credible findings will be produced (Lincoln & Guba, 1985). One of the best ways to establish credibility is through prolonged engagement with the subject matter. Another way to confirm the credibility of findings is to see whether the participants recognize the findings of the study to be true to their experiences (Yonge & Stewin, 1988). The act of returning to the informants to see whether they recognize the findings is frequently referred to as member checking. Creswell (2003) offers that member checking should be used “to determine the accuracy of the qualitative findings through taking the final report or specific descriptions or themes back to participants and determining whether these participants feel that they are accurate” (p. 196). When using member checks to assure credibility, it is important to weigh a respondent’s comments against the larger pool of informants. McBrien (2008) states, “member checking can provide correlating evidence to support the truthfulness and consistency of the findings; however, on the other hand, an over reliance on member checking can potentially compromise the significance of the research findings” (p. 1287).
Another method of improving the credibility of the findings is peer debriefing. Significant debate has arisen around this concept. Peer debriefing has been described by Lincoln and Guba (1985) as “a process of exposing oneself to a disinterested peer in a manner paralleling an analytic session . . . for the purpose of exploring aspects of the inquiry that might otherwise remain only implicit within the inquirer’s mind” (p. 308). Some authors (Morse, 1994; Cutcliffe & McKenna, 1999) contend that an independent colleague has had less contact with the study participants and as such is less able to judge the adequacy of the interpretation. Despite these objections, “the process may enable the researcher to make reasoned methodological choices and can ensure that emergent themes and patterns can be substantiated in the data” (McBrien, 2008).
Dependability is a criterion met once researchers have demonstrated the credibility of the findings. The question to ask, then, is this: how dependable are these results? Sharts-Hopko (2002) submits that triangulation of methods has the potential to contribute to the dependability of the findings. Similar to validity in quantitative research, in which there can be no validity without reliability, the same holds true for dependability: there can be no dependability without credibility (Lincoln & Guba, 1985).
Confirmability is a process criterion. The way researchers document the confirmability of the findings is to leave an audit trail, which is a recording of activities over time that another individual can follow. This process can be compared to a fiscal audit (Lincoln & Guba, 1985). The objective is to illustrate as clearly as possible the evidence and thought processes that led to the conclusions. This particular criterion can be problematic, however, if you subscribe to Morse’s (1989) ideas regarding the related matter of saturation. It is the position of Morse that another researcher may not agree with the conclusions developed by the original researcher. Sandelowski (1998a) argues that only the researcher who has collected the data and been immersed in them can confirm the findings.
Transferability refers to the probability that the study findings have meaning to others in similar situations. Transferability has also been labeled “fittingness.” The expectation for determining whether the findings fit or are transferable rests with potential users of the findings and not with the researchers (Greene, 1990; Lincoln & Guba, 1985; Sandelowski, 1986). As Lincoln and Guba (1985) have stated,
It is . . . not the naturalist’s task to provide an index of transferability; it is his or her responsibility to provide the database that makes transferability judgment possible on the part of potential appliers (p. 316).
These four criteria for judging the rigor of qualitative research are important; they define for external audiences the attention qualitative researchers render to their work.
More recently, Pawson et al. (2003) have developed an additional set of criteria to judge the rigor of qualitative research. Although not as widely used as Lincoln and Guba’s (1985) criteria, they offer the reader another alternative. Pawson et al.’s model uses the acronym TAPUPAS: transparency, accuracy, purposivity, utility, propriety, accessibility, and specificity. The original work provides a full description of each criterion.