causal mechanism, usually some kind of microfoundation, are capable of serving as a basis for this kind of deduction. This ensures that case studies become embedded in the fundamental theoretical debates within the social sciences. The quality of a case study, thus, does not depend on providing detailed evidence for every step of a causal chain; rather, it depends on a skillful use of empirical evidence for making a convincing argument within a scholarly discourse that consists of competing or complementary theories.

The adequate structure for documenting case study findings is chronological for naturalists, linear–analytic for positivists, and comparative for constructivists.

Joachim K. Blatter

See also Constructivism; Generalizability; Historical Research; Interpretive Research; Narrative Analysis; Naturalistic Inquiry; Positivism

Further Readings

George, A., & Bennett, A. (2005). Case study and theory development in the social sciences. Cambridge, MA: MIT Press.

Gerring, J. (2007). Case study research: Principles and practices. Cambridge, UK: Cambridge University Press.

Gomm, R., Hammersley, M., & Foster, P. (Eds.). (2000). Case study method: Key issues, key texts. London: Sage.

Stake, R. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Yin, R. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage.

process. As more researchers gain a growing appreciation for qualitative methods, investigators from different institutions, disciplines, countries, and cultures will form more collaborative efforts that require multiple analyses examining concepts at the categorical level.

Denise O’Neil Green

See also ATLAS.ti (Software); Codes and Coding; NVivo (Software); Themes

Further Readings

Richards, L., & Morse, J. (2007). Read me first for a user's guide to qualitative methods (2nd ed.). Thousand Oaks, CA: Sage.

CATEGORIZATION

Categorization is a major component of qualitative data analysis by which investigators attempt to group patterns observed in the data into meaningful units or categories. Through this process, categories are often created by chunking together groups of previously coded data. This integration or aggregation is based on the similarities of meaning between the individually coded bits as observed by the researcher. Categories in turn may be abstracted or conceptualized further to discern semantic, logical, or theoretical links and connections between and across the categories. The results of this process may lead to the creation of themes, constructs, or domains from the categories.

Categories can also be seen as an intermediary step in an ongoing process of separating and connecting units of meaning based on the qualitative data being collected. Coding is often the first step in the analytic process as researchers attempt to make meaning of the various bits of information collected in the field or generated during interviews. As a second step in the ongoing process, researchers look for connections between or among these separate codes. This coding of the content can produce categories as researchers discern linking patterns between or among the individual codes. The analytic process continues as researchers next look for patterns that run through and across the system of categories. The results of this categorization of the categories can lead to the creation of themes, constructs, or domains.
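This layering of coded excerpts into categories, and of categories into themes, can be pictured as a simple nested data structure. The sketch below is a hypothetical Python illustration only; the codes, excerpts, and labels are invented, and software packages such as NVivo or ATLAS.ti maintain their own, more elaborate structures.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedExcerpt:
    code: str   # label assigned during first-pass coding
    text: str   # raw excerpt from a transcript or field note

@dataclass
class Category:
    name: str
    members: List[CodedExcerpt] = field(default_factory=list)  # similar coded bits chunked together

@dataclass
class Theme:
    name: str
    categories: List[Category] = field(default_factory=list)   # categories linked by a broader pattern

# First step: individual bits of data receive codes (invented examples).
excerpts = [
    CodedExcerpt("isolation", "I rarely hear from my co-workers since going remote."),
    CodedExcerpt("flexibility", "I can schedule work around my children now."),
    CodedExcerpt("loneliness", "Evenings feel emptier than they used to."),
]

# Second step: codes with similar meaning are aggregated into categories.
social_cost = Category("Reduced social contact", [excerpts[0], excerpts[2]])
autonomy = Category("Gains in autonomy", [excerpts[1]])

# Third step: patterns running across categories are abstracted into a theme.
theme = Theme("Trade-offs of remote work", [social_cost, autonomy])
print(theme.name, [c.name for c in theme.categories])
```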

The categorization process encourages researchers to describe overtly what they have observed and to segment the observed phenomena into units. The characteristics or internal properties of the categories are further developed or discovered as researchers continually and transparently note or memo how all coded units of meaning within a particular category are similar and how the coded units within the category contrast with other coded units perceived as being outside the category in question. Researchers can use a variety of techniques to accomplish this goal, including posing a priori questions from existing theoretical systems (i.e., a deductive approach) and testing the integrity of the categories by constantly judging the credibility of the categories with further observations based on the data (i.e., an inductive approach). Researchers can also use a combination of both inductive and deductive logic in creating and refining categories. The process of categorization continues in a research project until saturation (i.e., no further categories are discovered or constructed based on examination of new generated data) or exhaustion (i.e., the existing system of categories accounts for all meaningful or significant aspects of the phenomenon in question).
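The saturation criterion can be sketched schematically as a loop that processes successive batches of data and stops once a whole batch yields no new categories. The fragment below is an illustrative Python sketch under strong simplifying assumptions: the judgment of whether an excerpt fits an existing category is the researcher's interpretive work, represented here only by a placeholder keyword match.

```python
def fits_existing_category(excerpt: str, categories: dict) -> bool:
    """Placeholder for the researcher's judgment that an excerpt is
    accounted for by an existing category (keyword match as a stand-in)."""
    return any(keyword in excerpt.lower()
               for keywords in categories.values()
               for keyword in keywords)

def categorize_until_saturation(batches, categories):
    """Process successive batches of excerpts; stop once an entire batch
    adds no new categories (saturation)."""
    for batch in batches:
        new_in_batch = False
        for excerpt in batch:
            if not fits_existing_category(excerpt, categories):
                # In practice the researcher names the category and memos the
                # decision; a one-word keyword list stands in for its properties.
                label = f"category_{len(categories) + 1}"
                categories[label] = [excerpt.split()[0].lower()]
                new_in_batch = True
        if not new_in_batch:
            break
    return categories

# Example call with two invented batches of excerpts.
result = categorize_until_saturation(
    [["Money worries dominate.", "Money is tight."], ["Money again."]], {})
print(result)
```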

In constructing a system of categories, it is important for researchers to evaluate how each category has internal integrity (i.e., is there a high degree of homogeneity across the individual coded units within the category?) and external integrity (i.e., is there a high degree of heterogeneity or differentiation between or among the array of homogeneous categories?).

Researchers not only must judge the internal and external coherence across the system of categories but also must be cognizant of the coherence between the categories and the phenomenon in question. Researchers should endeavor to create an exhaustive system of categories so that no meaningful feature of the phenomenon under study falls outside the array of categories. In such a fashion, the process of categorization operates along dual planes of focus: horizontality (i.e., category-to-category relationships) and verticality (i.e., category-to-phenomenon relationships).

Establishing Categorization Integrity

Judging the credibility of the categorization involves posing a number of critical questions. First, how well do the categories capture the richness of the data? Second, how coherent is the internal constitution of the categories? Third, how distinct is each category from the other categories? Fourth, how were the categories created and tested? To address these questions of integrity, researchers have developed a number of strategies to help themselves and external reviewers to render their assessments more readily and effectively.

Researchers should carefully document all analytic decisions that lead to the creation of categories. These documents or memos help to form what is commonly called an audit or a decision trail that provides evidence to support the integrity of the coding, categorization, and interpretive choices made throughout the qualitative data analysis process. Researchers can further improve the content and credibility of their studies by opening up their categorization records for verification from external or third parties in the form of peer review (i.e., to independent referees) or member checking (i.e., to participants who provided the material on which the analysis is being conducted).
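In practice the decision trail can be as simple as a dated log of analytic choices that can later be exported for peer review or member checking. The sketch below is a hypothetical Python illustration; the field names and file format are assumptions, not conventions of any particular software package.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List
import json

@dataclass
class AnalyticDecision:
    timestamp: str            # when the decision was made
    action: str               # e.g., "merged codes", "renamed category"
    rationale: str            # the researcher's reasoning, in plain language
    affected_items: List[str] # codes or categories involved

class DecisionTrail:
    def __init__(self):
        self.entries: List[AnalyticDecision] = []

    def record(self, action: str, rationale: str, affected_items: List[str]):
        self.entries.append(AnalyticDecision(
            timestamp=datetime.now().isoformat(timespec="seconds"),
            action=action,
            rationale=rationale,
            affected_items=affected_items,
        ))

    def export(self, path: str):
        """Write the trail to a JSON file that can be shared with peer
        reviewers or with participants during member checking."""
        with open(path, "w") as f:
            json.dump([vars(e) for e in self.entries], f, indent=2)

trail = DecisionTrail()
trail.record("merged codes", "Both codes describe loss of informal contact.",
             ["isolation", "loneliness"])
trail.export("decision_trail.json")
```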

Marc Constas developed a comprehensive system for helping researchers to document the category creation process for internal and external review. His system consists of three components: origination (i.e., where does the responsibility reside for the creation of the categories?), verification (i.e., how are the categories justified?), and nomination (i.e., what are the sources of categories' names?). He also asked that researchers share when these decisions were made (i.e., before the data collection began, after the data were collected, or throughout the data collection process).

Another important process in the construction of categories and the establishment of their credibility is to systematically maintain the connections between the codes and categories and the empirical evidence found in the data themselves. Exemplary quotations and excerpts should always remain in contact with their respective codings and categorizations. This contact should also be extended to the publication of the findings so that editors, reviewers, and readers can judge the merits of any categorization based on artifacts from the phenomenon in question (e.g., direct quotations). If done well, the juxtaposition of well-articulated descriptions of categories with rich and vivid exemplary quotations or observations can create a credible account of the findings of a study and a meaningful contextualization of both the categories and the data they are offered to represent.
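One lightweight way to keep exemplary excerpts attached to their categories through to write-up is to store descriptions and quotations together and generate the reporting text from that single record. The snippet below is a self-contained, hypothetical Python sketch; the category name, description, and quotations are invented for illustration.

```python
# Each category keeps its description and exemplar quotations side by side,
# so the written account can be produced directly from the analytic record.
categories = {
    "Reduced social contact": {
        "description": "Participants describe fewer informal exchanges "
                       "with colleagues after moving to remote work.",
        "exemplars": [
            "I rarely hear from my co-workers since going remote.",
            "Evenings feel emptier than they used to.",
        ],
    },
}

def render_findings(categories: dict) -> str:
    """Juxtapose each category description with its exemplar quotations."""
    sections = []
    for name, entry in categories.items():
        quotes = "\n".join(f'  "{q}"' for q in entry["exemplars"])
        sections.append(f"{name}\n{entry['description']}\n{quotes}")
    return "\n\n".join(sections)

print(render_findings(categories))
```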

Ron Chenail

See also Codes and Coding; Constant Comparison; Content Analysis; Core Category; Themes

Further Readings

Constas, M. A. (1992). Qualitative analysis as a public event: The documentation of category development procedures. American Educational Research Journal, 29, 253–266.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.

CENTER FOR INTERPRETIVE AND QUALITATIVE RESEARCH

The Center for Interpretive and Qualitative Research (CIQR, pronounced “seeker”) at Duquesne University is special both for how it began and for what it does.

A number of faculty members were aware that a large percentage of the scholars in the liberal arts, health sciences, education, and other schools at Duquesne used interpretive and qualitative methods in their research. Moreover, the university’s psychology department already had an international reputation for its PhD program in phenomenological psychology.

Faculty at Duquesne decided that a center devoted to interpretive and qualitative methods would facilitate communication between these faculty members and their students and, in turn, would fulfill a need for intellectual community as well as present information on a variety of interpretive and qualitative research methods. During the summer of 1999, a group of Duquesne scholars wrote a proposal for such a center, including the term interpretive in the title to emphasize that qualitative methods used in literature, philosophy, and other humanities departments would be of an importance equal to those undertaken in the social and behavioral sciences. This grassroots effort was aided by the dean of the College and Graduate School of Liberal Arts at the time. After winning approval from the relevant graduate and dean committees, the group was granted "center" status.

The work of the center revolves around several structures. The first of these is a monthly meeting in which faculty and graduate students from Duquesne and other universities in the area present their work, focusing on their methods as well as the phenomena they are investigating. Typically, a half-hour presentation is followed by an hour of lively discussion in these well-attended meetings. The second structure is an invitation to an internationally known scholar each semester. The scholar gives a public talk and then a smaller symposium that concentrates on methodology. The third structure is a CIQR certificate program, where a certificate is offered to those graduate students who take specified method-oriented courses from the general curriculum at the university and then a special proseminar. The proseminar requires that the students engage in and jointly discuss research projects that they are undertaking. They then present their work to CIQR members at a meeting for that purpose. After only a year, this program had already granted certificates to 10 graduate students and 1 faculty member. The center also plans to engage in community action research.

The CIQR website includes a description of the center, the original proposals for the center and its certificate program, a list of all the CIQR external speakers and their topics, a description of all the monthly presentations, announcements of coming events, a newsletter, a list of the subcommittees along with their members and functions, and a sign-up procedure for those wishing to become CIQR members.

Fred Evans

Websites

Center for Interpretive and Qualitative Research: http://www.ciqr.duq.edu