CHAPTER SIX DISCUSSION
6.8. Credibility of the research
This research is evaluated according to the criteria of Silverman (2001), who argues that credible qualitative research should be judged according to the validity and reliability of the knowledge it produces. Not all qualitative researchers agree on this, with some proposing a new terminology for the standards of qualitative research, namely credibility, verifiability and transferability (Ulin et al., 2002), and others, writing from a feminist research paradigm, refuting any claims of 'scientific objectivity' (Silverman, 2001). It is argued, however, that although qualitative research has a different logic from positivist quantitative research, it may still be subject to rigorous and critical analysis in terms of the validity and reliability constructs (Silverman, 2001).
(a) Reliability of the interviews
The key criterion for reliable interview data and findings is low-inference description, that is, direct engagement with the verbal data rather than reliance on the researcher's reconstructions or personal perspectives (Silverman, 2001). The interview data were reliable in terms of the following three aspects (Silverman, 2001). First, all interviews were audio-taped and each recording was listened to several times by the researcher. Second, there was systematic transcription of the tapes 'according to the needs of reliable analysis' (Silverman, 2001). Twenty interviews and one focus group discussion were transcribed verbatim, with selections used from the remaining nine interviews, on the basis that the role of 'peripheral' data was to confirm or disconfirm the 'core' data of the study (Silverman, 2001). Third, long extracts from the data have been presented in the research report so that the words of the interview participants remain the basis for discussion. In terms of the second criterion, the transcription of the data from School B was done slightly differently from that of School A. The researcher transcribed the interviews for School A, whereas the School B data involved translation, which rendered some transcription conventions, such as marks for pauses and emphasis, less meaningful. This shortcoming was not considered serious enough to call reliability into question, as the interview data remained meaningful, although it is noted that the data for School B would have been richer had these conventions been included. See Appendix 2 for transcription codes.
(b) Reliability of the visual data
The photographic data may be considered textual data and were analysed through a quantitative content analysis. The two key issues of reliability with textual data are the precision of category definitions and the accurate counting of category instances in a standardized way (Silverman, 2001). Reliability was enhanced by conducting the content analysis, generating a set of emergent findings, and then completely reworking the analysis with more precise category definitions and systematic counting.
(c) Validity of the findings
Silverman (2001) argues that although it is optimal to use multiple approaches to data collection and analysis, validity through the triangulation of data drawn from different methods may be limited. However, it is argued that in this study the quantitative and qualitative methods were complementary. Although each method produced different kinds of data, as would be expected given that content analysis and narrative analysis are perspectives drawn from different theories, it is argued that the 'conversation' between the two sets of data enhanced validity, with each set of data helping to make better sense of the other. It is argued that the mixed-method approach also enhanced the quality of comprehensive data treatment (Silverman, 2001), such that 'anecdotalism' was avoided by providing illustrative examples from throughout the data set and presenting a corpus of data in the content analysis findings.
The design of the research lent itself well to respondent validation (Silverman, 2001) in the sense that the meanings of the visual data were verified by the participants during the interviews. Further respondent validation could have been achieved by taking the findings back to the participant sample. This did not take place within the resources available for this research and may represent a limitation to the validity of the study.
Appropriate tabulations also enhance validity by avoiding the selective presentation of fragments of data (Silverman, 2001). The tabulation of sample characteristics and content analysis findings situated the illustrative data in relation to the complete data set.
Deviant case analysis was employed during the study and was integrated with the purposive sampling strategy through deliberate attempts to recruit participants who might challenge the findings, as recommended by Silverman (2001). Examples of such deviant case analysis are the contrasts between focus group and interview responses, or the contrasting perspectives of two participants on the same topic, for example Sibusiso and Nkosinathi on the subject of school sport. The search for tropes in the narrative analysis also contributed to both deviant case analysis and constant comparison (Silverman, 2001), as identifying similar or differing tropes across the narratives guided the researcher in comparing participants for similarity and difference. Moving from a smaller to a larger data set may also support validity (Silverman, 2001). In this research, this occurred through purposively expanding the initial sample on the basis of questions and trends that emerged from the data. Constant comparison and deviant case analysis involve moving to and fro between different aspects of the data (Silverman, 2001). It is argued that this occurred effectively in the 'dialogue' between the content analysis and the narrative analysis. It also occurred through the replication of the process in two research settings.
(d) Further aspects of credibility
Prolonged engagement and persistent observation are considered indicators of credibility in qualitative research (Creswell, 1998). In this research, participants were engaged in a prolonged process that involved meeting with the researchers on several different days during the course of the data collection. All participants had at least three meetings with the researchers during the course of the study. Peer review and debriefing may also enhance credibility (Creswell, 1998). Debriefing and discussion between the researcher and the interviewers offered peer supervision and reflexivity. The researcher also spent time debriefing and discussing the study with two supervisors, and was concurrently involved as an interviewer in a study of adolescent masculinity among the visually impaired, an additional involvement that provided further opportunity for peer discussion and reflexivity.
Clarifying researcher bias is an important aspect of credibility (Creswell, 1998). As the researcher, I was aware that my identity as a privileged middle-class white male was significant, and over the course of the research my understanding of gender and cultural processes has shifted on the basis of prolonged engagement with the subject matter and the study itself. Thus, in listening to the interviews now, I can in retrospect identify some masculinist biases and implicit 'blind spots' in relation to cultural diversity that would not be present were I to do the research again, having 'grown with the study'.
External audits and rich, thick description are an aid to credibility (Creswell, 1998). The presentation of preliminary findings at a conference midway through the research allowed for critical review and the development of the content analysis. A journal article was written on the basis of preliminary findings from the narrative analysis, and its review by the supervisor and co-supervisor and its submission for publication provided a useful audit of the process and its product, especially in terms of thinking critically about the relevance of the study. Presenting emergent findings is considered a useful way to enhance the credibility of qualitative research (Silverman, 2000). The appended illustrative data and narrative analysis schematics are included to enrich the description and retain voice-centredness in the findings. Findings were framed as far as possible in the actual words of the participants to enhance what may be called low-inference description (Silverman, 2001).