4.16 Credibility and trustworthiness in qualitative interpretive research

This section first considers why, in qualitative research, the issues of validity and reliability are better addressed through the terms credibility and trustworthiness, and then explains how these concepts apply in this study, in relation to both the phenomenological and the phenomenographic components. In addition, methodological triangulation and reflexive engagement with my own positionality are central threads in the further procedures used to enhance the validity of both phases of the study.

To ensure the quality of research, questions of validity and reliability must be confronted. A plethora of definitions for each term exists in the literature, making clarity on these aspects a challenge (Creswell & Miller, 2000). In the context of qualitative research, however, these terms have been replaced with more appropriate terms, such as credibility and trustworthiness, because consistent and replicable results are not possible where “human nature is never static” (Merriam, 1998).

Qualitative research is generally conducted in naturalistic contexts and is aimed at describing a particular phenomenon or experience; its findings are therefore often unique and impossible to generalise. Generalisability in qualitative research can instead be interpreted as comparability and transferability, in the sense that the findings from one context may be generalised to another similar context, or that theories developed in one setting may be transferred to another. It is not the researcher’s task, however, to provide “an index of transferability”. Rather, by providing sufficiently thick data and description, the researcher enables readers and users of the research to determine whether transferability is possible (Lincoln & Guba, 1985).

“Internal validity” may be seen as akin to “truth value”, relating to the researcher’s confidence in the truth of the findings for the subjects or informants and the context of the study (Lincoln & Guba, 1985), or to the fact that the findings or explanations can be sustained by the data (Cohen et al., 2007, p. 150). In qualitative research this aspect is most often interpreted as “credibility”; that is, qualitative researchers need to demonstrate that their studies are credible (Creswell & Miller, 2000) or are aimed at gaining knowledge and understanding of the nature of the phenomenon being studied (Krefting, 1991). The abundant use of quotations from the data in this study, together with the “cross referencing” between data sets, served to achieve this confirmatory function in the analysis.

Validity attaches to the accounts, or the meanings and inferences drawn from the data, and the qualitative researcher must maintain the “utmost fidelity” to the self-reporting of those researched (Blumenfeld-Jones, 1995). Lincoln and Guba (1985) propose methods such as prolonged engagement in the field, persistent observation, triangulation, peer debriefing, negative case analysis and member checking as strategies to improve the credibility of qualitative research. Winter (2000) recommends addressing this aspect through the honesty, depth, richness and scope of the data achieved, the participants approached, the extent of the triangulation and the disinterestedness or objectivity of the researcher. I believe that my extended engagement in the field, my honesty in disclosing my own positioning, and the richness and scope of the data elicited through the access I had to participants all provide evidence of my “utmost fidelity” and reinforce the validity of the study.

Denzin’s (1997) “methodological triangulation” was employed to compare the data from the (phenomenological) interviews with the Deans against the curriculum documents from the various universities, so that the same object of study, the law curriculum as implemented, was examined from different perspectives. The preliminary baseline curricular comparison was used to develop the interview schedules for the Deans’ interviews, during which I was able to confirm or refute the data in the comparative table. Similarly, the data from the ex-Deans (Data Set 2: Task Group members) often aligned closely with and confirmed the data from the current Deans (Data Set 3). In the phenomenographic interviews, the same “cross referencing” of various aspects of the data was possible between the data elicited from graduates and the data elicited from their employers. If findings are said to be “artefacts of a specific method of data collection”, then the use of contrasting methods reduces the chances of any consistency being attributable to similarities of method (Lin, 1976).

Specifically in phenomenographic research, the emphasis is placed on communicative and pragmatic validity (Kvale, 1996). The former relates to the researcher being able to argue for a defensible interpretation, selected from a range of possible interpretations and based upon appropriate research methods, with the final interpretation adjudged by the research community or the intended audience for the findings (Kvale, 1996; Uljens, 1996). Booth (1992) states that the validity of phenomenographic studies concerns the researcher’s justification for presenting the outcome space, and the claims based on those results, as credible and trustworthy.

Cope (2000) adds that the justification of validity lies in a full and open account of a study’s methods and results, whilst the judgement of credibility and trustworthiness then lies with the person reading the study. He details the factors that should be disclosed fully: the researcher’s background (Burns, 1994); the characteristics of the participants; the justification for the design of the interview questions; the steps taken to collect unbiased data; the attempts to approach data analysis with an open mind, without imposing preconceived notions; the data analysis methods, carefully detailed; the process for checking and controlling the interpretation (Sandberg, 1997); and the presentation of results, illustrated by selected quotes supporting the descriptions of the categories, in such a way as to permit informed scrutiny (Cope, 2004). Each of these considerations has been specifically addressed at the various stages of the phenomenographic study.

Sandberg (1997) has proposed the practice of “interpretive awareness”, in which the researcher describes how she has controlled and checked her interpretation throughout the process as a means of enhancing validity: this requires researchers “to acknowledge and explicitly deal with our subjectivity through the research process instead of overlooking it”. This guideline was followed as closely as possible at each stage of the process.

Pragmatic validity checks in phenomenographic research include the extent to which the research outcomes are regarded as useful (Kvale, 1996) and how meaningful they are in providing “knowledge” to their intended audience, such as providing “insights into more effective ways of operating in the world” (Marton & Booth, 1997). Entwistle (1997) explains that “[f]or researchers in higher education, however, the test is generally not its theoretical purity, but its value in producing useful insights into teaching and learning”.

This has been particularly important for me, since I have already been asked to participate in, and comment critically on, future projections for legal education through the SALDA deliberations and my role in the research project with the CHE on the LLB curriculum. I have assisted their researchers by directing them to relevant literature and have presented my work-in-progress at two conferences.

Reliability has been variously defined as the “reproducibility of the measurements…stability” (Lehner, 1996). Reliability in phenomenographic studies does not mean replicability of results, because this would mean that the categories in the outcome space could be replicated, when it is unlikely that two researchers, given the same set of data, would interpret it according to the same categories. Johansson, Marton & Svensson comment that although broad methodological principles are followed,

the open, explorative nature of data collection and the interpretive nature of data analysis mean that the intricacies of the method applied by different researchers will not be the same. Data analysis, in particular, involves a researcher constituting some relationship with the data (1985).

A further reliability check is, once again, for the researcher to make her interpretive steps clear to readers, fully detailing the interpretive stages and providing illustrative examples (Kvale, 1996). This prescription was followed by adopting a critical approach toward my own interpretations and consistently holding my own assumptions and presuppositions in check (Åkerlind, 2002). Not only do these practices enhance reliability, but they also add to the depth, detail, verisimilitude and richness of the study.

The data in the second phase were drawn from interviews with graduates, as well as interviews with their employers. The aim of using these two sources was to enable me to compare the graduates’ perceptions of how the law curriculum prepared them for professional practice with their employers’ perceptions of the same issue. Thus, I employed the technique of crystallisation, which Merriam (1998) recommends; Richardson prefers this term to “triangulation” in qualitative research, because

the crystal combines symmetry and substance with an infinite variety of shapes, substances, transmutations, multi-dimensionalities and angles of approach. Crystals grow, change, and alter, but are not amorphous…. Crystallization provides us with a deepened, complex, thoroughly partial understanding of the topic (Richardson, 1994, p. 934).

By reviewing the data constantly through the theoretical lens I had selected, interpreting reflexively, and paying attention to the “multiple constructed realities” (Lincoln & Guba, 1985), I was able to develop insights and an interpretation that provided a multi-faceted understanding of the topic, which also served to validate the research.