
4.4. Research quality and rigour

Skewness per construct (histogram column not reproduced here):

Construct               Skewness
Learnability            -0.551219
Flexibility              0.1530561
Information quality     -0.208317
User experience         -0.063069

The distributions for efficiency, satisfaction, flexibility, information quality, and user experience were approximately symmetric, indicating that most responses were neutral. Learnability, effectiveness, and ease of use were moderately skewed; that is, more respondents found the eModeration prototype easy to learn, effective, and easy to use. Based on the histograms and the skewness values, the deviations from normality were not severe, and the assumptions of the statistical techniques applied were therefore satisfied.
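
As a brief, hedged illustration of how per-construct skewness values such as those above can be computed and classified, the Python sketch below uses pandas. It is not the analysis procedure used in this study; the construct columns and response values are hypothetical, and the conventional cut-offs of |skewness| below 0.5 (approximately symmetric) and between 0.5 and 1.0 (moderately skewed) are assumed.

import pandas as pd

def classify_skew(value: float) -> str:
    """Label a skewness value using conventional rule-of-thumb cut-offs."""
    magnitude = abs(value)
    if magnitude < 0.5:
        return "approximately symmetric"
    if magnitude <= 1.0:
        return "moderately skewed"
    return "highly skewed"

# Hypothetical Likert-scale responses: one row per participant, one column per construct.
responses = pd.DataFrame({
    "learnability":        [4, 5, 4, 5, 3, 4, 5, 4],
    "flexibility":         [3, 3, 4, 2, 3, 4, 3, 3],
    "information_quality": [4, 3, 3, 4, 3, 4, 3, 4],
    "user_experience":     [3, 4, 3, 3, 4, 3, 4, 3],
})

# DataFrame.skew() returns the sample skewness of each column.
for construct, skew_value in responses.skew().items():
    print(f"{construct}: skewness = {skew_value:.3f} ({classify_skew(skew_value)})")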

4.4.2. Validity and reliability of qualitative data

Credibility, transferability, dependability, and confirmability are viewed as the qualitative equivalents to the quantitative criteria of internal validity, external validity and generalizability, reliability, and objectivity, respectively (Morrow, 2005; Shenton, 2004). The following measures demonstrate the application of rigour in this study.

A pilot study was initially conducted amongst a sample group of three participants to test the validity and reliability of the PD workshops and to refine the data collection strategy by streamlining the activities of the workshops.

The following elements of trustworthiness were ensured.

 Credibility: In research located within the interpretivist paradigm, credibility refers to “the extent to which data and data analysis are believable, trustworthy or authentic” (Kivunja & Kuyini, 2017, p. 34). Participants should find that the results are a true reflection of their contributions (Yilmaz, 2013). The analytic credibility of the research can be ensured by the researcher providing a coherent argument and discussing all relevant results, even if some results were unexpected (Nowell et al., 2017). The credibility strategies used in this study, namely peer debriefing, data triangulation, member checks, negative case analysis, and purposive sampling, are elaborated on below:

• Peer debriefing: feedback was sought from the research supervisors to test insights based on the analysis of the data. The findings were presented to a few peers. Their feedback assisted in improving the quality of the research. Additionally, peer perceptions were sought when developing the conclusion of the study.

• Data triangulation was achieved by incorporating different research instruments to capture multiple perspectives (Morrow, 2005). This strategy reduced bias and enabled the researcher to cross-check the integrity of responses. Additionally, individual viewpoints were verified against others, allowing the researcher to construct a rich picture of attitudes and behaviours from a range of different people (Fossey et al., 2002).

• Member checks were used to include the voices of respondents in the analysis and interpretation of the data collected to eliminate researcher bias (Thomas, 2010).

• Negative case analysis: the researcher reports on data that “contradicts the researcher’s expectations” (Anney, 2014, p. 277). Negative case analysis was used to improve the rigour of the study by providing plausible alternative explanations for any contradictions.

• Purposive sampling was used to focus on participants who are knowledgeable about the issue under investigation, thus ensuring more in-depth findings (Palinkas et al., 2015; Teddlie & Yu, 2007).

 Transferability: Kivunja and Kuyini (2017) recommend that the researcher provide sufficient detail about the context of the study and their findings, so that others may relate the findings to their specific contexts. Accordingly, transferability was facilitated by providing a thick description of the context within which the study was carried out and through the use of purposeful sampling. Information on the demographics of participants, the data collection methods used, the number and duration of data collection sessions, and the time period over which data were collected was recorded to allow other researchers to assess the extent to which the findings may hold true in other settings.

 Dependability ensures that the processes used to derive the findings are made explicit. It is important that the process is repeatable and consistent across “time, researchers and analysis techniques” (Morrow, 2005, p. 252). Saunders et al. (2019) argue that, when using an interpretivist paradigm, the focus of the research will most likely change as the research progresses. Ensuring dependability in this situation requires that the researcher record all changes so that a dependable account of the research focus is provided. Accordingly, the research design and its implementation were reported on, the methods of data collection and analysis were meticulously explained, and analytic memos were created in Atlas.ti to record the codes, categories, and themes (Morrow, 2005; Saunders et al., 2019).

 Confirmability requires an acknowledgement that research is never truly objective. The integrity of the findings lies in the data, and the findings should represent the “situation being researched” (Morrow, 2005, p. 252). Confirmability was addressed by triangulation to ensure that the findings represent participants’ experiences and ideas rather than those of the researcher. A diagrammatic audit trail was presented to trace the course of the research (Shenton, 2004; Anney, 2014).

 Authenticity was ensured by including a range of voices, together with dissenting views, in participants’ own words to further explain the researcher’s interpretations (Fossey et al., 2002).

 Adequacy: the researcher must collate the data, analysis, and findings in such a way that the reader is able to validate the adequacy of the findings (Nowell et al., 2017). All raw data were saved on a password-protected machine in suitably named folders. The dates on which data were collected were recorded to create an audit trail and to confirm the data analysis and interpretations. Referential adequacy was tested by reviewing the raw data and comparing them to the developed themes to ensure that all conclusions were corroborated by the data (Nowell et al., 2017).