
3.7 Reliability, Validity and Trustworthiness

The interviews were semi-structured, involving “generally open-ended questions that are few in number and intend to elicit views and opinions from the participants” (Creswell 2014, p. 190). The faculty members and risk management administrators were asked to answer ten (n = 10) open-ended questions (Appendix 3), to identify the existing ERM and/or QA policies and processes applied in their respective institutions, and to describe their perceptions of the effectiveness of those current policies and processes. The interviews were conducted using an electronic recording device, as well as online video call and meeting applications (namely Microsoft Teams and Zoom), after obtaining consent from the interviewees. The researcher asked the questions orally, with the answers recorded and then coded in writing at a later stage. All the interviews were conducted in English and then transcribed verbatim by the researcher into archived texts.

The main themes that informed the interviews were very similar to those that informed the document analysis, since both phases of the study were designed to answer the same research questions.

The thematic categories derived from the interviews covered the three major conceptual areas of this study: ERM adoption, implementation and integration, as reflected and practised in the available ERM policies and manuals of the respective HEIs. In addition to these themes, the major RQ1 also required the researcher to gather the respondents’ perceptions of their existing ERM and/or QA policies and processes, and of how these are used as indicators of the effectiveness of their academic processes.

The reasons for adopting this validation strategy were, first, to give the researcher the freedom to reflect on the validity of each method’s instrument on its own, and second, that the study’s main purpose, as well as the major RQ1, is addressed and answered mainly through a quantitative tool that deals with numbers as predetermined facts. This gave the researcher more freedom to focus on quantitative validity.

Additionally, this study is mainly deductive and objective in nature: the researcher’s reliance on numbers and statistics is a crucial factor in the findings of the quantitative study, whereas the instrumentation and findings of the qualitative study call for different kinds of “credibility and confirmability” criteria (Cohen, Manion & Morrison 2018, p. 248).

3.7.1 Quantitative Data Reliability and Validity

For the quantitative data, relying on multiple validity techniques and tests, the researcher first addressed reliability, which means that the research approach adopted by the researcher is consistent across different researchers and projects (Gibbs 2007; Creswell 2014). According to Cohen, Manion and Morrison (2018, p. 245), reliability is inherent in questionnaires as an instrument of quantitative data collection, giving the questionnaire an advantage “over interviews, for instance”, since it tends “to be more reliable; because it is anonymous, it encourages greater honesty”. As defined by Fraenkel and Wallen (2015, p. 154), reliability is “the consistency of the scores obtained – how consistent they are for each individual from one administration of an instrument to another and from one set of items to another”.

According to them, data are considered reliable if others using the same data collection method at different times, under similar conditions, would obtain the same results (Ibid., p. 145). In this sense, reliability in this study is achieved by examining and comparing the results of the data analyses, first across the different HEIs that are the subject of this study, and second across two different layers of respondents, namely the academic administrators and the faculty members. However, since this study does not involve empirical tests, hypotheses or experiments, reliability in the standard sense of testing the “consistency or stability of test scores” is not used (Creswell 2014, p. 155). In the quantitative phase of the data collection, the researcher relied mainly on the reliability of the scores and numbers obtained from the questionnaire in order to reach a meaningful interpretation of the data (Ibid.).

As stated earlier, the researcher adopted multiple validity strategies to validate the instrument reliability and the findings’ validity of each of the two major phases of the study. Defined by Cohen, Manion and Morrison (2018, p. 245) as “a demonstration that a particular instrument in fact measures what it purports to measure”, the validity of the quantitative data in this study was tested through different means, the most important of which is reliance on a similar survey tool from previous proven research, as mentioned earlier in the study. The questionnaire used in this study was based on a survey that had been tested, proven valid and conducted in the field of ERM in several United States of America (USA) universities, administered to 124 respondents by Lundquist (2015). The questions in the questionnaire were adapted from that study, which showed reliable results with regard to the maturity levels that indicate the significance of ERM adoption and implementation in USA HEIs. Lundquist’s (2015) results relied on data from thirty-seven (n = 37) administrators from the different universities who responded positively to the majority of the questionnaire items. The results indicated that the majority of her questionnaire items (n = 15):

were rated in the second maturity level, ranging from 2.0 to 2.7. Items in the higher end of the developing level (2.5 – 2.7) indicate that IHEs are experimenting with ERM and that the risk strategy and framework is still under development. While senior administration and boards have an awareness of risk management, the understanding of risk management is limited to a small number of experts on campus who see risk management as essential to achieving the IHE’s objectives (Ibid., p. 86).

The researcher revised some of the items in the questionnaire and ran them through a pilot test with some of the major respondents in order to enhance the validity and reliability of the survey tool and obtain results suited to the UAE higher education context. For example, Lundquist (2015, p. 82) asked the following open-ended question in the context of her quantitative survey: “how do you know if implementation of the ERM framework has reduced, mitigated, or controlled risk, created opportunity, enhanced financial viability and/or resulted in other positive factors?”. Upon consultation with the expert respondents during the pilot study, however, the researcher moved this question to the interviews (Interview Q7), on the grounds that it is open-ended and suited qualitative inquiry more than quantitative. Other questions used by Lundquist (2015) in the survey were similarly moved or removed where the researcher identified them as more qualitative in type than suitable for a quantitative research instrument. One example is Q49, where Lundquist (2015, p. 193) asked the respondents to “list other higher education institutions with ERM programs with whom [they] have consulted or collaborated in the development of [their] ERM program”.

Other means of validity checking included careful sampling, appropriate instrumentation and “appropriate statistical treatments of the data” during the data analysis stage (Cohen, Manion & Morrison 2018, p. 267). These measures included selecting an appropriate sample on the basis of the respondents’ professional expertise and accumulated knowledge, and avoiding inferential statistics for this type of research question, since no inferences, hypotheses, assumptions or causal relations were included or discussed by the researcher. The researcher based the sample, the data collection instrumentation and the data analysis techniques on the premise of avoiding inferences or generalisations beyond what the collected data could support in relation to the findings of the study.

The researcher started by checking the validity of the questionnaire through piloting it among a convenience-based sample of one participant from each selected HEI to verify, as stated before, the instrument’s reliability and to make enhancements and changes to the questions based on the respondents’ feedback and responses.

Ridenour and Newman (2008) stated that establishing validity in mixed-methods research involves connecting the research purpose, questions and methods to reach what they called “the truth value”. This connection of the research purpose, questions and methods was taken into careful consideration throughout the study, as the researcher based the questionnaire instrument (as well as the interviews) on the research questions (RQ1, RQ2 and RQ3) and on the Conceptual Framework adopted by the researcher (ERM Adoption–Implementation–Integration–Effectiveness). Moreover, the validity of scores is essential to quantitative research: “As with all mixed methods studies, the researcher needs to establish the validity of the scores from the quantitative measures and to discuss the validity of the qualitative findings” (Creswell 2014, p. 225). The validity of the scores obtained from all completed questionnaires, as well as the validity of the questionnaire itself as an instrument, was checked through content validity and internal and construct validity checks.

For the quantitative data, internal validity is essential to determine the soundness of the conclusions and numbers reached in the quantitative study by checking that the concepts being studied are accurately measured (Fraenkel & Wallen 2015). To achieve this, the researcher ran the Cronbach’s Alpha test several times and revised the questionnaire items until the internal consistency of the survey items was established. Cohen, Manion and Morrison (2018, p. 270) note that “an alternative measure of reliability as internal consistency is the Cronbach alpha, frequently referred to simply as the alpha coefficient of reliability, or simply the alpha”. In other words, the Cronbach Alpha test indicated that the scales of items in the questionnaire shared “covariance” elements and tended to measure the same “underlying concepts” (Cohen, Manion & Morrison 2018).
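For reference, Cronbach’s alpha for a scale of k items takes the standard form below (the formula itself is not printed in the source; this is the conventional definition):

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]

where \(\sigma^{2}_{Y_i}\) is the variance of item \(i\) and \(\sigma^{2}_{X}\) is the variance of the respondents’ total scores. The coefficient approaches 1 as the items covary more strongly, that is, as they increasingly measure the same underlying concept.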

After several revisions of the survey instrument, the Cronbach’s Alpha value in this study was 0.823, which according to educational and social science researchers means that the items of the questionnaire are highly reliable and, as stated, share covariance elements and tend to measure the same underlying concepts (Fraenkel & Wallen 2015; Cohen, Manion & Morrison 2018):

Table 3.6 – Reliability Test Results of the Questionnaire

Reliability Statistics
Cronbach’s Alpha Coefficient        No. of Items
0.823
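As an illustration of the consistency check reported above, the following is a minimal sketch of how Cronbach’s alpha can be computed; the response matrix shown is hypothetical and does not reproduce the study’s actual questionnaire data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scale scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items.
responses = np.array([
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
])

# Values of roughly 0.7 or above are commonly treated as acceptable;
# the study's reported value of 0.823 falls in the highly reliable range.
print(f"alpha = {cronbach_alpha(responses):.3f}")
```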

The researcher also relied on previously attested and accredited research and studies in the field as a major source of questionnaire items, and then piloted the questionnaire in order to seek specialised feedback and improvements. The researcher examined closely how components from different validated questionnaire instruments might be drawn upon to build the reliability and validity case for the study.

3.7.2 Qualitative Data Validity and Trustworthiness

For the qualitative data, the researcher used content validity (Johnson & Christensen 2014; Cohen, Manion & Morrison 2018) to test whether the document analysis and interview data were “plausible, credible, trustworthy, and therefore defensible” (Johnson & Christensen 2014, p. 207). Defined by Fraenkel and Wallen (2015, p. 148) as “the appropriateness, correctness, meaningfulness, and usefulness of the specific inferences researchers make based on the data they collect”, the validity of the data collected through the qualitative instruments rests on the trustworthiness of the respondents’ responses.

The researcher’s adoption of these techniques was driven by the need, as a researcher, to strengthen the study and consolidate its findings by testing not only the internal validity of the quantitative numbers but also the trustworthiness of the qualitative data.

As stated earlier, the qualitative phase of this study relies on triangulation for data validity purposes, whereby the researcher attempted to provide “a confluence of evidence that breeds credibility” (Eisner 1991, p. 110). Johnson and Christensen (2014) argued that one of the most important reasons why some qualitative research is better than other qualitative research is the presence of reliability and validity in the former and their absence in the latter: “most qualitative researchers argue that some qualitative research studies are better than others, and they use the term validity or trustworthiness to refer to this quality difference” (p. 298). In this study, the researcher used two tests to check the validity of the qualitative data: interpretive validity, “portraying accurately the meanings attached by participants to what is being studied by the researcher”, or what is called the deductive method of data interpretation, and participant feedback, “discussing the researcher’s conclusions with the study participants” (Johnson & Christensen 2014, p. 300). While carrying out the former technique, the researcher sought to understand the inner world and minds of the academic participants, interpreting their reactions and responses to the interview questions, discussions and observations, and reflecting them through themes and categories in the analysis report.

For the latter, the researcher shared the interview questions with two expert participants for review and feedback to ensure their validity. The researcher also shared with the expert participants his viewpoints and interpretations drawn from the literature and theoretical reviews on the subject, in order to obtain their informed feedback and draw on their experience, thereby refining the study’s questions to elicit better answers.

Strauss and Corbin (1998) and Fraenkel and Wallen (2015) used the term trustworthiness in qualitative studies to refer to both the credibility and the validity of qualitative data: “Trustworthiness and its components replace more conventional views of reliability and validity” (Cohen, Manion & Morrison 2018, p. 279). Several strategies were adopted in the data analysis process to enhance the strength and trustworthiness of the document analysis and interview findings. To test the trustworthiness of the interview questions, the researcher sought the review and advice of informed risk management practitioners and professionals from outside the academic field, who functioned as external validators.

In summary, the researcher used multiple measures to enhance the reliability and validity of the study. These included the adoption of a widely recognised research methodology (the mixed-methods approach), reference to previous proven studies using the same mixed-methods design and conducted in multiple contexts, reference to the opinions of a group of respondents who are experts in the field of ERM and academic effectiveness, and finally the comparison of the findings of this study with the existing literature and established theory (Miles, Huberman & Saldaña 2014).

3.8 Analysis Methods Implementation
