

and interpretations from previous literature and theoretical reviews of the subject, in order to obtain their tested feedback and experience and thus refine the study's questions so as to elicit better answers.

Strauss and Corbin (1998) and Fraenkel and Wallen (2015) used the term trustworthiness in qualitative studies to refer to both the credibility and validity of qualitative data. “Trustworthiness and its components replace more conventional views of reliability and validity” (Cohen, Manion & Morrison 2018, p. 279). Several strategies were adopted in the data analysis process to strengthen the trustworthiness of the document analysis and interview findings. To test the trustworthiness of the interview questions, the researcher sought the review and advice of informed risk management practitioners and professionals from outside the academic field, who functioned as external validators.

In summary, the researcher used multiple measures to enhance the reliability and validity of the study. These measures included the adoption of a widely recognised research methodology (the mixed-method approach); reference to previous proven studies that used the same mixed-method design and were conducted in multiple contexts; reference to the opinions of a group of respondent experts in the field of ERM and academic effectiveness; and, finally, comparison of the findings of this study with the existing literature and established theory (Miles, Huberman & Saldaña 2014).

3.8 Analysis Methods Implementation

3.8.1 Quantitative Data Analysis

The data analysis method used in the quantitative section is mainly descriptive and statistical in nature.

Descriptive statistical data analysis suits the nature of convenience sampling in surveys and allows the researcher to describe the information contained in and obtained from scores and numbers (Fraenkel & Wallen 2015). This is typical of the overall data analysis in the quantitative section of the study.

“Sometimes simple frequencies and descriptive statistics may speak for themselves, and the careful portrayal of descriptive data may be important” (Cohen, Manion & Morrison 2018, p. 727). The researcher followed this data analysis design in order to “describe, summarize, or make sense of a particular set of data” (Johnson & Christensen 2014, p. 528). In this sense, in order to answer RQ1, the researcher arranged the data obtained from the questionnaire in more interpretable formats, such as frequency distributions, measures of the mean and median, and visually illustrative figures, bar graphs, and descriptive charts and tables, for better interpretation and representation of the data.
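In the study itself these summaries were produced in SPSS. Purely as an illustration of what a frequency distribution, mode, mean and median are, the following self-contained Python sketch computes them for a set of invented Likert-scale responses (all values are hypothetical):

```python
from collections import Counter
from statistics import mean, median, mode

# Hypothetical Likert-scale responses (1 = strongly disagree ... 5 = strongly agree)
responses = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

freq = Counter(responses)                            # frequency distribution
print("Frequencies:", dict(sorted(freq.items())))    # {2: 1, 3: 2, 4: 5, 5: 2}
print("Mode:", mode(responses))                      # 4 (the most frequent score)
print("Mean:", mean(responses))                      # 3.8
print("Median:", median(responses))                  # 4.0
```

The frequency counts correspond to the frequency column of a descriptive table, and could equally be plotted as a bar graph of the kind referred to above.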

For the analysis of the quantitative data obtained from the questionnaire, the researcher used several descriptive statistical concepts and tests available in SPSS, since SPSS “can generate results and report them back to the researcher as descriptive statistics or as graphed information” (Creswell 2014, p. xx).

The descriptive statistical analysis included the mode, the mean, the median, the minimum and maximum scores, the range, the variance and standard deviation, and the standard error. Additionally, the researcher used a number of statistical tests in SPSS to analyse the survey data, such as Cronbach’s Alpha coefficient for the reliability test and the non-parametric Mann-Whitney U test to examine the bivariate relationship between major variables (i.e., public vs. private universities). Fraenkel and Wallen (2015, p. 233) defined the Mann-Whitney test and justified its use as “a nonparametric alternative to the t-test used when a researcher wishes to analyse ranked data. The researcher intermingles the scores of the two groups and then ranks them as if they were all from just one group”. Although some literature highlights the advantage of non-parametric tests as “being tailored to particular institutional, departmental and individual circumstances” (Cohen, Manion & Morrison 2018, p. 565), and the Mann-Whitney U test is the most widely used non-parametric “equivalent of the independent t-test”, there is still good evidence in the education and business literature that researchers “believe that non-parametric tests have less power than their parametric counterparts” (Field 2009, p. 540). To conduct bivariate comparisons of the major variables identified in the quantitative data, the researcher used the Mann-Whitney U test (a non-parametric test) because the data from the questionnaire did not meet the conditions required for a parametric analysis such as the t-test. The Mann-Whitney U test serves a similar function to the t-test, though with different statistical power. According to Field (2009, p. 344), “the t-test can be biased when the assumption of normality is not met”, and most importantly when the data are not randomly obtained. In other words, the data collected by the survey questionnaire in this study do not have the characteristics of parametric quantitative data, namely 1) normal distribution, 2) homogeneity of variance, 3) interval measurement between test scores, and 4) independence of variables (Field 2009, p. 133). For these reasons, and based on the nature of the data, the researcher resorted to the Mann-Whitney U test, which Cohen, Manion and Morrison (2018, p. 794) describe thus: “the non-parametric equivalent[s] of the t-test are the Mann-Whitney U test for two independent samples”.
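The two tests named above were run in SPSS. As a hedged sketch of what they compute, the following self-contained Python functions implement Cronbach’s Alpha and the Mann-Whitney U statistic from their textbook definitions; the score lists at the bottom are invented for illustration, and SPSS additionally reports significance values that this sketch omits:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's Alpha from its definition.

    items: one list of scores per questionnaire item, with respondents
    in the same order in every list.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent totals
    item_variance = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_variance / pvariance(totals))

def mann_whitney_u(x, y):
    """Smaller of the two Mann-Whitney U statistics.

    All scores are pooled and ranked as one group (average ranks for ties),
    exactly as Fraenkel and Wallen describe, and U is derived from the
    rank sum of the first group.
    """
    pooled = sorted((value, index) for index, value in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                                     # extend a run of tied scores
        average_rank = (i + j) / 2 + 1                 # ranks are 1-based
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = average_rank
        i = j + 1
    rank_sum_x = sum(ranks[:len(x)])                   # x occupies indices 0..len(x)-1
    u1 = rank_sum_x - len(x) * (len(x) + 1) / 2
    return min(u1, len(x) * len(y) - u1)

# Invented example scores (e.g. public vs. private university respondents)
public = [3, 4, 4, 5, 2]
private = [4, 5, 5, 3, 5]
print("U =", mann_whitney_u(public, private))
print("alpha =", cronbach_alpha([[4, 3, 5, 2], [4, 2, 5, 3], [5, 3, 4, 2]]))
```

A U near zero indicates that one group's scores almost entirely rank below the other's; SPSS converts the statistic into an exact or asymptotic p-value, a step not mirrored here.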

The researcher then downloaded the data into a database spreadsheet and a set of tables for further analysis. Coding, grouping and cleaning the data obtained from the three question groups of the questionnaire (A, B, and C) in relation to RQ1 helped the researcher feed the data into SPSS and get the descriptive analysis results. The sets of figures, graphs and tables produced from the descriptive analysis were utilised by the researcher to integrate the results of the questionnaire into the research, in preparation for the document analysis and interviews.
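The coding, grouping and cleaning step can be pictured with a minimal sketch; the response rows, code values and group labels below are hypothetical, and the actual work was done in a spreadsheet and SPSS:

```python
# Hypothetical raw survey rows: (respondent_id, question_id, answer_text);
# question ids beginning with A, B or C mirror the questionnaire's three groups.
LIKERT_CODES = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
                "Agree": 4, "Strongly agree": 5}

raw = [
    ("R1", "A1", "Agree"),
    ("R1", "B2", "Strongly agree"),
    ("R2", "A1", "Neutral"),
    ("R2", "C3", ""),                  # incomplete answer, cleaned out below
]

# Coding + cleaning: map answer text to numeric codes, drop unusable rows
cleaned = [(rid, qid, LIKERT_CODES[ans])
           for rid, qid, ans in raw if ans in LIKERT_CODES]

# Grouping: collect coded answers by question group (first letter of the id)
groups = {}
for rid, qid, code in cleaned:
    groups.setdefault(qid[0], []).append(code)

print(groups)                          # {'A': [4, 3], 'B': [5]}
```

The grouped numeric codes correspond to what was fed into SPSS for the descriptive analysis of question groups A, B and C.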

3.8.2 Qualitative Data Analysis

The researcher then used some of the quantitatively collected data to plan the qualitative follow-up phase. Creswell (2014, pp. 224–225) stated that “the quantitative results can not only inform the sampling procedure, but it can also point toward the types of qualitative questions to ask participants in the second phase”. The researcher’s qualitative questions (based on RQ2 and RQ3) were general and open-ended, but firmly grounded in the database formulated through the quantitative phase. The majority of qualitative data-analysis techniques rely on the standard and common techniques used in qualitative research, most notably content analysis and thematic coding (Creswell 2014).

Therefore, the major analysis technique the researcher adopted for the document analysis and interview answers was the interactive model of data collection and analysis, first proposed and explained by Miles and Huberman (1984, 1992, 1994), and later developed and expanded by Miles, Huberman and Saldaña (2014). This model treats the collection and analysis of qualitative data as “an interactive, cyclical process” consisting of three steps: data reduction, data display, and conclusion drawing and verification (Miles & Huberman 1994, p. 12). The model has been utilised by the majority of qualitative researchers, as well as by those who adopted the mixed-method research design, where “careful data display (e.g., in graphics and diagrams) is an important element of data reduction and selection” (Fraenkel & Wallen 2015, p. 648). It involves simplifying, summarising and abstracting the data in shorter written formats (data reduction); putting the reduced data into an organised, accessible display such as charts or matrices (data display); and drawing conclusions based on the reduced and displayed data, with verification being the final step, through which the researcher tests the meaning emerging from the data (Miles & Huberman 1994). For both document analysis and interviews, the researcher utilised the interactive model by applying the three steps explained above as essential parts of the thematic analysis strategies of coding and categorising.

The researcher followed this model because it is the most commonly used and most frequently cited model in the literature. It is comprehensive in the sense that it covers all areas of thematic analysis and suits the theoretical and conceptual frameworks of the study. The researcher found it convenient to follow this flow of qualitative data analysis once the quantitative data had been fully analysed through descriptive statistics. Through this model, the collection and thematic analysis of the qualitative data on faculty members’ and academic administrators’ ERM perceptions were built up more logically, and the questions put to participants were more focused and informative. The results obtained from the follow-up qualitative data are interpreted in a dedicated discussion section, following the pattern of first reporting the first-phase quantitative results and then the second-phase qualitative results. In this pattern of interpretation, the qualitative findings, including the document analysis themes, help to explain the quantitative results. In this sense, the researcher avoids merging the two databases (Creswell 2014), as merging them would create confusion and deprive the follow-up phase of its value and significance. The main objective of this form of interpretation is to introduce the document analysis themes and other qualitative data as support that adds depth and insight to the quantitative results. Finally, in the discussion section dedicated to the interpretation of both phases’ data, the researcher explains in what way the qualitative results support, expand on or explain the quantitative results.

For data reduction and display, the two major components of the interactive model, thematic content analysis techniques were used in both the document analysis and interview stages. As the name implies, content analysis is suitable for this type of qualitative instrument, since it “enables the researcher to study the human behaviour in an indirect way” and is “the study of the usually, but not necessarily, written contents of a communication” (Fraenkel & Wallen 2015, p. 472). The researcher also opted for content analysis because it can easily be used in conjunction with other data analysis techniques (Fraenkel & Wallen 2015). The researcher followed the thematic coding procedure common in qualitative research, using the NVivo application (Software Version 12). In using this method of data analysis, the researcher started by preparing and organising the selected data for analysis, then moved to reading and reflecting on the data, and finally coded and categorised the data into thematic “bracketing chunks” based on the language used by the participants (Creswell 2014, pp. 197–198), with the aid of NVivo 12. Saldaña (2009, p. 3) defined a code in qualitative inquiry as “a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data”. The data considered in this type of analysis may consist of “interview transcripts, participant observation, field notes, journals, documents, literature, artifacts, photographs, video, websites, email correspondence, and so on” (Saldaña 2009, p. 3). In this sense, the results of the document analysis and interviews were analysed and displayed with the aim of answering RQ2 and RQ3, which concern the current ERM policies and practices adopted in UAE HEIs and how academic administrators’ and faculty members’ responses regarding the implemented ERM practices help propose a workable set of guidelines for a more effective ERM framework.
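The coding work itself was done in NVivo 12. Outside NVivo, the reduction of coded segments into theme frequencies, the kind of output that then feeds the data display step, can be pictured with a small hypothetical sketch (the codes and segments below are invented):

```python
from collections import Counter

# Hypothetical coded interview segments: during analysis each segment of a
# transcript is assigned one or more short codes (all codes here are invented).
coded_segments = [
    {"speaker": "faculty member",         "codes": ["risk awareness", "training"]},
    {"speaker": "academic administrator", "codes": ["risk awareness"]},
    {"speaker": "faculty member",         "codes": ["policy gaps", "training"]},
]

# Data reduction: collapse the coded segments into theme frequencies,
# ready to be displayed as a table or chart (data display).
theme_counts = Counter(code for seg in coded_segments for code in seg["codes"])
print(theme_counts.most_common())
```

The resulting frequency list is the simplest form of the categorised output that, in the study, was produced and displayed through NVivo.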

3.8.3 Summary of Data Analysis Techniques

In summary, the researcher selected the explanatory mixed-method study design, in which the quantitative and qualitative databases are analysed and interpreted separately: for the quantitative data obtained through the questionnaire, descriptive statistical analysis was used; for the document analysis, content analysis and thematic coding techniques were used; and, finally, for the interviews, the thematic analysis strategies of coding and categorising were employed.

Table 3.7 shows the different data analysis techniques the researcher adopted in the study.

Table 3.7 – Summary of Data Collection and Data Analysis Techniques

Study Stage | Data Collection Method | Data Analysis
Quantitative Data Collection (1) | Survey – Structured Questionnaire (RQ1) | Statistical: Descriptive
Qualitative Data Collection (1) | Semi-Structured Interviews (RQ1) | The Interactive Model: Thematic Analysis: Coding & Categorising
Qualitative Data Collection (1) | Document Analysis (RQ2) | The Interactive Model: Content Analysis: Thematic Coding & Categorising
Qualitative Data Collection (2) | Semi-Structured Interviews (RQ3) | The Interactive Model: Thematic Analysis: Coding & Categorising
Connecting and interpreting the quantitative and qualitative results | – | Discussion of results and findings
