

3.6 Data Collection Instruments

3.6.2 Quantitative Instruments – Questionnaire

Since this study begins with the quantitative phase as the ‘high priority’ strand (see Figure 3.2), the survey design is the major research design adopted by the researcher to obtain cross-sectional, descriptive data from the selected population of academic administrators and faculty members. Descriptive surveys in the form of closed-ended questionnaires are common in educational research where the researcher’s intention is simply to describe the characteristics of a sample, in this case faculty members, instructors and administrators in charge of ERM in selected UAE HEIs, at a specific point in time (Mertens 2005; Creswell 2014; Cohen, Manion & Morrison 2018; Saunders, Lewis & Thornhill 2019). The researcher focused mainly on the descriptive survey, delivered through a structured questionnaire, as the basic instrument for collecting the quantitative data. As stated by Fraenkel and Wallen (2015, p. 21), “survey research involves describing the characteristics of a group by means of such instruments as interview questions, questionnaires, and tests”. A questionnaire was used for the cross-sectional quantitative portion of the study because a “survey will be conducted to determine whether the information found is more generalisable or specific to certain unique corporations” (Saunders, Lewis & Thornhill 2019, p. 115). Creswell (2014) strongly posited that answering questions through surveys (i.e., questionnaires) is the ideal way to obtain quantitative results.

In the initial quantitative phase, the researcher aimed to collect results from 140 survey respondents. However, due to the limitations identified in detail in a later section of this thesis, the number of confirmed survey responses fell to 101 (a response rate of approximately 72%). The survey was based on a questionnaire on the same topic that had been administered to 140 respondents in several US universities (Lundquist 2015). Even though that tool had proved valid and reliable, the researcher revised some of the questions and put them through a pilot testing process with some major respondents in order to enhance the validity and reliability of the survey tool and obtain results better suited to the UAE higher education context.

The questions of the survey questionnaire were determined by two factors. The first was reliance on previous literature in the field: the overall structure and some questions of the questionnaire were inspired by studies such as Lundquist (2015), Deck (2015) and Eryilmaz (2018). The second was that the research questions and objectives of the study shaped both the nature of the questions and their structure. In practical terms, the researcher structured the survey questions as follows. The structured questionnaire consisted of seven (n = 7) demographic questions and thirty-two (n = 32) major questions directed to 140 participants from the selected UAE HEIs, designed so that the questions were grouped according to their thematic content and the two targeted groups of participants, and so that their answers related to the research purpose and questions. Survey Items 18 to 34 were survey-based, perception-centred statements testing risk maturity in the context of a risk maturity model’s (RMM) adoption and utilisation (from initial to very mature), “developed based on a review of risk maturity models and using elements of ISO 31000 regarding culture and maturity to form the statements” (Lundquist 2015, p. 71).

For Survey Items 18 to 34, respondents were asked to select an answer from A (initial) to D (very mature), representing four different levels of maturity for one aspect of the ERM process and its effectiveness. In relation to the first and major research question (RQ1), the questionnaire items were arranged by the researcher into three interconnected groups based on their major thematic categories.
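
As an illustration of how such categorical maturity answers are typically prepared for analysis, the short sketch below recodes the A-to-D options into numeric scores from 1 to 4. It is a minimal example in Python rather than the SPSS workflow used in this study, and the column names (e.g. item_18) are hypothetical.

```python
# Illustrative sketch only: recoding the A-D maturity answers for Items 18-34
# into numeric scores 1-4 before analysis. Column names such as "item_18" are
# hypothetical and do not come from the study's actual dataset.
import pandas as pd

MATURITY_SCALE = {"A": 1, "B": 2, "C": 3, "D": 4}  # A = initial ... D = very mature

def recode_maturity_items(responses: pd.DataFrame) -> pd.DataFrame:
    """Map the categorical A-D answers for Items 18 to 34 to numeric maturity scores."""
    recoded = responses.copy()
    for item in range(18, 35):  # Items 18 to 34 inclusive
        column = f"item_{item}"
        recoded[column] = recoded[column].map(MATURITY_SCALE)
    return recoded

# Example usage with two hypothetical respondents
raw = pd.DataFrame({f"item_{i}": ["A", "D"] for i in range(18, 35)})
print(recode_maturity_items(raw).mean())  # mean maturity score per item
```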

Group A questions (see Table 3.5) used a Likert-based style to measure the faculty members’ and academic administrator participants’ perceptions of risks in their institutions, and of how those risks are identified, classified and managed in relation to quality, accreditation and academic performance processes. The other part of the Group A questions sought responses from the participants on the effectiveness of ERM implementation in their HEIs (ERM Adoption and Implementation). Group B questions (see Table 3.5) were directed at the participants comprising faculty members and instructors in the selected HEIs to seek their perceptions of, responses on and involvement in the effectiveness of ERM adoption and implementation in their academic institution (Effectiveness of ERM Adoption and Implementation). Group C questions (see Table 3.5) sought the participants’ perceptions of and feedback on the ERM policies and guidelines already implemented in their institutions, and on how effective these may be in relation to their academic institution (ERM Integration). The questionnaire was first piloted with a convenience-based sample of one (n = 1) participant from each university to check its reliability and to make enhancements and changes to the questions based on the respondents’ feedback. The revised and finalised questionnaire was then administered online to the targeted respondents. Table 3.4 shows how the quantitative data collection process was performed by the researcher.

Table 3.4 – Quantitative Data Collection Process

Step No.   Description
Step 1     Drafted questionnaire based on previously tested research and major respondents’ feedback
Step 2     Piloted and tested the survey instrument
Step 3     Revised and refined the survey instrument based on the pilot test
Step 4     Administered the online survey instrument

The survey questionnaire was administered through the SurveyMonkey application and targeted 140 academic administrators and faculty members conveniently selected from their HEIs. The participants were selected on the basis of either their risk management responsibility at their HEIs or their knowledge and awareness of ERM. They were asked questions about their perceptions of the effectiveness of ERM implementation processes at their institutions, as Table 3.5 shows.

Table 3.5 – Responses Sought from the Survey Questions (Survey Questions / Respondent Perceptions Targeted)

Group A Questions (ERM Adoption): The participants’ perceptions of the nature of ERM adoption in their academic institution; also directed towards the participants’ knowledge and awareness of the steps taken at their institutions for the identification, implementation and evaluation of ERM practices.

Group B Questions (Effectiveness of ERM Adoption, Implementation & Integration):
- the participants’ perceptions of and involvement in the effectiveness of ERM adoption and implementation in their academic institution, and
- testing the maturity level of the respondents’ HEIs regarding the application, implementation and integration of the ERM framework and concepts.

Group C Questions (ERM Integration): The participants’ perceptions and feedback on the already-implemented ERM policies and guidelines adopted in their institutions, and how effective they may be in relation to their academic institutions.

The responses from the survey participants were then converted into statistical data and analysed using descriptive statistical analyses and non-parametric test procedures in SPSS, the specialised statistical software, as will be further explained in Section 3.8.1.
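
For readers unfamiliar with this kind of workflow, the sketch below outlines comparable steps in Python (pandas and SciPy) rather than SPSS: descriptive statistics for a score variable and a non-parametric Mann-Whitney U comparison between the two respondent groups. The variable names (role, erm_effectiveness), the toy data and the choice of test are illustrative assumptions, not the study’s actual SPSS procedures.

```python
# Minimal sketch of descriptive statistics and a non-parametric group comparison,
# shown in Python (pandas/SciPy) instead of SPSS. The column names "role" and
# "erm_effectiveness" and the toy data are hypothetical.
import pandas as pd
from scipy import stats

def summarise_and_compare(df: pd.DataFrame) -> None:
    # Descriptive statistics for the effectiveness scores
    print(df["erm_effectiveness"].describe())

    # Non-parametric comparison (Mann-Whitney U) between the two respondent groups
    faculty = df.loc[df["role"] == "faculty", "erm_effectiveness"]
    admins = df.loc[df["role"] == "administrator", "erm_effectiveness"]
    u_stat, p_value = stats.mannwhitneyu(faculty, admins, alternative="two-sided")
    print(f"Mann-Whitney U = {u_stat:.2f}, p = {p_value:.4f}")

# Example usage with toy data
toy = pd.DataFrame({
    "role": ["faculty", "faculty", "administrator", "administrator", "faculty"],
    "erm_effectiveness": [4, 3, 5, 4, 2],
})
summarise_and_compare(toy)
```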
