3.6 Phase 2: Assessment of existing policies
3.6.4 Phase 2 data collection
3.6.4.1 Policy Delphi Method
Policy Delphi represents a significant departure from the original Delphi in that it seeks to generate as much opinion as possible on the major policy issue (Linstone & Turoff, 1975), rather than consensus. There are three types of Delphi method: classical, decision-making and policy Delphi (Franklin & Hall, 2007). The classical Delphi is used to generate facts about a specific situation, and the decision-making Delphi is used to support collective decision making (Franklin & Hall, 2007). Both the classical and the decision-making Delphi seek consensus from participants, and this is the major difference between them and the Policy Delphi (Franklin & Hall, 2007; Linstone & Turoff, 1975).
The Policy Delphi is a tool for the analysis of policy issues, not for decision making (Linstone & Turoff, 1975). It is premised on gathering as many views as possible and provides a methodical way of comparing these views and the information relating to a specific policy area, which in the current study is access to health sciences education in universities, in the context of the South African government's commitment to the transformation of higher education. Policy Delphi tries to ensure that all possible options are put forward by informed advocates for consideration, and was therefore fitting for use in this study to glean as much opinion as possible on the implementation of access policy in higher education. Varied opinion was sought from participants who are involved with admission policy at the universities, in order to develop guidelines on access to health sciences education in universities in South Africa. This opinion was sought across different categories of staff, both academic and administrative, and across all levels of seniority.
The questionnaire was generated from the qualitative data and e-mailed to a panel of expert participants, who were asked to rate the questions and return their responses; this occurred over two (2) rounds (Sim & Wright, 2000).
a. Number of rounds
The Policy Delphi method involves six (6) rounds which, according to Linstone and Turoff (1975, p. 88), include:
1. Formulation of the issues
2. Exposing the options
3. Determining initial positions on the issues
4. Exploring and obtaining the reasons for disagreements
5. Evaluating the underlying reasons
6. Re-evaluating the options.
However, it is important not to induce participant fatigue by having too many rounds; the literature reports that between two and four rounds is reasonable (Keeney et al., 2006; De Villiers, De Villiers & Kent, 2005), and most Policy Delphi studies aim to keep the rounds limited to three or four (Linstone & Turoff, 1975). This was achieved in the current study by using the information gleaned from the qualitative phase of the study to develop the initial range of categories, and then asking the participants to state their positions and underlying assumptions in the second round (Linstone & Turoff, 1975). Participants were encouraged to add to the initial range of items any that they felt should have been included but were not.
Policy Delphi seeks to elicit as many differing positions as possible, together with the main arguments for and against each position (Turoff, 1975). In analysing the results of the rounds, the researcher decided which positions were supported by most participants, which were a cause for disagreement, and which were not important and could be discarded (Turoff, 1975). Although Policy Delphi does not seek consensus, in deciding what to include in the guideline and what to leave out the researcher must determine what she considers support for a factor and what is considered unimportant with regard to access to health sciences education in universities; not everything can be included in the guideline. Keeney et al. (2006) suggest that when deciding on a consensus rate one might consider the importance of the research topic: for a life-and-death issue one might look for 100% consensus, but for something less critical one might decide on a somewhat lower level. There is little in the literature to guide researchers
using the Delphi Technique on what consensus level to set, and Keeney et al. (2006) suggest it is good practice to set a consensus level before data collection starts. These same authors recognize that if the consensus level is set too low, for example at 51%, then those who fall into the 49% category could be disgruntled and it may be hard to justify the results of the Policy Delphi. For these reasons, a consensus rate of 75% was selected for the current study as indicating importance for access to health sciences education in universities in South Africa, and a rate of 20% for positions deemed unimportant to include in the guidelines. These rates were decided on before the research began (Keeney et al., 2006). The selection of a consensus percentage was crucial, as this determined which positions were considered common and which were considered unimportant and could therefore be discarded during the development of the guidelines. All positions are reported in the final report, which will enable users of the research to understand the varying positions and the importance of some of the positions adopted.
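The pre-set thresholds described above amount to a simple decision rule for each questionnaire item. The sketch below is illustrative only: the function name, data layout and respondent counts are invented for the example and are not taken from the study.

```python
def classify_position(agree_count: int, total: int) -> str:
    """Classify a Policy Delphi item against the study's pre-set thresholds:
    at least 75% agreement marks an important position, at most 20% marks a
    position that may be discarded, and anything in between remains contested."""
    pct = 100 * agree_count / total
    if pct >= 75:
        return "important"
    if pct <= 20:
        return "discard"
    return "contested"

# Hypothetical panel of 40 respondents:
print(classify_position(33, 40))  # 82.5% agreement -> important
print(classify_position(6, 40))   # 15.0% agreement -> discard
print(classify_position(20, 40))  # 50.0% agreement -> contested
```

Making the rule explicit in this way also documents why contested positions (those between the two thresholds) are retained for discussion rather than resolved automatically.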
b. Enhancing response rates
Poor response rates are a feature of quantitative questionnaire research, and it is therefore good practice to have strategies for enhancing respondent participation (Keeney et al., 2006; De Villiers et al., 2005; McKenna, Hasson & Smith, 2002). The length and complexity of the questionnaire also influence response rates (De Villiers et al., 2005; Bowling, 2005). In this study the researcher used a variety of ways to enhance participation. The questionnaire was kept to 25 questions with tick-box answers and space for qualitative responses, which made it easy to complete in a short time. The questionnaire was made available online through SurveyMonkey™ and so delivered directly to the participants' e-mail inboxes. This allowed for convenience and ease in answering: no paper, envelopes or post boxes. Two follow-up reminders were sent to
individuals, two weeks apart, via e-mail through the SurveyMonkey™ facility at the time of each round of questionnaires.
c. Selecting the experts
Purposive sampling was undertaken of participants who were thought to have expert knowledge in the area of access to health sciences education in universities. “Experts” are defined as “specialists in their field” (Goodman, 1987), as individuals who are “knowledgeable and/or influential” (Green et al., 1999; Lemmer, 1998; White, 1991), and as informed individuals (McIlrath, Keeney, McKenna & McLaughlin, 2009). The entire population of the below-mentioned categories of employees in the eight (8) universities offering health sciences education in South Africa was invited to participate in the study.
Members of the panel of experts met one or more of the following criteria:
Table 3.2 Panel membership for Policy Delphi methods
Dean of a Faculty of Health Sciences in a university, or equivalent position involving Health Sciences
Dean of Students in universities
Registrars
Extended programme officers
Admissions Officer/Recruitment Officer in universities
Recruitment/School Liaison Officers in universities
Financial Aid Officer or equivalent in universities
Heads of Departments/Schools within Health Science Faculties at universities
Other person deemed to have expertise in the area of access to universities, identified in the snowball sampling phase of the study.
d. Estimation of timeframe
It was estimated that this phase of the study would take approximately six (6) months to complete. Time is generally underestimated when using the Delphi method, and each round is estimated to take approximately eight weeks (Keeney et al., 2006). Sufficient time is needed to develop the questions, distribute the questionnaires, collect the responses, analyse the data and re-develop the questions for the next round, and so on until no new positions emerge.
3.6.5 Phase 2 data analysis
The questionnaire was analysed in rounds – the results of the first round informing the second round and so on.
Frequencies and descriptive statistics were computed on the data: percentages, measures of central tendency, variance and reliability (Cronbach's alpha). SPSS Version 21 was used to analyse the quantitative data.
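Although the study used SPSS, the same descriptive statistics can be reproduced in any statistical environment. As an illustration only (the Likert-style scores below are invented and are not the study's data), a minimal Python sketch of the key computations, including Cronbach's alpha, might look like this:

```python
from statistics import mean, median, pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals),
    where `items` is a list of columns, one list of scores per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    sum_item_var = sum(pvariance(col) for col in items)
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# Invented scores: rows are questionnaire items, columns are respondents.
items = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]

# Measures of central tendency and variance for the first item:
print(mean(items[0]), median(items[0]), pvariance(items[0]))

# Internal-consistency reliability across the three items:
print(round(cronbach_alpha(items), 3))  # -> 0.818 for this toy data
```

The same alpha would be produced by the RELIABILITY procedure in SPSS on equivalent data; the point of the sketch is simply to make the formula behind the reported "reliability alpha" explicit.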
Both qualitative and quantitative data were used in a sequential qualitative-quantitative analysis employing exploratory data-analytical techniques. Mixed analysis is the term given to analysing data in mixed research (Onwuegbuzie & Combs, 2011). The model for the mixed methods data analysis process suggested by Onwuegbuzie and Teddlie (2003) was used in this study. This model comprises seven (7) stages which are sequential but not necessarily linear (Onwuegbuzie & Teddlie, 2003).
The seven stages were:
1. Data reduction
2. Data display
3. Data transformation
4. Data correlation
5. Data consolidation
6. Data comparison
7. Data integration (Onwuegbuzie & Teddlie, 2003).