Maree (2012) defines sampling as the selection of participants for a study from a particular population. A sampling method can be either probability (random) or non-probability (purposive) sampling. This study used purposive and convenience sampling, with the specific purpose of exploring the EFAL knowledge base FP teachers bring to the ACT programme (Maree, 2012).
3.6.1 Sampling methods for the test
Of the 98 students enrolled on the FP ACT programme in 2013, 86 wrote the test; the other 12 students were absent on the day the test was written. The students registered for the Advanced Certificate in Teaching (ACT) programme constituted the sample for this study.
A qualitative approach was adopted, with purposive sampling. Maree (2012) explains that purposive sampling is popular in qualitative research because participants are chosen for particular characteristics, with the clear purpose of obtaining rich data (see also Cohen et al., 2011).
Table 2: Biographical details of the sample regarding grades taught and teaching experience (N = 86)

Grades taught    Percentage    Years of teaching    Percentage
Grade R          48.2          Less than 5          10.0
Grade 1          22.4          6-10                 56.3
Grade 2          9.4           11-15                27.5
Grade 3          15.3          16+                  6.2
Multi-grade      4.7
Total            100%          Total                100%
A total of 86 Foundation Phase teachers completed the test. Table 2 above also shows the teaching experience of the teachers in the sample. Most participants (56.3%) had between six and ten years of teaching experience, and more than a quarter (27.5%) had eleven to fifteen years' experience. Only 10% of the sample had less than five years' experience, and highly experienced teachers with more than fifteen years of experience made up a small proportion of the sample (6.2%). The knowledge base the teachers brought to the ACT programme varied with participants' experience and the grades they taught. It can be observed that the sample did not have much experience of teaching Grades 1 to 3, as the largest proportion of participants taught Grade R, whereas the knowledge base assessed by the test is mostly that of Grades 1 to 3.
The ACT programme runs tutorial sessions at a range of learning centres, including Ladysmith, Pietermaritzburg and the Edgewood campus. ACT students at all the centres were sampled, and they wrote the test at the centres where the programme was run during the first-semester contact sessions. The knowledge assessment test was designed to measure the kind of professional knowledge that Foundation Phase teachers bring to the ACT programme.
The importance of tests lies in the fact that respondents are not passive data providers but subjects of the research. Testing aims to measure aspects such as ability, aptitude, attitude, achievement, competence, diagnostics and personality (Cohen et al., 2011).
Before the test was written by the larger group, a pilot study was conducted in a local school. The pilot was carried out to check the questions for ambiguity and redundancy, and to gauge the length and timing of the test (Cohen et al., 2011).
On the day the test was written, its purpose was explained: to establish the knowledge teachers bring to the programme. The students were made aware that the test would have no impact on their studies, and they were given an opportunity to seek clarity on any of the questions. Teachers signed consent forms.
3.6.2 Sampling for interviews
For the interviews, five participants who taught in rural and urban schools were sampled from the larger sample. Interviewing more participants would have been cumbersome and costly, as interviewees had to be transported to the interview site at UKZN. Interviews were conducted over two days, and each participant was interviewed for 30 minutes. Purposive and convenience sampling was used to select the five interviewees from the students enrolled in the ACT programme: after the test data from the 86 participants had been analysed, five participants were selected, three teaching in rural schools and two in urban schools. The selection favoured participants whose test answers could not be clearly understood, so that further clarity could be obtained from them. The Pietermaritzburg centre was used to select the participants for the interviews, as it was the most convenient. However, this type of sampling is not representative (Maree, 2012).
Turner (2012) asserts that interviews are conducted to obtain in-depth information on participants' viewpoints and experiences of a particular topic, guided by an interview protocol. At the same time, the information gathered in interviews is well rounded and thick (Creswell, 2007; Daymon & Holloway, 2002). Another advantage of interviews is that they draw on multi-sensory channels: nonverbal and verbal, spoken and heard (Cohen et al., 2011). During the interviews for this study, it was possible to see from the participants' body language how they felt about some of the questions asked.
Gall and Borg (2003) highlight three formats for interviews: informal conversation interviews, the general guide approach, and standardised open-ended interviews. Similarly, Maree (2012), in agreement with Gall and Borg (2003), refers to open-ended, semi-structured and structured interviews. Neuman (1997) distinguishes between telephone and face-to-face interviews, while Creswell (2007) refers to face-to-face, one-on-one, in-person interviews, telephone interviews, focus group interviews and email or internet interviews.
Cohen et al. (2011) concur with Maree (2012) and Gall and Borg (2003) that an interview is not a day-to-day conversation; rather, it is structured around the questions asked by the interviewer, and these questions are specific to the topic being studied. For the purposes of this study, the questions asked of interviewees were based on the content knowledge, pedagogical content knowledge and general pedagogical knowledge that teachers bring to the ACT programme.
In this study, semi-structured interview questions were designed (see Appendix 2). Cohen et al. (2011) state that the purpose of semi-structured questions is to provide an opportunity to probe responses further. In this study, semi-structured questions were used to gain a better understanding of the knowledge base teachers have in teaching the First Additional Language.
Face-to-face, one-on-one interviews were conducted with the five interviewees. The interview questions emanated from the teachers' responses to the questions in the larger language test.