Data analysis is described as a set of techniques used to search and categorise useful data from transcriptions and to explore the relationships among the resulting categories (Creswell, 2009; Merriam, 2009; Check & Schutt, 2012). Data analysis involves making sense of the data generated from the fieldwork (Vithal & Jansen, 2006; Hesse-Biber & Leavy, 2011). In the same vein, Cohen et al. (2011) describe data analysis as making sense of data in terms of what the participants say about the situation at hand, and identifying patterns, themes, categories and commonalities in the data. This implies that data is elicited in a raw, unprocessed form (see Fig. 4.1), and that analysis is then a process of searching, summarising and giving meaning to the data in relation to the problem being studied. Data needs to be classified, categorised and interpreted so that it makes sense to the readers. Since I used two methods (interviews and documentary reviews) to elicit data, I tried as far as possible to use similar methods of analysing the data. The sections below therefore explain how the interview and documentary data were analysed.
4.10.1 Analysis of interview data
In this study I adopted the four-step approach to data analysis described by Creswell (2009) to analyse both the documentary and interview data. This general analysis procedure is illustrated in Fig. 4.1 below.
Figure 4.1: Steps of qualitative data analysis (adapted from Creswell, 2009, p. 185)
Firstly, I patiently and carefully transcribed the raw interview data from the voice recorder into text format (Creswell, 2009; Remler & Van Ryzin, 2011; Struwig & Stead, 2013). I preferred to transcribe the data myself because repeated listening to the recordings made me familiar with my data. Secondly, I repeatedly read these transcripts and listened again to the interview recordings (Green et al., 2007; Struwig & Stead, 2013) to make sure that I had accurately transcribed what was recorded. I wanted to be certain that the original flavour of the data was not distorted in this initial phase. Re-reading and annotating the transcripts and making preliminary observations helped me to get a feel for the data. Indeed, by reading through the transcribed data I came to understand its general meaning, and this helped me to identify relevant codes and themes (Creswell, 2009; Merriam, 2009). This brought clarity to the part I played and the parts played by the participants, providing the foundation (Green, 2007; Hesse-Biber & Leavy, 2011; Creswell, 2013) for developing the different stories given by the participants into one traceable account, a clearer picture of what the participants wanted to say about resource demand in remote rural ECD schools.
Thirdly, I generated codes and themes from the transcripts. Coding can be defined as the process of arranging raw data into pieces or sections of transcript before attaching meaning to the data (Creswell, 2009; Merriam, 2009). In the same line of thought, Green (2007) and Hesse-Biber and Leavy (2011) define codes as descriptive tags that are given to pieces or sections of the transcript. Here I noted single words or short phrases in the transcripts and applied labels to them. Coding is, however, more than attaching labels to pieces and sections of the transcript (Struwig & Stead, 2013). These segments were labelled with terms that described the data at different levels of abstraction. I used computer software to mark up the text, and I needed to be clear about what I was asking of the data. The coding procedure was iterative in nature; I used both pre-defined codes and emerging categories. I analysed individual transcripts, then compared the themes within and across transcripts, consistently testing the relationship between the data and my interpretations (Green, 2007).
A thorough cross-examination of the data was done to classify the order in which research participants responded to aspects of resource demand. I further worked through the transcripts, inserting codes and refining the meaning of each code as necessary to discover more information about the study topic. This compelled me to revisit the work that I had coded previously, to ascertain whether those codes still applied. This process of coding was iterative and non-linear. It involved moving forwards and backwards through the transcripts, giving special attention to the research question and considering the theoretical concepts. After coding all the data, I 'cut and pasted' the coded extracts into piles by code, thus taking data extracts out of their original context, putting them together with other examples of data on the same topic, and looking for patterns across the data: the themes. The patterns and relationships I found under these themes formed the basis of my report.
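The logic of this 'cut and paste by code' step can be sketched in a short Python fragment. This sketch is purely illustrative of the grouping logic and is not the software or procedure actually used in the study; the code labels and extracts shown below are hypothetical examples, not data from the participants.

from collections import defaultdict

# Hypothetical coded extracts: (code label, data extract) pairs.
# The labels and extracts are illustrative only, not actual study data.
coded_extracts = [
    ("learning_materials", "Learners share one picture book among ten."),
    ("infrastructure", "There is no purpose-built ECD classroom."),
    ("infrastructure", "The children sit on the floor during lessons."),
    ("staffing", "One paraprofessional handles two ECD classes."),
]

# 'Cut and paste' the extracts into piles by code, taking each extract out of
# its original context and grouping it with other extracts on the same topic.
piles = defaultdict(list)
for code, extract in coded_extracts:
    piles[code].append(extract)

# Review each pile to look for patterns across the data (candidate themes).
for code, extracts in piles.items():
    print(f"{code} ({len(extracts)} extracts)")
    for extract in extracts:
        print(f"  - {extract}")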
Fourthly, I had to deduce and understand the implications of the identified themes. According to Creswell (2009), qualitative research is interpretative by nature. After this analysis has taken place, the researcher's task is to answer the 'so what' question and to offer explanations for the groups that have emerged (Struwig & Stead, 2013). After having transcribed and presented the interview data, I interpreted the meanings of the coded data against the conditions, background and experiences of my study, and compared these findings with information drawn from the literature and theories (Green et al., 2007; Green, 2007; Creswell, 2009). This ability to explain social phenomena helped to support the generalisation of this study to other settings, since it provides better evidence about the topic.
4.10.2 Document analysis
Document analysis is a qualitative research method in which I interpreted the EMIS documents from the study site schools to give voice and meaning to the issue of resource demand for ECD (Burke & Christensen, 2008). I coded their content into themes in a similar way to the interview transcripts. The documentary data was likewise transcribed, but separately from the interview data, and each study site was handled independently. The validation of the accuracy or trustworthiness of the research findings (shown in Fig. 4.1) is continuous and takes place at different stages of the research process, as outlined below: