According to Benard (1994), data analysis refers to ‘searching for patterns and ideas that help to explain the existence of those patterns’ (p 360). Chilisa and Preece (2005) point out that ‘the purpose of analyzing and synthesizing data is to make sense out of disaggregated information, showing relationships, their root causes and possible solutions’ (p 56).

Although data collection and analysis are discussed under different headings in this study, the two are not completely distinct phases. Blanche et al. (2006) point out:

There is no clear point at which data collection stops and analysis begins. Rather, there is a gradual fading out of the one and a fading in of the other, so that you are mainly collecting data and towards the end you are mainly analyzing what you have collected (p 321).

In qualitative research there are reasons why data analysis cannot be left until the end of data collection. Cohen et al. (2000) add:

Qualitative research amasses huge amounts of data, and early analysis reduces the problem of data overload by selecting out significant features for future focus (p 147).

As a result, data collection and analysis occurred concurrently (Baxter & Jack, 2008), although at the beginning data collection was the main focus. As data collection gradually wound down, data analysis became my main focus.

4.6.1 Narrative analysis

The main analysis technique used in this study was narrative analysis. This refers to ‘a family of methods for interpreting texts that have in common a storied form’ (Riessman, 2008). Although narrative analysts are a diverse group with different perspectives, most agree that narrative analysis allows them to interrogate oral, written and visual data effectively.

Riessman (2008) proposes four models of narrative analysis, namely: thematic, structural, dialogic/performance and visual analysis. The two models deemed relevant for analyzing text and non-verbal communication are thematic and visual analysis.

Thematic analysis puts emphasis on the content of data and the recurrent or underlying themes that exist within and across data.

In working towards thematic and visual analysis, the articulations of participants had to be transcribed. As this study adopted the notion of ‘verstehen’, it was critical to ensure that transcription did not become just a record of data, but ‘a record of social encounter’ (Cohen et al., 2000, p 281). One of the ways to do this was to consider the verbal communication contained in the tapes without neglecting the visual and non-verbal aspects of the life history interview. According to Cohen et al. (2000), this helps to minimize the decontextualisation of data from ‘the social, interactive, dynamic and fluid dimensions of their source’ (p 282). Riessman (2008) argues that words are only one form of communication; there are others, including gesture, body movement, sound, images and other aesthetic representations. This study did not ignore these visual and non-verbal aspects of data.

Transcribing data verbatim from audio recordings into written format was a worthwhile experience. It gave me an opportunity to revisit the data, to familiarize myself with participants’ articulations, and to make sense of, and reflect on, the overall meaning (Cohen & Manion, 1989).

Thematic analysis is preceded by content analysis. Nieuwenhuis (2006) defines content analysis as a ‘systematic approach to qualitative data analysis that identifies and summarizes message content’ (p 116). Moyo (2007) stresses that analyzing content involves checking the presence or repetition of certain words or phrases in texts in order to make inferences about the author of the text.

Moyo (2007) points out that the key characteristic of content analysis is coding. Coding means organizing different codes so that themes and patterns in participants’ behaviour can be identified (Aronson, 1994). Thereafter, three qualitative coding methods were used; the data was revisited on three occasions, each for a different purpose. The first occasion was open coding. According to Neuman (2000), this is the first stage, in which the researcher scrutinizes the field notes, interview schedules or any other documents, focusing on the actual data and assigning codes to themes. Flick (2006) refers to this stage as the crystallization and condensation of participants’ literal words.

The second occasion, according to Cohen et al. (2007), is called axial coding. This is the stage in which I revisited the initial codes, or preliminary concepts, to see whether these could be rearranged or improved in accordance with the research questions. I re-examined the research questions and the questions that had been used as prompts during the interview sessions, in order to identify where links could be made. Through this process, codes were explored, their interrelationships examined, and codes and categories compared with existing theory (ibid).

The third and last occasion involved scanning the data and the previous codes. This stage is called selective coding, because after all the data coding has been done the researcher looks selectively for cases that illustrate themes and makes comparisons and contrasts (Neuman, 2000). Selective coding was crucial, as it gave me an opportunity to finalize the organization of themes and to confirm the accuracy of the coding.

These three stages of re-examination, which required reading the transcribed data many times over in order to become familiar with, and immersed in, its content, were important. I believed that, besides familiarization and immersion, multiple readings helped to identify silences and gaps and thus enhanced coherence in a life story. Todorova (2007), in her study of the experiences and lives of childless women in Bulgaria, confirms this argument when she states that multiple readings of interview and transcribed scripts help to ‘unglue multiple voices of the said and the unsaid’ (p 232).

When the themes had been analyzed, I examined the transcribed data again in order to write a narrative for each participant. Hitchcock and Hughes (1989) identify three modes of presenting a life history. The first they call the ‘naturalistic’ first-person life history, in which the life history is told in the words of the participant. The second is called the ‘thematically edited’ life history, in which the researcher arranges the life history into themes and headings while ensuring that the words of the participant are retained. This is confirmed by Cohen et al. (2000), who argue that in this mode of presentation the participants’ words ‘are retained intact’ and are presented by the researcher under themes or headings in ‘a chapter-by-chapter format’ (p 166). They add that in life history research a researcher needs to decide early enough how far he or she wants to ‘intrude upon assembled data’ (p 167).

In the third mode the feel and authenticity of the participants’ words are retained, but the researcher sifts, distils, edits and interprets them to write a story (Mthiyane, 2007). I oscillated between the second and third modes of presentation. In writing up this thesis, the life stories of the 12 participants were organized into themes and presented in Chapters 6 and 7. I used a combination of extensive accounts of the participants’ words and my own words as the researcher, to ensure that the original, authentic meaning of the data was preserved.

4.6.2 Documentary analysis

Bloor and Wood (2006) cite three approaches for analyzing data elicited from documents.

The first approach, called content analysis, describes the characteristics of the document’s content by examining ‘who says what, to whom and with what effect’ (Bloor & Wood, 2006, p 58). I used some of Huberman’s (1993) themes as a lens to examine lecturers’ reactions and experiences in relation to some of the government directives on reform, and how these have affected their personal and professional lives.

The second approach is the interpretive approach to documentary data, which explores the meaning within the content. This approach was deemed relevant in this study because it helped to examine the way meaning was assigned by ‘authors and consumers’ of the document (Bloor & Wood, 2006, p 58).

Thirdly, documentary data was analyzed using the critical approach, which focuses on the relationship between the document and aspects of social structure, that is, class, social control and power. Using Elias’s (1970) concepts of power, function, figurations and habitus, this approach provided a useful framework for examining the role of official documents and how these regulate social order.