3.3 Research Methodology
3.3.5 Data Analysis
Data analysis comprised two components, namely a quantitative and a qualitative component, as required by the Mixed Methods Research Design employed and the data collection instruments indicated above.
3.3.5.1 Quantitative Component: Statistical Methodology
IBM PASW version 19.0 was used to capture and analyse the data. A p-value < 0.05 was considered statistically significant. The principal technique, used mainly for the analysis in chapter four, was analysis of variance (ANOVA), which tests whether the means of a variable of interest differ between groups. If significant differences (at or close to the p = 0.050 level) were picked up, a follow-up test (Duncan’s multiple range test for differences of means) was performed to determine which means differed and to what extent. Duncan’s new multiple range test (MRT) is a multiple comparison (or pairwise comparison) test used to ascertain whether three or more means are significantly different in an analysis of variance (see Kirk, 1995).
Duncan's MRT is especially protective against false negative (Type II) errors, at the expense of a greater risk of the false positive (Type I) errors that other methods (Bonferroni, Scheffé, Tukey, etc.) guard against (see Dallal, 2001; Steel et al., 1997). The choice of Duncan's MRT was based on the ranking of multiple comparison methods by conservatism (Dallal, 2001). It is a test that does not protect the experimentwise α level of p = 0.05 (Dallal, 2001; Steel et al., 1997).
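The analyses themselves were run in PASW. Purely as an illustration of the logic of this step, the sketch below shows a one-way ANOVA followed by a post-hoc pairwise comparison in Python. The GPA values and group labels are invented, and Tukey's HSD is used as a stand-in post-hoc test because Duncan's MRT is not available in the common scipy/statsmodels libraries.

```python
# Illustrative only: the study itself used IBM PASW 19.0, not Python.
# One-way ANOVA followed by a post-hoc pairwise comparison;
# Tukey's HSD stands in for Duncan's MRT here.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical GPA scores for three school-background groups
data = pd.DataFrame({
    "gpa":   [55, 60, 58, 52, 65, 70, 68, 72, 61, 59, 63, 66],
    "group": ["rural"] * 4 + ["township"] * 4 + ["urban"] * 4,
})

# Omnibus test: do the group means differ?
groups = [g["gpa"].values for _, g in data.groupby("group")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Follow-up pairwise comparisons if the omnibus test is significant
if p_value < 0.05:
    print(pairwise_tukeyhsd(data["gpa"], data["group"], alpha=0.05))
```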
Independent samples t-tests were used to compare mean GPA between two independent groups (e.g., male versus female students). This technique was utilised mainly in chapter five of this study.
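As a hedged illustration (the actual tests were run in PASW), an independent samples t-test on invented GPA values for two groups might look as follows in Python.

```python
# Illustrative sketch of an independent samples t-test (the study used PASW).
from scipy import stats

# Hypothetical GPA values for two independent groups
male_gpa   = [58, 62, 55, 67, 60, 71, 64]
female_gpa = [63, 66, 59, 72, 68, 65, 70]

t_stat, p_value = stats.ttest_ind(male_gpa, female_gpa, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Mean GPA differs significantly between the two groups.")
```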
Pearson’s chi-square tests were used to compare categorical variables between two or more groups (see chapter five). Mann-Whitney tests were used to compare the median number of courses failed between those with and without a father, stratified by year cohort. Mann-Whitney tests were also used when two independent groups were being compared with respect to median income and number of earners (refer to chapter five).
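Again for illustration only, the sketch below shows a Pearson's chi-square test on a hypothetical contingency table and a Mann-Whitney U test on hypothetical counts of courses failed; the figures are invented, and the study's own tests were run in PASW.

```python
# Illustrative sketch of the chi-square and Mann-Whitney U tests (the study used PASW).
from scipy import stats

# Pearson's chi-square: hypothetical 2x2 contingency table
# (e.g. gender by pass/fail counts)
table = [[30, 10],   # male: pass, fail
         [25, 15]]   # female: pass, fail
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, p = {p_chi:.3f}")

# Mann-Whitney U: hypothetical counts of courses failed for two groups
with_father    = [0, 1, 0, 2, 1, 0, 1]
without_father = [2, 3, 1, 4, 2, 3, 2]
u_stat, p_mw = stats.mannwhitneyu(with_father, without_father, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.2f}, p = {p_mw:.3f}")
```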
In chapter four, most of my statistical analyses did not yield significant positive relationships between or amongst variables because of the small sample size.
3.3.5.2 Qualitative Component: The Explication¹⁵ of Interview Data Procedures¹⁶ in this Study
This study used a mixed-methods approach to elucidate the conditions of disadvantaged students. Within this approach, interviews with eight students were used to elicit accounts from participants about their lived experiences during their high school and university years.
The interview schedule contained 17 questions categorised under six sub-headings, namely: pre-university experience (questions 1, 2, 3, and 4); first-year experience at university (5, 6, 7, and 8); current living and material conditions at university (9 and 10); the teaching and learning environment at university (11 and 12); the spending habits of students (13 and 14); and career aspirations after graduation (15, 16, and 17). It was designed after reviewing the salient findings from the questionnaire survey. Next, I delineate the interview procedures followed in this study.
Transcription
The first step in explicating my interview data was to have the digital recordings of the interviews transcribed. This included the literal statements and also noted important non-verbal and para-linguistic communications. As I read the transcripts, I noted units of general meaning in the right margins of the transcripts; these were later coded into NVIVO as free nodes.
¹⁵ I am uncomfortable with using the term ‘analysing’ when it comes to explicating interview data, because ‘analysing’ etymologically means ‘breaking into parts’. This is dangerous because the context of the whole (the gestalt) gets lost, and the researcher is tempted to speak for the data rather than letting the data speak for itself. In this study, borrowing from Giorgi, I have employed the term ‘explication’, which means an investigation of the constituents of a phenomenon while always maintaining the context of the whole. That said, my focus is on the lived experiences of students from disadvantaged schools.
¹⁶ These procedures should not be viewed sui generis; in actual fact they do not exist as separate steps, but were set out here for technical purposes to give the reader a picture of how I went about explicating my interview data. The explication procedures applied should be dictated by the phenomenon under study, and this perspective dictated the procedure used in this study.
Delineating Units of general meaning – Free Nodes in NVIVO
In this study this process took place at nodes, specifically free nodes. As the name suggests, these are nodes ‘free’ of organisation; in other words, they are containers of ‘loose’ ideas which are not conceptually related to other nodes in my project (Bazeley, 2007; QSR, 2008). Immediately after transcribing the interviews, I generated themes by reading through the transcripts and penning them in the margins of each transcript, before typing them and including them in the main transcript for coding in NVIVO. The main goals of coding are to identify the categories for thinking about your data and to gather in a category all the data about it (QSR, 2008). In this study, nodes became containers or places to store and code the data of individual interviews. Further, they contained the evidence within my sources that supported them. Thus, according to QSR (2008), “creating nodes and exploring nodes is a way to think ‘up’ from the data and arrive at a higher level of explanations and accounts”. At the free nodes, general ideas were gathered from the individual interview transcripts, following the structure of the interview schedule (see Appendix B: Interview schedule).
At this stage the meanings are those experienced and described by the participants, irrespective of whether they are later found to be essential, contextual or tangential to the structure of the experience (Hycner, 1985). The end-product is called a unit of general meaning which is defined as:
“... those words, phrases, non-verbal or para-linguistic communications which express a unique and coherent meaning (irrespective of the research question)”
(Hycner, 1985).
At the free nodes, all cases and general themes about cases (for example, 1st year, 2nd year, 3rd year and 4th year) were coded after importing the transcriptions into NVIVO sources called ‘internals’. Here I gathered all the information about each case, for instance their career aspirations after graduation; living and material conditions at university; the teaching and learning environment at university; pre-university experience; and first-year experience at university, organised in alignment with the interview schedule attached as Appendix B. Coding in NVIVO is the process of bringing together passages in your data that seem to exemplify an idea or concept, represented in the project as nodes (see QSR, 2008).
Clustering units of relevant meaning – At Tree Nodes
From the general display of coded data, I moved to a more structured, ‘logical’ representation of the data: the tree nodes. Tree nodes were used in this study to represent the concepts and categories in my project which were logically related, as they can be organised in a hierarchical structure (i.e. category, subcategory) (see QSR, 2008; Bazeley, 2007; Creswell, 2003; Welsh, 2002). After clustering themes of relevant meaning in tree nodes in NVIVO, these themes were explicated and illuminated textually. Thus, all these themes are directly derived from the data (the transcriptions), as also reflected in the categories in the nodes. They then allow the textual data to speak to these themes as they emerged from the data; that is, allowing the data to speak for itself. The textual narratives (or quotes from participants) give these themes their textual content and context as they emerge.
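Although the coding itself was done through NVIVO's interface rather than in code, the category/subcategory logic of tree nodes can be pictured as a nested structure holding the coded passages. The sketch below is a conceptual illustration only; the theme names and quotes are invented.

```python
# Conceptual sketch of the category/subcategory logic of NVIVO tree nodes.
# Theme names and quotes are invented for illustration only.
tree_nodes = {
    "pre-university experience": {
        "school resources": [
            "We had no science laboratory at my school.",
        ],
        "teacher support": [
            "Our maths teacher left in the middle of the year.",
        ],
    },
    "living and material conditions at university": {
        "accommodation": [
            "I share a small room with two other students.",
        ],
        "finances": [
            "My bursary only covers tuition, not food.",
        ],
    },
}

# Walking the hierarchy keeps each coded passage within the context of its
# category and subcategory (the 'gestalt' the explication tries to preserve).
for category, subcategories in tree_nodes.items():
    for subcategory, passages in subcategories.items():
        for passage in passages:
            print(f"{category} > {subcategory}: {passage}")
```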
Contextualization of themes
After the general and unique themes had been clarified, it was helpful for this researcher to situate these themes back within the overall contexts or horizons from which they emerged (Hycner, 1976). As Giorgi (1971) states: "...the horizon is essential for the understanding of the phenomenon because the role that the phenomenon plays within the context, even if it is only implicitly recognized, is one of the determiners of the meaning of the phenomenon”.
This meant going back to the source from which they emerged, the transcriptions, and looking at the ‘ordering’ provided by the interview schedule.
Determining themes from Clusters of meaning and Delineating units of meaning relevant to the research question – Beyond Nodes
This was a critical phase in the explication of the data, as it addresses the research question. Once the units of general meaning had been established and the contextualization of themes considered, the researcher was ready to address the research question to them. The researcher addressed the research question to the units of general meaning to ascertain whether what the participants had said responded to and illuminated the research question or the objectives of the study (Hycner, 1985). The researcher interrogated all the clusters of meaning to determine whether there were one or more central themes expressing the essence of those clusters (and that portion of the transcript); this culminated in the textual explications displayed in chapter six and explained in the immediately preceding step. This procedure addressed more of the gestalt of the relevant segment and the clusters of meaning. As shown in chapter six, the clusters of meaning are presented textually through statements from participants. It should also be noted that statements which were clearly irrelevant to the phenomenon being studied were not recorded. Where there was ambiguity or uncertainty as to whether a general unit of meaning was relevant to the research question, I ‘erred’ on the side of including it.
I also used the model explorer tool in NVIVO, which is useful for mapping out diagrammatically how the themes relate to each other (see Welsh, 2002), to model my project. This culminated in a graphical display of the contours of disadvantage and academic progress, presented in histograms in chapter six.