
In Chapter 1, I outlined the experiences, assumptions, and positions I brought into this study so that its findings could be viewed with these frameworks in mind. For example, I investigated fee-paying and no-fee schools against the background of my belief that resources contribute to differences in learner success in mathematics in general and in Euclidean geometry in particular.

Research methodology Consideration of rigour and limitations

That said, efforts were not spared to ensure that the findings in this study were rooted in evidence and ‘worth paying attention to’ (Lincoln & Guba, 1985, p. 290). In the course of the research process, I used the terms “reliability” and “validity” to describe attempts to ensure that the results in the quantitative segment of the study were rigorously achieved. However, in considering the quality of the findings of the qualitative segment of the study, I adopted Golafshani’s (2003) approach in using the term “trustworthiness” to encompass both the quantitative notions of reliability and validity.

The term “trustworthiness” is used in this context to refer to the conceptual soundness against which the value of inferences made in the qualitative segment of the study must be judged (Marshall & Rossman, 1995). This decision was taken because the two terms, reliability and validity, are problematic for qualitative research (Guba & Lincoln, 1994). For instance, Cohen, Manion, and Morrison (2011) point out that concerns about replicability or uniformity are meaningless because different researchers studying a single setting may arrive at very different findings; from a qualitative research perspective, reality is multilayered.

4.12.1 Evaluation of the quality of quantitative findings

4.12.1.1 Rigour for the LFUP instrument

The purpose of performing factor analysis was to validate the factor structure proposed in Shongwe and Mudaly’s (2017) study. The validity and reliability of the instrument were determined by evaluating the quality of each item on each scale. In quantitative research, reliability refers to the degree to which an instrument yields consistent findings, while validity refers to the degree to which an instrument accurately measures what it is intended to measure.
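As an illustration of checking a factor structure against item data, the following sketch applies the Kaiser (eigenvalue greater than 1) criterion to the inter-item correlation matrix of simulated scores. This is a simplified exploratory check, not the confirmatory analysis reported in the study, and all data and names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
n_respondents, n_items = 300, 6

# Simulate two correlated clusters of three items each
# (a hypothetical stand-in for multi-factor questionnaire data)
f1 = rng.normal(size=(n_respondents, 1))
f2 = rng.normal(size=(n_respondents, 1))
noise = rng.normal(scale=0.7, size=(n_respondents, n_items))
scores = np.hstack([f1.repeat(3, axis=1), f2.repeat(3, axis=1)]) + noise

# Kaiser criterion: retain factors whose eigenvalue of the
# inter-item correlation matrix exceeds 1.0
corr = np.corrcoef(scores, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_factors = int((eigenvalues > 1.0).sum())
print(n_factors)
```

With the two simulated clusters, the eigenvalue rule recovers two factors; in practice a confirmatory factor analysis would test the specific structure proposed by the source study rather than merely counting eigenvalues.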

Reliability of the LFUP scale was determined by measuring the internal consistency of the subscales and the global scale through Cronbach’s alpha.

However, since “understanding of the functions of proof” is a latent variable and therefore not directly observable, and given the adequate length of the LFUP scale and the sufficiently large sample anticipated in this study, an alpha of .50 or above was tolerated (Kline, 2011). The item-total statistics helped in diagnosing whether there were problems with the items: for instance, whether an item needed to be reverse-coded because it was negatively worded, or deleted to improve reliability, or whether it correlated negatively with the scale. High positive correlations, by contrast, were an indication of the reliability of the LFUP questionnaire. The item-total correlation is the correlation between each item and the overall score of the scale, used as an indication of the internal consistency or homogeneity of the scale; it suggests how far each item contributes to the overall theme being measured (McDowell, 2006).
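The reliability diagnostics described above can be sketched in a few lines of code. The following is an illustrative computation on simulated Likert-style data, not the study’s actual LFUP responses; it shows how a negative corrected item-total correlation flags an item for reverse-coding, and how alpha improves once the item is recoded:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def corrected_item_total(scores: np.ndarray) -> np.ndarray:
    """Each item's correlation with the sum of the remaining items."""
    total = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])

rng = np.random.default_rng(7)
trait = rng.normal(size=(250, 1))
scores = trait + rng.normal(scale=0.8, size=(250, 5))
scores[:, 4] *= -1  # a negatively worded item, not yet reverse-coded

alpha = cronbach_alpha(scores)
r_it = corrected_item_total(scores)

fixed = scores.copy()
fixed[:, 4] *= -1  # reverse-code the flagged item
alpha_fixed = cronbach_alpha(fixed)
```

Here the fifth item’s negative item-total correlation is exactly the diagnostic signal described in the text: once it is reverse-coded, the global alpha rises well above the .50 threshold tolerated in the study.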

For the findings of this study to be useful, the reliability and validity of the LFUP instrument had to be determined in order to demonstrate and communicate the rigour of the research process and the trustworthiness of the research findings (Roberts, Priest, & Traynor, 2006).

Validity of the LFUP scale was established through consideration of three fundamental elements: content validity, criterion-related validity, and construct validity (Long & Johnson, 2000; Saunders, Lewis, & Thornhill, 2012). In addition, the face validity of the LFUP scale was established. Whereas face validity indicates whether, at face value, the questionnaire appears to be assessing the desired qualities, content validity involves judging whether an instrument adequately samples all the relevant or important content or domains (McMillan & Schumacher, 2010). Content validity, which here concerns whether the items in the LFUP questionnaire are appropriate for measuring learners’ functional understanding of proof, was determined through the use of theory on functional understanding of proof.

To establish face validity, the participants were asked to comment on how the instrument looked to them. However, it is also important to obtain expert comments on content validity.

To that end, five mathematics teacher educators were asked to judge the content of the instrument.

Criterion validity, which, like content validity, depends on theory (Muijs, 2004), was determined by considering argumentation ability as being theoretically related to, and a predictor of, functional understanding of proof. Specifically, participants’ scores on the LFUP questionnaire were expected to be related to those they obtained on the AFEG questionnaire. In addition, the theory on argumentation led to the expectation that learners with high self-efficacy levels would hold more informed functional understanding of proof than those who struggle to appreciate the functions of proof. More than three decades ago, Bandura (1977) theorised that a potent influence on learner behaviour is the beliefs that learners hold about their capabilities. Briefly, learners are more likely to have an incentive to learn if they believe that they can succeed in performing a task; they then make an effort and persist in the face of difficulties. The scale on self-efficacy (Appendix B2) was therefore important in improving the validity of the results.

In sum, establishing criterion validity required knowledge of the theory relating to functional understanding of proof so that I could decide which independent variable could be used as a predictor variable. To do this, I needed first to collect data on those factors (functions of proof) from the same respondents to whom the LFUP instrument was administered, and second to statistically measure the relationships among the factors using correlation coefficients and multiple regression.
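The criterion-validity check described here amounts to computing correlation coefficients and fitting a multiple regression. A minimal sketch on simulated data follows; the variable names (`afeg`, `selfeff`, `lfup`) and all values are hypothetical stand-ins, not the study’s actual scores:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200

# Hypothetical stand-ins for AFEG argumentation scores and self-efficacy
afeg = rng.normal(50, 10, n)
selfeff = rng.normal(0, 1, n)
# Simulated LFUP scores that depend on both predictors plus noise
lfup = 0.4 * afeg + 3.0 * selfeff + rng.normal(0, 5, n)

# Pearson correlation between LFUP and the criterion measure (AFEG)
r = np.corrcoef(lfup, afeg)[0, 1]

# Multiple regression lfup ~ afeg + selfeff via ordinary least squares
X = np.column_stack([np.ones(n), afeg, selfeff])
coef, residuals, *_ = np.linalg.lstsq(X, lfup, rcond=None)
intercept, b_afeg, b_selfeff = coef
```

A substantial positive correlation and positive regression coefficients on both predictors would be the pattern consistent with the theoretical expectation stated above; in the real analysis these quantities would come from the administered instruments rather than simulation.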

4.12.1.2 Rigour for the AFEG instrument

The participants were required to complete a written argumentation questionnaire consistent with the Principles and Standards for School Mathematics’ (National Council of Teachers of Mathematics [NCTM], 2000) call for learners to develop mathematical arguments ‘in written forms that would be acceptable to professional mathematicians’ (p. 58). The rationale behind the use of a writing frame was that it seems to help improve the quality of learners’ arguments as they present their responses in a structured written form (Sepeng, 2013). The task was deemed appropriate for Grade 11 learners since, at Level 3 of the van Hiele model, they should have begun making informal arguments to justify their conclusions.

A distinguishing feature of this task was that learners had to depend on their observation of the data to make a claim; this process reflected the inductive nature of argumentation. Working inductively could help learners to appreciate the genesis of the objects of mathematics. It is important to note that the examination of learners’ geometric knowledge inherent in the task is reported elsewhere (Shongwe, 2019). Osborne et al.’s (2004) argumentation frame employed in this study has been used in many countries, including South Africa (for example, Lubben, Sadeck, Scholtz, & Braund, 2010). As a consequence, the AFEG instrument was deemed valid and reliable.


4.12.2 Evaluation of the quality of naturalistic inquiry findings

The case study approach adopted for the qualitative phase of this study is regarded as, in Lincoln and Guba’s (1985) terms, a naturalistic inquiry on the basis that I sought to explain why Presh N held the beliefs she held about functional understanding of proof from the perspectives of the participant in her natural setting (the classroom environment where she spent most of her time).

Irvine, Drew, and Sainsbury (2013) further provide useful insights into how interviewing in a natural setting helps to avoid the loss of nonverbal data. These authors point out that interviewing the participant in their natural setting not only facilitates the development and maintenance of rapport but also provides the opportunity to observe cues such as intonation, facial expressions, levels of interest and attention, and body language during the interview. These nonverbal cues were noted in a reflective journal and used as additional data entered into the interview transcript.

Conducting the case study in its natural setting did not, however, exempt the study from the need to demonstrate and communicate the extent to which the research findings were trustworthy (Roberts, Priest, & Traynor, 2006). The trustworthiness of qualitative findings directly relates to the methodological and analytical processes (Daytner, 2006). The following techniques served as safeguards for accomplishing trustworthiness of inferences: cross-checking in methodological triangulation, maintaining an audit trail, and member checking (Bowen, 2009).

Methodological triangulation, defined as a ‘method of cross-checking data from multiple sources to search for regularities in the research data’ (O'Donoghue & Punch, 2003, p. 78), was helpful in several ways. For instance, I relied on data gathered through a semistructured interview, survey data, and document (proof-related) analysis for clues of corroboration and for forming themes or categories. Further, it is through triangulation that I attempted to reduce the effect of researcher bias and misrepresentation of views by the participant (Cohen, Manion, & Morrison, 2011; Gunawan, 2015). In an attempt to further improve the trustworthiness of the coding of the data, I used the principle of multiple coding, in which the entire interview transcript was sent to an independent researcher, a fellow doctoral student, to cross-check the coding and interpretation of the data to overcome researcher bias. The rationale for cross-checking was the interest to gain insight into