Proof has been widely studied, but little attention has been paid either to documenting learners' understanding of the functions of proof in mathematics or to examining the relationship between functional understanding of proof and argumentation ability. Few would contest the view that no single explanation accounts for the low scholastic achievement in Euclidean geometry. That view notwithstanding, little systematic9 investigation of school and curriculum factors, and of the role they might play in shaping learners' understanding of and competencies in mathematical proof, has been conducted (Healy & Hoyles, 2000).
The present study attempts to fill this void by building on the work of de Villiers (1990), who asserts that most learners' problems with Euclidean geometry lie in their naïve understanding of the functions of proof in mathematics. Capturing learners' understanding of proof from the perspective of its functions in Euclidean geometry, aligned with argumentation ability, and identifying the factors shaping learners' beliefs about those functions, is significant for several reasons. The three major contributions of this research are its methodology, the baseline quantitative data gathered with the LFUP instrument, and the proposed model for understanding factors influencing learners' functional understanding of proof.

9 By systematic is meant "planned, ordered and public" investigation, following rules agreed upon by members of the qualitative research community (Shank, 2002).
1.2.1 Methodological significance
The vast majority of research has focused on designing intervention programmes (following learners over time and devoting extensive time to data collection) to teach proof, relying either on pre-test–post-test designs or on qualitative measures, thus introducing strong evaluator bias.
Specifically, the original development of research methods and instruments is subject to sample size and sampling bias. For instance, the sample size required to provide reliable data is often not statistically determined. In addition, the post-test is usually completed by participants who are still enthusiastic about the experiment and the opportunity to learn somewhat differently; the realities of the classroom environment have not yet dampened their enthusiasm. Moreover, as already mentioned, proof cannot be taught.
The present study advances the argument that only normed and validated methods can provide a scientific basis for addressing the problem from the perspective of the activities prior to formal proof construction. Hence, the sample for the quantitative phase of this study was randomly selected and the results factor analysed. In addition, rather than adopting one of the two major research paradigms traditionally used in education, the positivist and the constructivist (McMillan & Schumacher, 2010), this study employed a mixed methods design because it was the best way to answer the research questions. A discussion of these paradigms is beyond the scope of this study, save to say that each confines the researcher to the particular set of data collection methods and data analysis strategies associated with it (Creswell & Plano Clark, 2011).
1.2.2 Significance in high school Euclidean geometry education
The significance of this study lies in the premise that research studies and international assessment bodies often rank Euclidean proof as one of the most difficult topics to teach and learn in mathematics. Thompson, Senk, and Johnson (2012) argue that some of the most persistent proof-related difficulties identified among learners in secondary school and university are a consequence of confusion about the functions of proof in mathematics. This study will provide clarity by making available an instrument designed with this confusion in mind. First, to date, save for Shongwe and Mudaly's (2017) work, no existing studies have validated the LFUP instrument for measuring learners' functional understanding of proof. In addition, very little (if any) research has been done to characterise learners' functional understanding of proof in Africa, let alone in South Africa. The instrument is intended to enhance the knowledge base, inform classroom practice in proof education, and stimulate research in the area of proof functions.
Put another way, this study sought to contribute to a broader knowledge base around understanding difficulties in the learning of proof in Euclidean geometry from the perspective of the activities that precede proof construction.
Second, this study is one of the few to examine learners' functional understanding of Euclidean proof and the factors that shape this understanding. In particular, it serves as a response to Mariotti's (2006) recommendation that better insight can be gained by investigating the sources of understandings of proof that are inconsistent with those held by contemporary mathematicians. Usiskin (1980) points out that proofs in Euclidean geometry are different from proofs in other branches of mathematics. The LFUP instrument will be useful in high school mathematics classes as a tool from which instruction in Euclidean proof can be planned, given that it has been validated and its reliability established. Reliability means that scores from an instrument are stable (nearly the same when researchers administer the instrument multiple times at different times) and consistent (when an individual responds to certain items one way, the individual should respond to closely related items in the same way) (Creswell, 2012).
Validity is the development of sound evidence to demonstrate that the interpretation of scores
about the construct that the instrument is supposed to measure matches their use in, for example, statistical analysis to determine whether the factor structure or scales relate to theory, correlations, and so on (American Educational Research Association/American Psychological Association/National Council on Measurement in Education [AERA/APA/NCME], 2014; Messick, 1980). As Thorndike (2005) points out, this definition shifts the traditional focus on the three types of validity (construct, criterion-related, and content validity) to the "evidence" for and "use" of the instrument.
1.2.3 Significance in mathematics education monitoring
The Department of Basic Education established the Dinaledi School Project in 2001 for the purpose of raising previously disadvantaged high school learners' participation and performance in mathematics and science (Department of Basic Education [DBE], 2009). Part of the department's budget provides these schools with resources (for example, textbooks and laboratories).
The ultimate intention is to improve mathematics and science results and thus increase the availability of key skills required in the economy (Department of Basic Education [DBE], 2009).
In monitoring the performance of these schools, education officials take note of research studies that focus on them (Department of Basic Education [DBE], 2009). The findings this study made in relation to SA#3 in the CAPS document (as mentioned in the next section) should draw the officials' attention to whether the stipulations of this aim have been achieved. It is reasonable to believe that these officials will have access to this finding, given that one of the conditions of approval of this study is that, upon its completion, a brief summary of the findings, recommendations, or this thesis in its entirety must be submitted to their research office.