5.2. Analysis of data
5.2.1. Using the test prepared by the teacher
5.2.1.1. Analysis of test questions
The questions in the test were categorised according to the different cognitive levels suggested by Du Toit (1992).
Knowledge: All the questions required the learners to recall the knowledge gained by observing the step-by-step techniques demonstrated by the teacher.
Question 2 (a) in particular called upon the learners to remember what an additive inverse is and how to obtain one.
Computational Skill: Questions 1 (a) through to 1 (d), as well as question 2 (a), were designed to require straightforward manipulation of decontextualised problems according to rules that the learners should have remembered.
Comprehension: Questions 1 (e) and 2 (b) required understanding of the underlying concepts and required interpretation of the significance of the data.
Learners were not given the equation to solve in a decontextualised format.
Here the learners had to decide how to formulate an equation as well as solve the equation.
Application: There were no questions that required the learners to apply relevant ideas, principles or known methods to new situations. There were thus no questions that required the combination of more than one line of thought.
Inventiveness: No non-routine application questions were posed. The learners did not have to develop their own techniques for solving the problems. All the questions prepared for the test were similar to questions demonstrated by the teacher on the chalkboard.
Question 1 (e) is not an equation, so the instruction "Solve these equations by writing all the steps" is not applicable to this word problem.
A classification of addition and subtraction word problems was devised by the Unit for Research on Mathematics Teaching at the University of Stellenbosch (RUMEUS) and used in a document developed by Du Toit et al. (1993) of the former Cape Education Department. These word problems were subdivided into
Change, Combine (part-part-whole), Compare and Equalize categories, which may be identified as follows:
Change: Start with a single collection and either add to it or remove from it to result in a larger or smaller collection. Here action is implied.
Combine: Start with more than one collection; these are united (part-part-whole) or separated to find the whole or parts of the whole.
Here a static situation is implied.
Compare: Start with two or more collections. It is implied that the difference will persist, i.e. the operation of addition or subtraction is merely used to determine the extent of the difference. Here a static situation is implied.
Equalize: Start with two or more collections. It is implied that the difference between the sets will be removed. Here action is implied.
Using this classification of addition and subtraction word problems proposed by RUMEUS (du Toit et al., 1993), questions 1 (e) and 2 (b) may be described as Combine problems. In each case information is given about the whole, and the various parts required to make up the whole are to be determined. In both questions there is a static situation, as no action is implied. All the questions, except 1 (e), are decontextualised problems.
5.2.1.2. Analysis of test scores
The marks gained by the learners were grouped using the class intervals 0% to 19%, 20% to 39%, etc. up to 80% to 99%. Table 1 shows the number of learners with test scores in each class interval.
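The grouping described above can be sketched in code. The scores in the example below are hypothetical, for illustration only; they are not the learners' actual marks.

```python
# Group percentage scores into the class intervals used in Table 1:
# 0 - 19, 20 - 39, 40 - 59, 60 - 79, 80 - 99.

def group_into_class_intervals(scores, width=20, top=99):
    """Return a dict mapping each interval label to a count of scores."""
    intervals = [(lo, lo + width - 1) for lo in range(0, top, width)]
    counts = {f"{lo} - {hi}": 0 for lo, hi in intervals}
    for score in scores:
        for lo, hi in intervals:
            if lo <= score <= hi:
                counts[f"{lo} - {hi}"] += 1
                break
    return counts

# Hypothetical scores, for illustration only
sample_scores = [0, 12, 23, 35, 41, 44, 52, 59, 63, 78, 86]
print(group_into_class_intervals(sample_scores))
# e.g. {'0 - 19': 2, '20 - 39': 2, '40 - 59': 4, '60 - 79': 2, '80 - 99': 1}
```

Each interval is closed at both ends, so a score such as 19 falls in 0 - 19 and 20 falls in 20 - 39, matching the class intervals used in Table 1.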
Table 1
[Bar chart titled "Bar chart showing test percentages": the horizontal axis shows the class intervals (%) 0 - 19, 20 - 39, 40 - 59, 60 - 79 and 80 - 99; the vertical axis shows the number of learners.]
Bar chart showing learners' performance in a test that was set by the teacher but marked by the researcher
The modal class was 40% - 59%. 42% of the learners' scores were less than the mean. Both the mean and the median are 41%. The range of the percentages is 0% - 86%. The standard deviation is 35,24%, which indicates that there is a wide spread of the percentages about the mean. The inter-quartile range is 23% - 59%, i.e. the middle half of the learners scored between 23% and 59%.
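The descriptive statistics quoted above (mean, median, standard deviation, range and quartiles) can be computed from raw percentage scores with Python's standard statistics module. The scores below are hypothetical and do not reproduce the reported values.

```python
import statistics

# Hypothetical percentage scores, for illustration only
scores = [0, 12, 23, 35, 41, 44, 52, 59, 63, 78, 86]

mean = statistics.mean(scores)
median = statistics.median(scores)
stdev = statistics.stdev(scores)                 # sample standard deviation
q1, q2, q3 = statistics.quantiles(scores, n=4)   # quartiles (exclusive method)
score_range = (min(scores), max(scores))

print(f"mean={mean:.2f}  median={median}  stdev={stdev:.2f}")
print(f"range={score_range}  inter-quartile range={q1} - {q3}")
```

The inter-quartile range (q1 to q3) covers the middle half of the scores, which is the sense in which it is used in the paragraph above.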
The total marks obtained by the learners for each question were expressed as a percentage of the total possible score for that question. Using this data it was possible to grade the questions according to those at which learners performed better or worse. Table 2 shows this percentage for each particular question.
Table 2
[Bar chart titled "Graph showing total marks obtained by learners for particular questions expressed as a percentage of total possible score for a question": the horizontal axis shows questions 1 (a), 1 (b), 1 (c), 1 (d), 1 (e), 2 (a) and 2 (b); the vertical axis shows percentages from 0% to 80%.]
Bar chart showing marks gained for each test question expressed as a percentage of the total possible score for each question
The learners' test scripts were then analysed in an attempt to explain possible methods the learners used to answer the questions and the conceptions that may have guided them. Possible trends were sought in the manner in which the learners tackled the problems posed by the teacher. Making use of interviews with the learners would have enhanced the analysis. If each learner were given the opportunity to explain how she or he understood the problem and solution, then a more accurate, meaningful analysis could have been prepared. It would have been useful to listen to how learners obtained the correct as well as incorrect solutions. As the school is situated in such a remote area, it was not possible for the researcher to revisit the school.
In order to ensure the anonymity of each learner, names were changed to letters of the alphabet. When the scripts were collected, those of learners seated next to each other were kept together. Hence, for example, learners designated M1, M2 and M3 were sitting at the same double desk.