Concept 7: Assessment: How is your supervision assessed?
2.5.8. Assessment
2.5.8.2. Assessment of learning/summative assessment
work more to cover curriculum content (Harlen and James, 2005). In the South African context this holds true: when it is time to administer provincial assessments or Annual National Assessment (ANA) tasks, the provincial Department of Education sends only one copy of the question paper per subject to each school. It is the responsibility of the school to reproduce those question papers with the limited resources at its disposal. Educators also have to rectify the mistakes in those papers, which frequently arrive containing errors.
Besides dominating the system, externally set tests are mostly not standardised. This forces educators to spend more time revising previous question papers, which consumes a great deal of teaching and learning time. This is because, according to Harlen and James (2005), these externally set assessments carry high-stakes results. However, this negatively affects the learning experiences of educators and the nature of assessment. Black et al. (2010) revealed that externally set summative assessment affects not only educators; students are equally affected. Depending on its purpose at a given point in time, summative assessment can have a negative impact on students' learning and achievement, and in most cases learners underperform. Reflecting on Grade 3 Mathematics performance in the ANA from 2011 to 2014, there is a slight increase in learner performance. Looking at the diagnostic reports, however, one may still argue that the improvement is not impressive, because the quality of the question papers appears to have been compromised from one assessment to the next, perhaps to accommodate the diverse needs of learners. Of greatest concern is the deterioration of performance in Mathematics from grade to grade. The gap within the Foundation Phase, and between the Foundation, Intermediate and Senior Phases, is also cause for concern, as Table 5 below shows.
GRADE    MATHEMATICS AVERAGE PERCENTAGE MARK
         2012    2013    2014
1        68      60      68
2        57      59      62
3        41      53      56
4        37      37      37
5        30      33      37
6        27      39      43
9        13      14      11
Table 5: GET Mathematics learner performance in the ANA between 2012 and 2014. Adapted from the Department of Basic Education's ANA report 2014 (DBE, 2014).
There is consensus in the literature that testing has a negative impact on students' motivation to learn. Consistent with this, Dwyer (2006) discovered in her investigation that in many countries educators resist using external tests in their classrooms. There is also substantial evidence in the literature that assessment information can be used "for both summative and formative purposes, without the use for one purpose endangering the effectiveness of the other" (Harlen and James, 2005, p. 215). Ussher and Earl (2010, p. 57) agree: "the reality is that any assessment information gathered for the purpose of informing learning (formative assessment) could also be used to make judgement about learning to date (summative assessment), and vice versa". One of the findings of Black, Harrison, Hodgen, Marshall and Serret (2010) was that some countries have adopted the notion of using formative assessment to support summative assessment, driven by the great need for quality teaching, quality learning and accountability. This suggests that "quality information gathered using valid and reliable assessment tools and types could be used both formatively and summative[ly]" (Ussher & Earl, 2010, p. 60).

In Grade 3 the main techniques of formative and summative assessment are observation by the educator, oral discussion, practical demonstration and written recording. However, Black et al. (2010) warn against these practices, despite the fact that they can yield positive results, arguing that they are placed at risk when policy makers decide that externally imposed summative tests will improve education. This suggests that if educators adhere to the assertion of those who support using the two forms of assessment to reinforce each other, the quality of their teaching and learning will improve. Furthermore, Black et al. (2010) suggest that "to understand and improve tensions between formative and summative assessment [we] should start by exploring the quality of educators' summative assessment" (p. 16).
Studies have shown that educators are familiar with the terms formative and summative assessment, but the way they describe their understanding still reflects an absence of clarity of purpose (Ussher & Earl, 2010). In support of this, Black et al. (2010) found that educators lack skills and confidence in assessment and that their assessment literacy is poor. Harlen and James (2005) found that educators understand summative assessment as a process in which they gather
information about their students' learning in a planned and systematic way, basing it on their professional judgment, which may not be reliable. This suggests that educators' understanding of assessment is flawed, hence the need for SAs to support their practices, including their understanding and conduct of assessment.
Harlen and James (2005) maintain that "there are several potential advantages in using educators' judgment in summative assessment for external and internal use" (p. 212), and that this is working well in some countries. In contrast, Dwyer (2006) argues that assessment, whether formative or summative, increases the difficulty of educators exercising their professional judgment about their students' performance. The CAPS for Mathematics Grades 1-3 states that formative assessment should be continuous, in order to monitor learners' progress and inform daily instructional decisions; this type of assessment is not used for progression purposes. There is, on the other hand, formal assessment, which is summative in nature. An annual programme of summative assessment should be established, indicating the number of formal assessment tasks to be completed. In Grade 3 there are ten formal assessment tasks in total, and a learner must obtain at least 40% to progress to Grade 4. In addition, there is a detailed plan of assessment specifying what is expected for each task, including mark allocation. However, Ojuko et al. (2013) state that "It is disappointing to know that educators are usually provided with common teaching and evaluation syllabi with the numbers of tests to be conducted per term" (p. 114). This poses a challenge to the quality and value of assessment activities, undermining the whole purpose of assessment.
This is because educators focus only on the formal assessment tasks to be conducted and neglect the other types of assessment: informal assessment (assessment for learning) and a third, less prominent type, assessment as learning.
There is evidence of great concern regarding summative assessment, rooted in issues of moderation, validity and reliability.
Black et al. (2010) claim that moderated, internally set educator assessments could produce results that are at least as valid and reliable as externally set tests. Harlen and James (2005), of a different opinion, point out that how moderation is done ultimately controls and limits educators' use of the full range of available evidence. For example, in the ANA only sampled schools are moderated, and in those schools about 20 learners in each grade are selected randomly for moderation. This is contrary to how educators conduct moderation in schools. In the school context, teachers select a few learners,
and this selection includes learners who performed above average, average and below average. Their scripts are then used to draw conclusions and suggestions on how teaching and learning can be improved or enhanced. Whichever way we perceive it, educators play a central role in assessment. For this reason, studies suggest that "Educators must have the ability to create and evaluate assessment tools of quality to ensure they will be gathering high quality evidence about progress and achievement" (Ussher & Earl, 2010, p. 60). Furthermore, they must know and understand whether the information they gather is fit for its intended purpose. In their study, Black et al. (2010) found that educators did not take into consideration whether the assessment information they used served, in a dependable way, the purposes of reporting and of making decisions about setting assessment tasks. This makes it crucial for educators to note that validity and reliability are interdependent, meaning that reliability should be optimised while ensuring validity (Harlen, 2005).