
Chapter 4: Assessment in Higher Education

4.1 Introduction

4.1.1 The role of assessment in higher education

A survey of assessment literature suggests that there is broad agreement concerning the primary purposes of assessment in almost all formal educational settings. These have been defined by Luckett and Sutherland (2000: 101) (see also SAQA, 2001; Brown, 2001: 6; and CHE, 2004: 6) as:

Diagnostic assessment where the purpose is to determine whether a student is ready to be admitted to a particular learning programme and what “remedial action may be required to enable a student to progress” (Luckett and Sutherland, 2000: 101).

Formative assessment which is used to provide feedback on progress in a way that motivates students, improves learning, consolidates work completed and profiles what has been learnt.

Summative assessment which establishes levels of achievement at the end of a programme and provides a grade that gives an indication of employability and future performance and a licence to practise.

Quality assurance which provides staff with feedback on the impact of teaching and learning activities, evidence of the degree to which programme outcomes have been achieved, and a means of monitoring the “effectiveness of the learning environment” and the “quality of an educational institution over time” (ibid.).

While the broad classifications outlined above seem self-evident, it is unlikely that a process as complex as assessment can be so neatly packaged. Instead, I would agree with Ramsden (1992: 187), who argues that:

14 Ecclestone and Pryor (2003: 473) suggest the notion of learning careers relates to the complex interactions between personal dispositions, learning strategies, structural and institutional conditions and peer norms that all have an influence on students’ motivation and attitude to learning. These interactions can shape how students choose to view themselves as learners and their ongoing commitment to lifelong learning.


Assessment is not a world of right or wrong ways to judge or diagnose, of standards versus improvement, of feedback versus certification: it is in reality a human and uncertain process where these functions generally have to be combined in some way.

For instance, while diagnostic assessment is generally considered to happen prior to a programme’s commencement, its contribution continues as courses unfold and teachers and students identify problems requiring individual and collective remedial action. Tasks intended to have formative or summative functions may also serve diagnostic purposes.

Similarly, on programmes where summative assessment is continuous and ongoing, assessment tasks cannot avoid having a formative influence. Some theorists have argued that conflating formative ambitions with high-stakes summative tasks may leave students disinclined to take risks (Luckett and Sutherland, 2000: 101; and Biggs, 1999a: 143). As Biggs suggests:

For formative [assessment] to work, students should feel free to reveal their ignorance and the errors in their thinking, but if the results are to be used for grading, they will be highly motivated to conceal possible weaknesses (ibid.).

However, the counter-argument advanced by Taras (2002: 504) and Boud (1995: 36) offers a more pragmatic perspective given the constraints on teaching time, and we at the SPI share Boud’s view that:

… we must consider both aspects together at all times. Too often assessment is led by the needs of summative judgment, not learning … assessment always leads to learning. But the fundamental question is, ‘what kind of learning?’, ‘What do our acts of assessment communicate to students?’ (ibid.).

This position is pertinent for the PDMM where the tightly bound modular structure of the programme means students are continually tackling summative tasks. However, the spread of marks allocated across the year means no single task could be regarded as having the potential to jeopardise a student’s chances of gaining the qualification. Furthermore, within an educational context that emphasises lifelong learning, even exit-level assessment can be formative in providing feedback for future learning and professional development. The categories of Luckett and Sutherland (2000) provide a useful way of separating assessment roles, but category boundaries are seldom distinct. In my view assessment can most usefully be seen as relational with varying degrees of inter-category infusion.

In addition to agreement on the roles of assessment there appears to be a growing consensus regarding its place in the curriculum – both from a temporal perspective and its relationship to learning. Boud (1995: 40-43) suggests that conceptions of assessment can be classified into the three dominant understandings detailed below:

Conventional assessment assumes that assessment follows teaching with the aim of discovering how much has been absorbed. Unseen examinations, in which students respond to a choice of questions, dominate this approach, which assumes that similar methods can be used across disciplines (ibid.).

Educational measurement “takes for granted the basic assumptions of conventional assessment: that is that testing follows teaching, the links between subject content and assessment technique are unproblematic and that assessment is quantitative” (ibid.). The object is to make assessment “more rational, efficient and technically defensible”. The use of multiple choice questions is the only significant addition to assessment methods emerging from this conception’s emphasis on reliability and validity (ibid.).

Competency and authentic assessment has emerged as a response to concerns about validity and the belief that what is assessed should reflect what graduates are meant to be equipped to do. This conception questions the validity of tests and unseen examinations and the use of contrived problems. Instead it promotes the use of “contextualised complex intellectual challenges over fragmented and static bits or tasks” (ibid.).

In the first two conceptions learning and assessment are understood as distinct, with assessment following learning. Assessment is largely summative and, as Brown and Glasner (1999: 157) argue, it is “seen as something that is done to learners and to their learning”.

Biggs (1999a: 143-144) suggests that these approaches are norm-referenced and designed to assess the “stable characteristics of individuals, for the purpose of comparing them with each other or with general population norms” (ibid.). The third conception, which Biggs (ibid.) defines as standards based, assumes an integral place for assessment in students’ learning experiences. It is not “peripheral to the course – a necessary evil to be endured. It is central to the whole course” (ibid.: 158). Assessment is criterion-referenced and designed to assess “changes in performance as a result of learning, for the purpose of seeing what, and how well, something has been learned” (Biggs, 1999a: 143-144). Rather than providing a basis for comparing students, assessment seeks to determine whether outcomes have been achieved.

Boud and Falchikov (2005: 39) contend that assessment “should be judged first in terms of its consequences for student learning and second in terms of its effectiveness as a measurement of achievement”. This position is embedded in SAQA’s Guidelines for Integrated Assessment (2005), which state explicitly that “for assessment to be meaningful it should be fully integrated into teaching and learning and should guide decisions about the activities that will support and enhance learning” (SAQA, 2005: 14). Its primary function should be “understood as supporting learning” (ibid.: 13). It is this role of assessment in promoting learning and contributing to the development of students’ evolving identities as future professionals and lifelong learners that is the principal interest of this study.