
The two spreadsheet exercises were run sequentially, as a logical progression of the order in which the theory was introduced. Once the theory had been discussed in class, students could start the assignments immediately. To aid this process, both assignments were handed out at the start, so students could work on either or both at the same time. The pace at which students progressed was governed essentially by themselves, within the time constraints of the study. They set up their own spreadsheets once the necessary theory had been covered, either during lectures or through their own studying. The tutorial sheets covering the relevant sections were also handed out at this time so that students could work on them immediately.

3.8.1 Group allocation and filenames

The students were first divided into teams of a maximum of three, following the South African Qualifications Authority (2001, p.36) guideline of “paired or group activities”. This was initially done on an ad hoc basis, bearing in mind the warning highlighted by Heywood (2000, p.210) in Chapter 2.4.1.2. Group members could therefore swop to other groups, but only for a limited period of time. Each group was assigned a group code for each assignment (see Appendix A), which only they knew; this code was used as their file name and placed on all official records. This ensured the anonymity of the groups, and students could not find out which groups their friends were in. From a marking point of view, when assessing another group’s project, assessors would not know who was in the group they were assessing, as this code was the only information given to assessor groups at the start of the assessment. Once they had opened the file, the assessor team was allocated another filename under which to save it after they had assessed it, so as not to overwrite the original team’s file. Further, each group was given a new group code for each assignment, so that the assignments could be recognised independently and the data for each assignment kept intact.

3.8.2 The assignment handouts

The handouts for each assignment can be seen in Appendices B and C. For consistency, the layout of each one was kept the same, and the instructions and requirements for each task remained similar. The handout gave a detailed description of what was required of each team, including possible penalties (mainly to do with locking their spreadsheet so that assessor teams would not be able to open and assess it), the assessment criteria and the requirements. Teams were also informed in the handout of the moderation protocol, namely that 10% of the completed assignments would be evaluated by the lecturer/marker. The handout also cited the DIT Rule Book for Students (2006, p.27) pertaining to copying.

3.8.3 The assessment and moderation

Boud (1989, p.26) points out that “students should not expect to do anything unless it is marked.” In this project, the mark allocations for the semester appear in Table 3.4. The test 1 mark was divided into two parts: 5% for each computer assignment and 30% for the summative test 1 itself, as described previously in Chapter 3.5. This ensured that the work was rewarded, albeit in a small way. It also meant that, if the new assessment method did not prove successful, it was not a high-stakes component (as highlighted in Chapter 1.4) and hence would not prejudice the students’ marks significantly.

To assess the projects in a fair, valid and consistent manner (South African Qualifications Authority, 2001, pp.16-17), a marking rubric, discussed in Chapter 4.1.5, was designed (see Appendices E and F) and put on display at the start of the project, both in the computer laboratories and outside the thermodynamics laboratory. Only the sample problems, discussed in Chapter 4.1.6 and to be solved at assessment time, were left off. Thus the assessment process was “clear, transparent and available to all learners” (South African Qualifications Authority, 2001, p.17): students were aware of all aspects of the task and its assessment from the beginning and could use the rubric as a guide at any time. This also made the assessment more legitimate and credible (South African Qualifications Authority, 2001, pp.12, 27). It was also an attempt to follow the principle and guideline that students should be able to “analyze, organize and critically evaluate information”, one of the required critical cross-field outcomes of OBE (South African Qualifications Authority, 2001, p.24). At the same time it allowed for the assessment of “the learner’s peers” (South African Qualifications Authority, 2001, p.36), discussed in Chapter 2.5.1.2. By the time students assessed their peers’ work they had done the task themselves, so they had an idea of the requirements. They would also see alternative ways of completing the task, thus moving around Kolb’s Learning Cycle described in Chapter 2.3, reflecting on their own and others’ attempts at it.

The layout of both assessment rubrics was similar. There were slight differences, since the assignment requirements differed, the most notable being that a graph was required for assignment 1 but not for assignment 2. A comparison of the assessment rubrics (Appendices E and F) shows that all three programme exit level outcomes, all the programme specific outcomes and many of the assessment criteria (see Appendix L) would be used in the assessment process. Comparing the learning outcomes for the subject (Appendix W) with the two assignments, seven of the twelve components in the Introduction – basic concepts section and three of the four in the Systems and Laws – basic rules section are covered in the computer assignments. These programme specific outcomes and subject learning outcomes are required for the computer assignments, and knowledge of their use was determined in the concept test, discussed in Chapter 3.10, as well as in the class tests detailed in Chapter 3.5.

The sample problems to be solved on the assessment dates were only made available at assessment time, on the assessment rubrics. There were five possible sample problems for each assignment, one randomly given to each group on the assessment day. They were similar to problems the students had solved previously in their tutorials. The answers to these problems, as seen in Appendices N and P, were available on the assessment day from the lecturer and assistants, but only after the students had completed the exercise of assessing another group’s work, to cross-check the answers and graphs obtained from the assessed spreadsheets.

Researchers of peer assessment have noted that the marks awarded by students are not necessarily reliable (Heywood, 2000, p.376).

Peer assessment itself was a valuable tool within the intervention for assessing how well the students were coping with it, as it illustrated how well or otherwise they used the tool and applied the programme criteria and outcomes mentioned earlier. To further ensure that the student assessment was fair and valid (South African Qualifications Authority, 2001, pp.16-17), staff were to moderate a portion of the exercises (10% of them, as specified in the assignment handouts; see Appendices B and C). A moderation weighting factor and an adjustment factor were generated and used in an attempt to normalise the marks. This is discussed in detail in Chapters 4.1.7.3 and 4.1.8.3.
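The actual weighting and adjustment factors are defined in Chapters 4.1.7.3 and 4.1.8.3. Purely as an illustration of the general idea, a minimal sketch of one possible moderation adjustment is given below; the function names, the averaged-ratio rule and the sample figures are assumptions for illustration only, not the formulae used in the study.

```python
# Hypothetical sketch of a moderation adjustment (not the study's formula):
# peer-assessed marks are scaled by the average ratio of moderator mark to
# peer mark over the 10% sample of assignments that staff re-marked.

def moderation_factor(moderated_pairs):
    """moderated_pairs: list of (peer_mark, moderator_mark) for the sampled groups."""
    ratios = [mod / peer for peer, mod in moderated_pairs if peer > 0]
    return sum(ratios) / len(ratios) if ratios else 1.0

def adjust_marks(peer_marks, factor, max_mark=100):
    """Scale every peer-assessed mark by the moderation factor, capped at max_mark."""
    return {group: min(mark * factor, max_mark) for group, mark in peer_marks.items()}

# Example: staff moderate two of the submitted assignments.
sample = [(72, 65), (58, 60)]          # (peer mark, moderator mark)
factor = moderation_factor(sample)     # roughly 0.97 for this sample
adjusted = adjust_marks({"G01": 72, "G02": 58, "G03": 81}, factor)
```

A ratio-based factor of this kind leaves peer marks largely intact when staff and student marking agree, and pulls all marks up or down when the moderated sample shows a systematic bias; the study's own weighting and adjustment procedure should be taken from Chapters 4.1.7.3 and 4.1.8.3.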

On the day of the assessment each group had to sign a declaration form (see Appendix G) stating that the work completed was their own. If the members chose not to have an equal share in the mark allocation, they could declare their weighting of the marks at that time.
