
3.2 Preliminary Findings from Phase Three

To begin, the average scores on each of the quantitative questions included in the SMFS were calculated and compared across the two modules (PSY10090 and PSY10050). Figure 1 presents the mean scores on each of the nine questions. The graph suggests few differences between the modules, with a more noticeable gap in relation to students' understanding of the subject (higher for PSY10090) and in relation to the contribution of module activities to learning (higher for PSY10050). However, following completion of a series of two-tailed t-tests, it emerged that the only significant difference was in relation to understanding of the subject (t = 2.2576, df = 81, p = 0.0237).
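For illustration, the comparison described above can be reproduced with an independent-samples two-tailed t-test. The sketch below uses SciPy with invented placeholder ratings; it is not the study's SMFS data, only a demonstration of the method:

```python
from scipy import stats

# Invented per-student ratings (1-5) of "understanding of the subject";
# placeholders only, not the responses reported in this paper
psy10090 = [5, 4, 5, 4, 4, 5, 4, 5, 4, 4]
psy10050 = [4, 4, 3, 4, 5, 4, 3, 4, 4, 3]

# Two-tailed independent-samples t-test (equal variances assumed)
t_stat, p_value = stats.ttest_ind(psy10090, psy10050)
print(f"t = {t_stat:.4f}, df = {len(psy10090) + len(psy10050) - 2}, p = {p_value:.4f}")
```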

Figure 1: Comparison of ratings for PSY10090 and PSY10050 on quantitative questions

In addition to examining the mean scores for the items, the frequency of responses was also examined and is presented in Tables 4 and 5 below.

Table 4: Proportion of valid responses to quantitative questions in PSY10090

| Statement | Mean (SD) | Strongly Agree/Agree | Not Sure | Disagree/Strongly Disagree |
|---|---|---|---|---|
| I have a better understanding of the subject after completing this module | 4.40 (0.66) | 94.4% | 3.8% | 1.9% |
| I achieved the learning outcomes for this module | 4.11 (0.70) | 84.9% | 13.2% | 1.9% |
| The teaching on this module supported my learning | 4.13 (0.79) | 83.1% | 13.2% | 3.8% |
| The workload on this module was manageable | 4.19 (0.62) | 92.5% | 5.7% | 1.9% |
| Learning materials made available on my module have enhanced my learning | 4.08 (0.83) | 82% | 12% | 6% |


| The in-class activities in this module helped me to learn | 3.81 (0.90) | 75.5% | 11.3% | 13.2% |
| The assessments to date were relevant to the work of the module | 4.17 (0.85) | 83% | 11.3% | 5.7% |
| Overall I am satisfied with this module | 4.14 (0.85) | 82.4% | 11.8% | 5.9% |

The responses echo the generally positive scores evident above, in that the majority of students responded positively to all of the statements for both modules. Interestingly, the item that showed the largest proportion of negative views in both modules was "The in-class activities in this module helped me to learn".

Table 5: Proportion of valid responses to quantitative questions in PSY10050

| Statement | Mean (SD) | Strongly Agree/Agree | Not Sure | Disagree/Strongly Disagree |
|---|---|---|---|---|
| I have a better understanding of the subject after completing this module | 4.03 (0.81) | 76.7% | 20% | 3.3% |
| I achieved the learning outcomes for this module | 4.13 (0.78) | 76.7% | 23.3% | - |
| The teaching on this module supported my learning | 4.03 (0.93) | 76.6% | 20% | 3.3% |
| The workload on this module was manageable | 4.24 (0.79) | 86.2% | 10.3% | 3.4% |
| Learning materials made available on my module have enhanced my learning | 4.23 (0.82) | 93.4% | 3.3% | 3.3% |
| The in-class activities in this module helped me to learn | 4.10 (1.03) | 83.3% | 6.7% | 10% |
| The assessments to date were relevant to the work of the module | 4.27 (0.74) | 90% | 6.7% | 3.3% |
| Overall I am satisfied with this module | 4.10 (0.92) | 76.6% | 16.7% | 6.7% |
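As an illustration of how the banded proportions in Tables 4 and 5 might be derived from raw survey responses, the following sketch collapses hypothetical 5-point Likert ratings into the three reported bands; the response values are invented for demonstration:

```python
import pandas as pd

# Hypothetical 5-point Likert responses for one statement (5 = strongly agree)
responses = pd.Series([5, 4, 4, 3, 5, 4, 2, 4, 5, 3])

# Collapse 1-2, 3, and 4-5 into the three bands used in Tables 4 and 5
bands = pd.cut(
    responses,
    bins=[0, 2, 3, 5],
    labels=["Disagree/Strongly Disagree", "Not Sure", "Strongly Agree/Agree"],
)

print(f"Mean (SD): {responses.mean():.2f} ({responses.std():.2f})")
print(bands.value_counts(normalize=True).mul(100).round(1))
```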

In addition to comparing the ratings for the two modules on these key questions, it is also possible to compare ratings within PSY10090 over the past three years, the point of interest being the possible impact of making the group workshop a summative assessment (as occurred in 2012-2013). Figure 2 below presents the ratings on common questions from the last three years (though only one semester's ratings are available for the current year). There is some evidence of trends, including decreases in ratings of understanding and of supportive teaching, and increases in ratings of achieving the learning outcomes and of the positive impact of in-class activities.

Figure 2: Comparison of ratings of PSY10090 over time (semesters 1 and 2 of 2010-11 and 2011-12, and semester 1 of 2012-13)
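A minimal sketch of how such a semester-by-semester comparison might be plotted is given below; the values are placeholders standing in for the Figure 2 data, which are not reproduced here:

```python
import matplotlib.pyplot as plt

semesters = ["2010-11 S1", "2010-11 S2", "2011-12 S1", "2011-12 S2", "2012-13 S1"]
# Placeholder mean ratings for two of the common questions (not the real data)
understanding = [4.3, 4.25, 4.2, 4.15, 4.0]
activities = [3.6, 3.65, 3.7, 3.75, 3.8]

plt.plot(semesters, understanding, marker="o", label="Understanding of the subject")
plt.plot(semesters, activities, marker="o", label="In-class activities helped me learn")
plt.ylim(2.5, 5)  # region of the 1-5 agreement scale shown in Figure 2
plt.ylabel("Mean rating")
plt.legend()
plt.tight_layout()
plt.show()
```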

A final possible impact of the move to include the group assignment as a summative assessment relates to the number of students registering for the module. Figure 3 presents the registration figures by semester and overall, and it appears that registration dropped in 2012-2013 by almost 20%. It is suggested that this may be the result of students' ability to review module details (including assessment) in advance of registering for a module at the start of the year. It is clear that registration figures in 2010/11 and 2011/12 were stable and were overall higher than in 2012/13. However, it is also possible that the introduction of a mandatory study skills module in 1st Arts in UCD in 2012/13 affected the uptake of this module, as students had less choice of electives as a result. It is noted, though, that comparable decreases were not observed in PSY10050 for this year.


Figure 3: Registration figures for PSY10090 by semester and overall (overall totals: 518 in 2010/11, 524 in 2011/12 and 412 in 2012/13)

Discussion

As noted at the outset of this paper, the current research is part of a larger study on the use of group work in large first-year psychology modules. The larger study builds on some of the issues highlighted in the research literature, such as the role of group work in assessment, the need for a structured approach to group work and the potential contribution of group work in first-year modules. The aim of the study is to develop, manualise and evaluate a system of in-class workshops for use as both formative and summative assessment.

This paper has focused on the contribution of in-class group workshops as a form of summative assessment. In describing the group workshops it is noted that a number of aspects highlighted in Meyers' (1997) review on promoting participation in group activities have been addressed in the design of the workshop. Detailed information is provided in advance to support participation; however, only limited information on group functioning was provided (as part of the introductory skills class). Also, while the group activity is assessed, this is predominantly instructor-led assessment, though the individual reflection does incorporate an element of self-assessment, as students are invited to reflect on the group process.

However, there is clearly scope to consider adopting a more formal approach towards self- and peer evaluation of the group interaction. In addition, the research by Pauli et al. (2008) considered in the introduction might serve as a resource that allows the lecturer to maximise the functioning of the groups and, in turn, students' experience in the groups.

Looking to the formal evaluation of the module, students taking PSY10090 were compared with students taking a module using more traditional approaches to assessment (PSY10050). Student feedback was gathered from both modules using self-report surveys incorporating qualitative and quantitative data. While the analysis of the qualitative data is ongoing, the quantitative data highlighted few differences between the students' ratings of the two modules. The lack of difference in ratings of module assessment and in-class activities across the two very different modules may reflect a lack of added value from the more intensive assessment used in PSY10090, or the comfort that comes from familiar assessment techniques (which the more traditional assessments might represent). However, it is also possible that an inherent concern regarding group work (evident in the existing literature) might be undermining students' experiences in PSY10090. It is hoped that an analysis of the qualitative survey data will provide further insight into these issues.

Nevertheless, it does appear that the introduction of the group-based summative assessment may have had an impact on registration figures; however, as noted above, other contextual factors might also be contributing to these changes.

Conclusion

The data gathered on students' experiences of the introduction of a summative group-based assessment might suggest that the reaction to this approach has been somewhat mixed, though it should be noted that the scores remain generally positive (averaging around 4 out of 5, with 5 indicating strong agreement with the key statements). However, there is limited evidence of any added value from this more active approach to assessment, though the findings presented above may lack the insight that the qualitative data gathered might provide. Considering the implementation of formal summative assessment in relation to the published literature, it appears that a number of key issues noted by writers in this area have been addressed. Nevertheless, there is scope to further examine the barriers to and supports for effective group assessment in these large undergraduate modules.

References

Eglash, A. (1954). A group-discussion method of teaching psychology. The Journal of Educational Psychology, 45(5), 257-267.

Livingstone, D., & Lynch, K. (2000). Group project work and student-centred active learning: Two different experiences. Studies in Higher Education, 25, 325-345.

Meyers, S.A. (1997). Increasing student participation and productivity in small-group activities for psychology classes. Teaching of Psychology, 24(2), 105-115.

Pauli, R., Mohiyeddini, C., Bray, D., Michie, F., & Street, B. (2008). Individual differences in negative group work experiences in collaborative student learning. Educational Psychology: An International Journal of Experimental Educational Psychology, 28(1), 47-58.

Student-led assessment in level one computing: Encouraging reflective practice

Chris Martin and Janet Hughes University of Dundee, Scotland

ABSTRACT: Assessment of computer programming tends to be based upon product rather than process: a student's competence in programming is commonly measured via the final code produced. However, it is also important to encourage students to reflect on the software development process and hone the skills they need whilst producing code. In particular, it is important to teach the act of programming and avoid dependency on any one given language: throughout their careers, computing professionals need the ability to learn new and emerging languages. For students to be autonomous in their learning, they need to be equipped with self-reflection and analysis skills and encouraged to take a deep approach to their learning. This paper discusses two techniques used in an introductory computing course to develop students' reflective practice: (i) triadic assessment (Gale et al., 2002) of weekly deliverables via group work and (ii) a student-generated multiple-choice class test. The design and evaluation of each technique are described and discussed.

Triadic assessment of group work

Group work is a common component in many undergraduate modules in computing as well as in other disciplines. There are sound practical and pedagogical reasons for creating this learning experience (Thorley & Gregory, 1994). However, assessing group work can be challenging, particularly where different group members assume different roles and responsibilities within a group. For instance, what is an equal share of work? How do you weigh up design input against technical contributions? One technique to encourage students to address these questions and to take a lead role is peer assessment. Its benefits include a rich engagement with and understanding of the assessment process (McDermott et al., 2000). The learner is also encouraged to engage in higher cognitive skills (Fallows & Chandramohan, 2001) and critical evaluation (Anderson et al., 2001).

Motivation for Group Work

Professional software developers commonly work in teams, whether employed in fledgling start-ups or in multinational companies. The stereotype of the lone 'geek' absorbed in the act of programming, with little input from others, is a misconception. For this reason, the skills associated with working in a team are vital to a successful career in software development.

Team projects are therefore commonplace among undergraduate computing degree programmes, but they can be problematic. The social skills required to manage workload distribution, to schedule team meetings and to deal with different levels of ability can be challenging to students.

In the context of computing, it is likely that a team project will involve the development of a specified piece of software in response to a brief or consultation with a client. With this approach, team working skills are placed in a motivating context. However this can present a challenge to academic staff assessing the resultant coursework. Assessing the products against the learning objectives identified is straightforward; allocating a mark for each of the team members for the process performed can be contentious.

One approach is to treat the team as a whole and give the same mark to each team member. This places pressure on the team to function: if one member fails to deliver, the whole team suffers. Unfortunately, it also presents the opportunity for students to 'hide' from individual assessment. There is a risk that weak performances by students may be identified only by individual summative assessment and not by on-going formative assessment.

Providing each student with an individual mark that reflects their contribution to the project would appear to be the fairest approach. Defining contribution then becomes important.

Over the course of many weeks of creative thinking, designing, implementing, crafting and refining, the key questions remain: who did what? What was it worth? What grade should each person get?
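One way to make "contribution" concrete is to moderate a shared group mark by relative peer ratings, in the spirit of WebPA-style schemes. The sketch below is a hypothetical illustration of that idea, not the assessment method used in this course; the names and scores are invented:

```python
def individual_marks(group_mark, peer_scores):
    """Scale a shared group mark by each member's relative peer rating.

    peer_scores maps each member to the total rating received from
    teammates; a member rated at the group average receives exactly
    the group mark. This is a WebPA-style weighting, shown as one
    possible scheme rather than the authors' method.
    """
    mean_score = sum(peer_scores.values()) / len(peer_scores)
    return {member: round(group_mark * score / mean_score, 1)
            for member, score in peer_scores.items()}

# Hypothetical three-person team with a group mark of 65%
print(individual_marks(65, {"Ana": 12, "Ben": 10, "Cal": 8}))
# {'Ana': 78.0, 'Ben': 65.0, 'Cal': 52.0}
```

A design note on this kind of scheme: because the weighting is relative, a member rated at the group mean is unaffected, which limits the incentive for a team to inflate everyone's ratings uniformly.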