
CHAPTER 7

7.2. Perceptions of students regarding the use of OSCE as a clinical tool

The OSCE remains a credible tool accepted by students, as portrayed by the positive responses in the general evaluation of the OSCE. A major limitation of this section was the low number of participants; nevertheless, even if the data cannot be generalised, they present a credible perspective of students’ experiences with OSCEs.

The students’ attitudes towards the use of the OSCE as a tool to assess their clinical skills were measured using four major factors that impact on the usefulness of OSCEs, namely content, organisational, process and individual factors.

Overall, the Critical Care students agreed that the use of the OSCE as a tool for testing knowledge and skills in ICU training was positive, with an overall mean level of agreement of 71 ± 14.4. The highest level of agreement was for content factors, with an agreement level of 81.8 ± 26.1. Students overwhelmingly expressed positive attitudes about the use of the OSCE to assess their competencies in the ICU, with ratings of agreement as high as 81.8% on the content construct, within which the highest level of agreement, 90.9%, was for the OSCE’s relevance to their level of training. These findings are similar to those reported by Imani and Hosseini Tabatabaie (2005) in a study of paediatric students, in which 85% of the students agreed that the OSCE was comprehensive and covered a wide range of knowledge, while 80% agreed that clinical competencies in paediatrics were covered. Similar results on the suitability of the OSCE to assess the content factor have been reported in a survey by Bagri et al. (2009) of geriatric medicine fellows’ experience of and attitudes towards an OSCE, where most participants agreed that the OSCE tested skills relevant to their practice, with a mean score of 4.75/5. This is also consistent with other findings in the literature; for example, in a qualitative study of student midwives’ perceptions of the OSCE, conducted to examine the validity of the assessment tool, Jay (2007) reports that all students agreed that the OSCE workstations were relevant to practice.

In light of these findings, various authors have suggested that, when setting up an OSCE, faculty must ensure that the clinical tasks chosen are mapped onto the learning objectives of the course and the candidates’ level of learning. This is referred to as blue-printing (Boursicot and Roberts 2005; Gupta, Dewan et al. 2010).

In this study, some students raised concerns about the range of knowledge covered: they felt that the knowledge assessed by the OSCE was not wide enough, meaning that the OSCE did not cover the whole discipline as expected.

Regarding the individual factor, the findings indicated that students were positive about the ability of the OSCE, as an assessment tool, to help them incorporate what had been learned into clinical practice. This meant that students felt that the transferability of skills to real practice was possible when using the OSCE as an assessment method. Most students responded positively that they had learned a lot from having the OSCE as a tool to assess their clinical competencies.

Most students reflected that the OSCE provides a useful learning experience and that the content reflected real-life situations, though only 68.2% of students believed that passing or failing an OSCE examination was a true reflection of their performance. Nearly half of the students expressed concern over potential bias due to inter-patient and inter-evaluator variability during the OSCE. This is in line with the results of the survey by Bagri et al. (2009), in which some students expressed concerns that inter-patient and inter-evaluator variability might affect their score. The study by Imani and Hosseini Tabatabaie (2005) likewise revealed that students expressed concern and were uncertain whether the results were a true reflection of their clinical skills.

Although this issue is not seen by all students as a major problem, further studies are necessary to investigate it.

Another concern raised by the students was that the OSCE produces more anxiety than other methods of assessment. Many studies surveying student attitudes towards the OSCE have documented that it can be a strongly anxiety-producing experience, and some suggest that the level of anxiety changes little as each student progresses through the examination. Marshall and Jones (2006) contend that the OSCE is undeniably anxiety-provoking, but that seminars provoke more stage anxiety. Other authors believe that rating nursing skills using assessment methods that stress the functional characteristics of practice may interfere too greatly with performance, losing the ability to differentiate between nurses with functional skills and those with deeper personal qualities (Cowan, Norman et al. 2005). Brand and Schoonheim-Klein (2009) assert that there is a general belief that students with higher levels of stress tend to achieve lower marks than students with lower stress levels. Therefore, examiners use different types of assessment, because most assessment strategies will suit certain types of learners better than others (Garside, Nhemachena et al. 2009).

Regarding the organisational/environmental factors, the level of agreement was positive concerning the structure of the OSCE as well as the instructions given at the OSCE stations. Most students expressed satisfaction with the manner in which the OSCE is structured, and were also satisfied with the instructions given at the stations, which they found clear. On this construct, however, some students were not satisfied with the standardised patients, and this item had the lowest level of agreement within the construct.

Of the four factors used to measure attitudes and perceptions regarding the use of the OSCE to assess clinical skills, the process factor showed the lowest level of agreement. The findings of this study revealed that students identified the need for faculty to give feedback that would help them to highlight areas of weakness; however, there was a general concern that faculty were not giving detailed feedback. Carr (2006) believes that interactive feedback helps students to progress in their studies and to grow professionally. In a cross-sectional study by Awaisu et al. (2009), about 80% of students believed that the OSCE was useful in discovering the areas of weakness and strength in their clinical skills.

Regarding validity and reliability, the results revealed that 15 (68.2%) of the students in the study believed that the pass score is a true measure of clinical skills. This is in line with the results of the study by Imani and Hosseini Tabatabaie (2005) on the use of the OSCE in paediatrics: although half of the students believed that the scores were standardised, they were unsure whether their scores were an actual reflection of their paediatric clinical skills. Students nevertheless overwhelmingly perceived that the paediatric OSCE had good construct validity (Imani and Hosseini Tabatabaie 2005). Several studies have shown that the OSCE provides a valid and reliable form of assessment.

The study also revealed a significant difference between those with prior exposure to the OSCE and those without, attributable to low levels of agreement on the processes of managing an OSCE. There were no significant differences in any of the mean standard scores by gender, qualification, experience with the OSCE or years of experience.

7.3. Perceptions of the staff regarding the use of OSCE as a clinical tool