
7.3. Perceptions of the staff regarding the use of OSCE as a clinical tool

Students overwhelmingly perceived that the OSCE in Pediatrics had good construct validity (Imani and Hosseini Tabatabaie 2005). Several studies have shown that the OSCE provides a valid and reliable assessment of these roles.

The study also revealed a significant difference between participants with exposure to OSCE and those without, attributable to low levels of agreement on the processes of managing an OSCE. There were no significant differences in any of the mean standard scores by gender, qualification, experience with OSCE or years of experience.


These results pose a major concern about the use of OSCE to assess the clinical competencies of Critical Care nurses, since the level of agreement for physical examination was low at 42.3%. Every Critical Care nurse should be able to perform physical examinations, and students should be able to demonstrate this ability during assessment. A contributory factor to these perceptions may be the use of low-fidelity models, congruent with the students' perception that the standardised patients were not realistic. In contrast, McWilliam and Botwinski's (2010) study shows that students appreciated the authenticity and valued the OSCE experience in their education. Jeffries (2006) stated that it is necessary for the faculty to make decisions about implementing assessment instruments so that the instrument is not only desirable and appropriate (reliable and valid), but also practical and achievable in terms of factors such as cost, ease of administration and acceptability to candidates and examiners.

A second concern is the low rating for Professional and Interpersonal Skills: the lowest level of agreement at the "Knows" level, 21.2%, concerned the use of OSCE to assess research skills. Salwa et al. (2011) highlighted that tools for competence assessment are too task-orientated, while caring, interpersonal interaction and decision-making are competencies that cannot easily be measured quantitatively. This means that although all the learning objectives of the course can be mapped in blueprinting, it is not feasible to measure all the skills necessary in an ICU environment using the OSCE method.

The analysis of the results revealed that all Critical Care nurses agreed that OSCE was a helpful exercise to assess the level of knowledge and the level of practical knowledge. The "Knows" level of assessment had the highest level of agreement, with an average of 16.7 (±4.4). However, as the main purpose of the OSCE is to rate skills, this is a concern regarding either how the OSCE is used or how the staff understand its purpose.

With regard to OSCE's positioning at the "Shows how" level, most participants agreed that OSCE can be used to assess how the student actually performs a required skill in the Critical Care course. Regarding the transferability of skills tested using OSCE as an assessment tool to real-life situations, the highest score for "Does" was 65.5%, achieved for monitoring of intake and output; overall, the level of agreement on the "Does" level was 9.9 (±6.01). Providing health education to the patient regarding peritoneal dialysis and monitoring of intake and output were the only skills that scored above 60%. These results indicate that OSCE is not a good guide for determining how the student will actually perform in a real situation. However, this view contrasts with the perceptions of students regarding the transferability of skills from OSCE to real situations, where the students' level of agreement was 90.9%.

To assess the attitudes of CCNs towards the use of OSCE as an assessment tool, the results reveal that forty-four (84.6%) agreed that OSCE was appropriate for evaluation of knowledge and of practical and intellectual skills. When comparing the level of agreement on the use of OSCE alone as an assessment tool versus changing to another method of assessment, about a quarter of the participants (26.5%) felt there was no need to change from OSCE to another method. Many authors believe that OSCE is the most reliable assessment tool; although the cost of carrying out an OSCE can seem overwhelming, it must be remembered that it can form one of the most instructive and memorable teaching experiences for the student (Brannick, Erol-Korkmaz et al. 2011).

Regarding the selection of the type of patients to be used in OSCE assessment, the analysis revealed that the number of participants who preferred the use of simulated patients was similar to the number who preferred the use of real patients. Many authors have pointed out that there is no evidence that the use of simulation is transferable to real clinical practice (McCaughey and Traynor 2010), although Battles et al. (2004) believe that SP-based OSCEs are valid and reliable tools for assessing competency continuity (Battles, Wilkinson et al. 2004). Use of simulation in the health professions is gaining interest because it increases patient safety (Bearnson and Wiker 2005). This means that the participants differed in their views about the type of patient to use during the OSCE assessment, and this needs further investigation.

What has also emerged from the data is that only 50% of the participants felt that OSCE should be changed to another form of assessment; only 13 (25%) suggested the use of a comprehensive assessment, while four were not specific regarding the type of assessment to be used. This may be due to the fact that OSCE has been used for a very long time and most of these participants are familiar with OSCE as the major method of assessment in the Critical Care course, while comprehensive assessment of students undertaking Critical Care nursing only began towards the end of 2011.
