[Figure 1: bar chart of respondents' years of teaching experience (1-5 years, 6-10 years, 11-15 years, 15 years+); chart not recoverable from the extraction. Legible data values: 46.8, 24.4, 15.4 and 12.9.]
When asked to indicate the approximate percentage of students in a typical class they teach, the results as indicated in Figure 2 were: African students: 186; Indian: 127; Coloured: 137; White: 161; other (including Chinese/Asian): 51.
Figure 2: Percentage Breakdown of Students in a Typical Class

Descriptive Statistics
                     N     Minimum   Maximum   Mean    Std. Deviation
AFRICAN              186   0         100       46.96   35.73
INDIAN               127   1.00      72.00     13.37   15.77
COLOURED             137   1.0       85.0      13.21   14.08
WHITE                161   1         100       44.81   31.23
Valid N (listwise)   95
Figure 2 shows that the majority of the students are African, which could also mean that the
majority may be non-mother-tongue speakers of English. As these figures were obtained from various institutions in South Africa, it is not known exactly what percentage of the total population are EFL or ESL speakers of English. Had the respondents been asked to indicate what percentage of EFL, ESL or foreign speakers of English made up a typical class that they teach, many would not have been able to provide this information. As an educator at a tertiary institution myself, I must admit that even I cannot categorise every individual's English language proficiency in the very large classes that I teach. The home, school and social environments would have contributed to the students' proficiency in English. This is the reality that educators in the South African classroom are faced with: they cannot decide on the basis of the colour of a person's skin whether they are EFL, ESL or foreign speakers of English. It is important to bear this in mind as one reads through the responses regarding assessment practice in tertiary institutions in South Africa.
One hundred and twenty-two respondents (60.7%) indicated that they do not test subject content orally, while seventy-nine (39.3%) indicated that they do. Of the one hundred and seventy-two subjects included in this survey, fifty-nine subjects are assessed orally or have an oral component of assessment (Appendix D). Of these fifty-nine subjects, twenty (33.9%) are in the medical field. Forms of oral assessment currently used by academics locally are indicated in Table 6 (on page 91).
Vivas, question/answer sessions and interviews are conducted by 33.3% of the population; individual assessments are conducted by 26.4%, while 6.5% conduct group assessments. 65.2% did not respond to this question, most likely because they do not test content orally, as indicated in the preceding question. The group assessments are made up of between two and ten students, depending on the nature of the assessment. The minimum time allocated per assessment was ten minutes and the maximum was one hour. The number of examiners varied from one to seven, with some indicating that the examiner and all of the student's peers conducted the assessment. No uniform or common practice was noted among subjects or among respondents who use oral assessment to test content.
Table 6: Forms of Oral Assessment Used

Form of assessment used                      Percent usage
vivas                                        12.0
presentations                                49.3
question/answer sessions                     17.3
interviews                                    4.0
clinical oral                                 1.3
seminar presentations                         1.3
project presentations                         1.3
objective structured clinical examination     4.0
video and questions                           1.3
patient simulation                            2.7
practicals                                    4.0
seminars                                      1.3
Table 7 (on page 92) presents the reasons for using oral assessments. Improving communication skills is obviously a priority. This makes perfect sense, as an improvement in communication skills will benefit students in all spheres of their education as well as in their life within and outside the institution. It is a pity that the questionnaire had to take the written form (the cost of travelling to every institution in the country was too high) and I could not ask the respondents to elaborate on why they wanted "to allow the students to express themselves verbally", that is, what benefits they hoped to derive from this and why the written expression did not suffice. Perhaps I have just answered my own question: the oral assessment allows dialogue between the assessor and the student so that issues can be discussed or interrogated. The absence of "live discussion" in the written assessment denies one the latitude to probe or test for deeper understanding of concepts.
Table 7: Reasons for Using Oral Assessments

Reason for using oral assessment                                               Percent response
to assist in the improvement of the students' communication skills             50
to test the students' deeper understanding of concepts                         58
to allow the students to express themselves verbally                           50
to test application of knowledge                                               56
students demonstrate the use, operation and understanding of an instrument     0.5
oral assessment is used for missed written tests                               1.5
to improve the ability to argue logically                                      0.5
to test confidence with which information is communicated                      1.0
to test student's ability to communicate science                               0.5
to assess clinical abilities in assessing patients                             0.5
to learn the terminology/language of the subject                               0.5
for culturally disadvantaged students                                          0.5
only test orally during practicals                                             0.5
improve students' presentation skills                                          3.5
to promote critical thinking skills                                            0.5
oral re-exam if student has failed written test                                0.5
used for borderline students only                                              2.0
Allocation and Weighting of Marks
28.9% of the respondents indicated that the oral mark is combined with a written mark for each student; 6.0% said that the mark is not combined, and 65.2% did not answer this question. There was no evidence to suggest that the oral and written assessments were used to complement or supplement each other. Also, the percentage weighting that each mode of assessment contributes to the final mark ranged from 0 to 100%, as indicated in Figure 3 (overleaf), with great variations in the combination of the written and oral marks.
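The arithmetic the respondents describe is a weighted combination of the oral and written components. A minimal sketch follows; the function name, the marks and the 30/70 weighting are illustrative assumptions, not figures drawn from the survey.

```python
def combine_marks(oral_mark: float, written_mark: float,
                  oral_weight: float, written_weight: float) -> float:
    """Return the weighted final mark as a percentage (0-100).

    The weights are normalised by their sum, so they need not
    total exactly 100.
    """
    total = oral_weight + written_weight
    if total == 0:
        raise ValueError("weights must not both be zero")
    return (oral_mark * oral_weight + written_mark * written_weight) / total

# Hypothetical example: oral weighted 30%, written weighted 70%
final = combine_marks(oral_mark=65.0, written_mark=55.0,
                      oral_weight=30.0, written_weight=70.0)
print(round(final, 1))  # 58.0
```

Because the weighting determines how far each mode can shift the final mark, a department that weights the written component at 70% has effectively decided, in advance, that writing skill will dominate the result.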
Figure 3: Weighting of Marks
Descriptive Statistics
                     N    Minimum   Maximum   Mean    Std. Deviation
Q9.1                 64   0         100       27.47   19.33
Q9.2                 61   20        100       68.79   19.75
Valid N (listwise)   58
It would appear from the reasons given that examiners prefer to go with tradition as far as assessments are concerned, hence the heavy reliance on written assessments. Some responded that "I am just following what has been done in the past", which means that the process has not been revised over the years. This in itself can be problematic, as it shows that the assessment has not kept up with the changes in education and, more especially, with the changes at the institution.
Combining Oral and Written Marks
It is clear that those who use the oral assessments are aware of the benefits it produces. They appreciated the informal nature of the oral assessments and the fact that they enhance communication skills. Respondents agreed that both written and verbal skills are equally important and that combining the two gives a broader representative mark. It is also worth noting that respondents regard the combining of the oral and written marks as being a
"fairer" reflection of the students' knowledge and that it "ensures fairness" in a class of diverse abilities.
There was, however, no consensus on the division of the oral/written marks among those who combine the two to make up a final mark for the student. The weighting given to the two modes depended on past practice in the department or at the institution, or on not wanting to "prejudice students with poor presentation skills". This then begs the question: what about the bias against the student with poor writing skills? Respondents were emphatic that written assessments are more traditionally accepted. My response to their dilemma regarding presentation skills is that if the weighting is based on, or heavily based on,
the students' presentation skills, then the assessment criteria must clearly indicate that this is indeed the case. Other respondents said that "students are rewarded in proportion to the amount of time and effort spent on a project or piece of work by giving the written component a higher weighting"; this implies an assumption that the written work demanded more effort and time, and it ignores the fact that the student has to study and prepare intensively for the oral assessment.
Written-Only Assessments
The fact that written assessments are more traditionally accepted does not mean that the tradition has to be honoured from generation to generation. Changes in the demographics of an institution must mean that teaching, learning and assessment practices keep pace; otherwise, the education being provided will not have the desired effect or results. The increase in the number of African students entering tertiary institutions most likely means that the majority of the students are not mother-tongue speakers of English, yet English continues to be the medium of instruction at many tertiary institutions in South Africa. This means that teaching and learning are taking place in a language that is foreign to the student. Mangena (2002: 14) says that "research by universities has shown that language is one of the greatest barriers to success" for African students. The non-mother-tongue speaker of English has to cope with the lecturers' language (pronunciation, accent, expressions, examples and humour);
the textbook and notes in English, and the assessment (which is also in English). Surely, this needs to be addressed and the options examined. Teaching and testing in the students' mother tongue would immediately pose a few questions: what does a lecturer do in a multilingual classroom, where there are many different mother tongues? What would be the medium of instruction? If an indigenous language is chosen, what about the students who do not speak the chosen indigenous language?
Using English would then prove to be more viable, and, as mentioned earlier, students prefer to keep English as the medium of teaching and testing (see Pretorius 2001b: 19). Since the language issue cannot be immediately resolved without repercussions in the typical South African classroom, one needs to re-examine the methods of teaching and testing in the present context. Specific strategies need to be employed in teaching and assessing through English in a multicultural and multilingual context.
Respondents also indicated that "we are still doing the work of the schools in the first and second year" because "students cannot express themselves adequately in English" (see Ntenza 2004). This is supported by Tshabalala, who attributes the dismal matric results in the country [South Africa] to "poorly presented textbooks and poorly drafted examination papers" (Pillay 2002: 1). Surely the tertiary institutions and the government need to address this situation urgently. Concrete measures have to be put into place to equip students to succeed in education. One cannot expect the educators at tertiary level to "do the work" of the schools. Assessors have to realise that the present written-only system of assessment is not yielding the desired results and that alternative methods have to be investigated.
Oral Assessments
A total of 79.1% of the educators agreed that misunderstandings due to language are minimised and that both the examiner and the candidate benefit from the interaction during oral assessments. Contrary to the misconception that orals are unfair to the student because they get to know the content of the test, respondents indicated that from their experience with oral assessments, it allows for "assessing individual growth and variation with the same concept". It also "keeps the examiner stimulated and students from knowing the content of the test". There was general agreement that there is a need to increase oral assessments because
"students need to build communication skills" and "they need to practice skills which are important in industry". After all, as stated earlier, an improvement in communication skills in the medium of instruction can only lead to improved performance.
Those who do not use oral assessments said, "oral assessment is not in the curriculum", "the present system does not allow it", "it is not in the institution's policy" or "it is an area not explored". Some said that they "would like to try it", but it would mean "too much of red-tape" in terms of changing policy. Making changes to the system of assessment at any level of education is no mean feat. Assessment does, after all, affect the job and career opportunities, working life and salary-earning potential of the individuals within a system of education. Exit levels are also affected, and this calls into play the Quality Assurance policies of the institution. So, rather than take on the muscle of the institution, lecturers generally follow the trodden path in terms of assessment, because at least they know that what they are doing has been sanctioned by all the stakeholders and that they do not have to fill in countless