
THE PREPARATION, ADMINISTRATION AND SCORING

OF ENGLISH SPEAKING TESTS

AT TERTIARY EDUCATION LEVELS

A THESIS

Submitted in partial fulfillment of the requirements for the Magister Pendidikan degree in the English Education Study Program

RAYNESA NOOR EMILIASARI

1007019

ENGLISH EDUCATION STUDY PROGRAM

SCHOOL OF POSTGRADUATE STUDIES


The Preparation, Administration and Scoring of English Speaking Tests at Tertiary Education Levels

By

Raynesa Noor Emiliasari

A thesis submitted in partial fulfillment of the requirements for the Magister Pendidikan (M.Pd.) degree in the English Education Study Program, School of Postgraduate Studies, Universitas Pendidikan Indonesia

© Raynesa Noor Emiliasari Universitas Pendidikan Indonesia

November 2013

Copyright is protected by law.


APPROVAL SHEET

This thesis entitled

The Preparation, Administration and Scoring of English Speaking Tests at Tertiary Education Levels

has been approved by:

Supervisor,


The Preparation, Administration and Scoring of English Speaking Tests at Tertiary Education Levels

Raynesa Noor Emiliasari
Universitas Pendidikan Indonesia

1007019

ABSTRACT

English speaking tests at the tertiary level are intended to assess students’ performance in the speaking skill. To serve that purpose, lecturers should follow several essential steps in designing well-prepared tests. This study investigates how speaking tests are prepared, administered and scored, what lecturers focus on when testing students’ speaking skill, and how they score the tests. It employed a case study design conducted at two private higher education institutions in West Java and involved two lecturers from different institutions. The data were gathered through document analysis of the syllabi (to recheck the objectives of the tests) and the scoring rubrics, six classroom observations, semi-structured interviews with the two lecturers, and open-ended questionnaires distributed to them. The findings indicated that the lecturers had been able to prepare the speaking tests in line with test preparation guidelines and the teaching objective in the institutions’ syllabi. However, the tests involved one-way communication: the lecturers did not interview the students. The lecturers faced problems in test administration such as inadequate classroom facilities, limited time allocation and a relatively large number of students. They also neglected the validity of the tests because of their insufficient knowledge of testing principles, and their scoring criteria were vague, referring to no books or theories. In addition, the lecturers had slightly different focuses in assessing the students’ performance: both focused on fluency and pronunciation, but the first lecturer added accuracy, while the second included grammar and content. These focuses were selected for their simplicity in test administration. Lastly, it was found that the lecturers gave one final score to each student because of the complexity of scoring they faced. These findings suggest that speaking lecturers need further development in designing, administering and scoring speaking tests in order to improve test preparation and administration. Speaking lecturers should be eager to widen their knowledge and develop their scoring criteria so as to give students appropriate scores.


TABLE OF CONTENTS

APPROVAL SHEET
DECLARATION
ACKNOWLEDGMENT
ABSTRACT
TABLE OF CONTENTS

CHAPTER I INTRODUCTION
1.1 Background of the Study
1.2 Research Questions
1.3 Aims of the Study
1.4 Significance of the Study
1.5 Definition of Key Terms

CHAPTER II LITERATURE REVIEW
2.1 Definition of the Test
2.2 Types of the Test
2.3 The English Speaking Skill
2.4 The Speaking Test
2.5 Kinds of the Speaking Task
2.6 Preparing the Speaking Test
  2.6.1 Principles of Language Testing
    2.6.1.1 Practicality
    2.6.1.2 Reliability
    2.6.1.3 Validity
    2.6.1.4 Authenticity
  2.6.3 Feedback and Washback
    2.6.3.1 Feedback
    2.6.3.2 Washback
2.7 Administering the Speaking Tests
2.8 Scoring
2.9 Problems in Testing Speaking
2.10 Previous Studies

CHAPTER III METHODOLOGY
3.1 Research Design
3.2 Research Setting and Participants
3.3 Data Collecting Techniques
  3.3.1 Interview
  3.3.2 Questionnaire
  3.3.3 Observation
  3.3.4 Document Analysis
3.4 Data Analysis Techniques

CHAPTER IV FINDINGS AND DISCUSSIONS
4.1 Findings and Discussions
  4.1.1 Speaking Lecturers in Preparing the Speaking Test
    4.1.1.1 Preparing the Speaking Tests
    4.1.1.2 Administering the Speaking Tests
  4.1.2 What Speaking Lecturers Focus on while Testing the Students’ Speaking Skill
  4.1.3 The Ways Speaking Lecturers Score the Speaking Tests

CHAPTER V CONCLUSION AND RECOMMENDATION
5.1 Conclusion
5.2 Recommendation

BIBLIOGRAPHY


CHAPTER I

INTRODUCTION

This chapter presents an introduction to the study: the background of the study, the research questions, the aims of the study, the significance of the study, and the definition of key terms.

1.1 Background of the Study

This study is concerned with investigating the way English speaking tests are prepared, administered and scored at tertiary education levels.

Since the focus of EFL/ESL curricula has shifted to the communicative approach, and language testing, including in the Indonesian context, has come to emphasize the speaking skill, the assessment of speaking has become increasingly important. English speaking tests are one procedure for assessing students’ speaking skill; moreover, speaking tests can produce beneficial washback and feedback (Kitao & Kitao, 1996:2; Ur, 1996:134). Hence, there is a clear need for speaking tests that assess students’ speaking skill accurately. In fact, however, speaking tests are considered difficult and complex to construct.

In a preliminary observation at Universitas Majalengka (UNMA), speaking was offered as an independent course. One of the teaching objectives of the Speaking Course in the English Education Study Program at UNMA is to improve students’ communicative skill so that they can express their thoughts and feelings in English appropriately and confidently according to context.

To serve this objective, the speaking lecturers at UNMA applied speaking teaching techniques as well as speaking tests to assess students’ speaking performance. The test results in the scoring sheets showed remarkable scores: most of the students got high scores in the speaking course. However, a contradictory fact was found: the students’ speaking skill was not as good as their scores.


Based on preliminary interviews with the speaking lecturers, it was found that they faced several problems in testing the speaking skill. First, there are varieties of basic speaking performance, namely imitative, intensive, responsive, interactive and extensive speaking (Brown, 2004:141-142), and these varieties make it difficult for speaking lecturers to design speaking tests. Second, the several criteria of speaking, such as pronunciation, grammar, fluency, comprehension, vocabulary and appropriateness, make the speaking skill difficult to prepare for, administer and score (Suwandi & Taufiqulloh, 2009:186).

Third, a good test should meet the principles of language assessment: validity, reliability, practicality and authenticity (Brown, 2004:19-28); these principles make speaking tests still more difficult to prepare. Fourth, there are the practical problems of finding the time, the facilities and the personnel for testing the speaking skill, of designing productive and relevant speaking tasks, and of being consistent (Knight, 1992:294).

Three other challenges in testing speaking are determining the time allotment, selecting assessment activities and determining evaluation criteria (O’Malley & Pierce, 1996:58). Hingle and Linington, in Richards and Renandya (2002:354), add that, besides listening, speaking is regarded as an essential component of a diagnostic test which measures overall linguistic proficiency. Luoma (2004:1) likewise confirms that assessing speaking is challenging because many factors influence one’s impression of how well someone can speak a language. Madsen (1983:147) states the point directly: “Speaking is considered the most challenging of all language exams in its phases: preparing, administering and scoring”.

All the problems mentioned above, along with the many aspects of the speaking skill that should be assessed, influence the administration of the speaking tests.


Due to these difficulties, speaking lecturers are challenged to administer speaking tests that are carefully prepared, since well-prepared tests are better assessment instruments than poorly prepared ones. Test writers must carefully plan the content of the test and specify the purpose, the information and the procedures to be given (Norris, 2000:18). Preparing a test is like making the blueprint of a building: architects design buildings and select the materials to be used in construction to meet the buildings’ specific needs because they normally know what a building is going to be used for (Fulcher & Davidson, 2009:123).

The test writers, in this case the speaking lecturers, must know their focus while testing the students. To complete the assessment process, speaking lecturers have to score the students’ performance in order to make decisions about the acceptability of each student’s level of learning. Mastering speaking test preparation, administration and scoring makes speaking lecturers, as test writers, more confident and more rigorous in administering the speaking tests.

Since it is important to test the speaking skill, this study focuses on speaking tests at tertiary education levels in terms of their preparation, administration and scoring. The researcher considers these three phases challenging, and they should be taken into account by speaking lecturers. Many speaking lecturers feel uncomfortable with the variety of speaking skills, which makes it difficult for them to prepare speaking tests.


A previous study examined the integration of oral tasks into the Spanish University Examination, but not in terms of speaking test administration. Sak (2008) also conducted research on speaking exams at the tertiary level, but investigated only the validity and reliability of the exams. The researcher decided to focus this research on the tertiary education level because, at this level, speaking is an independent subject which requires independent assessment.

Therefore, the study attempts to describe what speaking lecturers do, and how, in preparing, administering and scoring speaking tests at tertiary education levels. The results of this research are expected to help improve speaking lecturers’ preparation, administration and scoring of the speaking tests.

1.2 Research Questions

Based on the above explanation, the study focuses on the following questions:

1. How do the speaking lecturers prepare and administer the speaking tests?

2. What do the speaking lecturers focus on while testing the students’ speaking skill?

3. In what ways do the speaking lecturers score the tests?

1.3 Aims of the Study

The aims of the study are:

1. to describe what the speaking lecturers do, and how, in preparing and administering the speaking tests;

2. to identify the aspects that the speaking lecturers focus on during the implementation of the tests;

3. to find out the ways in which the speaking lecturers score the tests.

1.4 Significance of the Study

The results of the study are expected to be significant from at least three points of view, following Creswell (2003:149).


Theoretically, this study is expected to provide information and empirical evidence to support the claim that the speaking tests are doing their job in assessing the students’ speaking skill. In addition, this study attempts to benefit lecturers and teachers in other institutions who would like to administer speaking tests; they may use its results as a model or description when they conduct or develop speaking tests.

This study can hopefully also provide information that encourages speaking lecturers to reflect on their methods and strategies for preparing and administering valid and reliable speaking tests based on the learning goals stated in the syllabus.

Practically, the results of this study may help clarify the benefits of sound procedures for preparing, administering and scoring English speaking tests, so that more speaking teachers and lecturers can apply such procedures in their daily teaching and thereby improve their speaking tests.

Professionally, this study can hopefully contribute to speaking lecturers’ understanding of the main foci in the administration of speaking tests, which must be linked to the learning goals in the syllabus. Moreover, this study is expected to benefit speaking lecturers with regard to scoring their students’ speaking tests, so that the students’ speaking skill can be assessed effectively, especially at small tertiary institutions in small regions.

Finally, it is expected that institutions will revise and develop their syllabi, especially for the speaking course, so that students can use English in daily conversation and not merely to comply with assessment.

1.5 Definition of Key Terms

a. English Speaking Tests. The English speaking tests in this study are speaking tests which are prepared, administered and scored by the speaking lecturers. Such a test is one procedure for measuring students’ ability, knowledge and performance in speaking.


c. Test Administration. After the tests are designed and prepared, they are administered at each speaking lecturer’s institution. The tests are given to second-year students at the tertiary level. In this study, during the administration, the observation focuses on the speaking skill criteria applied to the students.

d. Test Scoring. In this study, the lecturers give each student a score based on his or her performance.


CHAPTER III

METHODOLOGY

This chapter discusses the methodological approach of the study, covering the research design, the research setting and participants, the data collecting techniques and the data analysis techniques.

3.1 Research Design

This study is qualitative in nature since it focuses on how the participants experience and interact with a phenomenon at a given point in time and in a particular context (Heigham & Croker, 2009:7; Richards, 2003:10). It can be characterized as a case study because it observes and analyzes a “bounded system” (Merriam in Heigham & Croker, 2009:68), such as an individual, program, event, school or institution. In this case, it investigates English speaking tests at tertiary levels.

This study involves multiple sources of information, including questionnaires, observations, documents and interviews, in order to gather detailed information to establish the complexity of the central phenomenon (Creswell, 2008:220), in this case the preparation, administration and scoring of speaking test results.

Questionnaires and open-ended interviews were used to investigate the preparation of the speaking tests and the scoring of test results, whereas observation was carried out to explore the lecturers’ emphases when implementing the speaking tests.

3.2 Research Setting and Participants


The participants in this study were two speaking lecturers who taught the speaking course to sophomores. They could provide useful information for the study, as they were familiar with the central phenomenon, and they participated voluntarily because they thought the study would help them improve their speaking tests. Both participants were between 40 and 45 years old. Both were Sundanese, with Sundanese as their mother tongue and English as a foreign language.

3.3 Data Collecting Techniques

3.3.1 Interview

The researcher conducted the interviews individually, as one-on-one, semi-structured interviews with open-ended questions. In this way, the participants could best voice their experiences without being constrained by the researcher’s perspectives, and the format allowed them to create their own options for responding (Creswell, 2008:225).

The interviews were conducted in Indonesian. The guiding questions were largely focused on the three research questions, regarding the test preparation, test administration and scoring procedures in the speaking class. All the interviews were recorded, transcribed and translated into English by the researcher.

The researcher also backed up the data in computer files since, as Davidson says in Creswell (1998:134), researchers should always develop backup copies of computer files.

3.3.2 Questionnaire

In this study, questionnaires were used to collect data from the participants in the form of responses and comments; for this reason, the researcher used open-ended questions, or open-response items. As Creswell (2008:228) says, open-ended questions give the researcher many responses, some short and some long, to analyze.


A preliminary questionnaire was given to several lecturers who were not involved in the study.

3.3.3 Observation

The researcher conducted the observations as a complete observer, one who is not active in the setting: visiting the site, sitting quietly, and recording and taking notes without being involved in the participants’ activities (Creswell, 2008:222). The observation was carried out to determine what the speaking lecturers focus on while implementing the speaking test. The information recorded included portraits and activities of the participants and the lecturer-student interaction during the implementation of the tests. Field notes were used to record the data.

3.3.4 Document Analysis

The document analysis was carried out to examine the scoring rubrics used by the respondents and to recheck those rubrics and the final score forms the respondents produced. The scoring rubrics were requested from the respondents’ files and copied by the researcher. The institutions’ syllabi were also analyzed to recheck the teaching objective of the speaking course; the syllabi were copied from the respondents’ documents.

3.4 Data Analysis Techniques

Since the study generated a large amount of discursive information, the collected data were categorized, organized, transcribed and reduced. The first stage of analysis and interpretation dealt with the respondents’ interview transcripts and questionnaires, while the second stage dealt with the observations.

The analysis continued with coding the data: segmenting and labeling or categorizing text to form descriptions and broad themes in the data (Creswell, 2008:251). Coding was done as soon as the data had been gathered.
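To make the coding step concrete, here is a minimal sketch in Python. The interview segments, code labels and theme names below are invented for illustration; they are not the study’s actual data.

from collections import defaultdict

# Each interview segment has been labeled with a code; the codes are then
# grouped into broader themes, as described above. All values are hypothetical.
coded_segments = [
    ("We only have ninety minutes for forty students.", "time_allocation"),
    ("The room is too noisy for a speaking test.", "facilities"),
    ("I give one mark for the whole performance.", "holistic_scoring"),
]

# Mapping from codes to broader themes (hypothetical).
themes = {
    "time_allocation": "administration problems",
    "facilities": "administration problems",
    "holistic_scoring": "scoring practices",
}

# Group the coded segments by theme to form broad descriptions of the data.
by_theme = defaultdict(list)
for segment, code in coded_segments:
    by_theme[themes[code]].append(code)

for theme, codes in by_theme.items():
    print(theme, "->", sorted(set(codes)))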


The findings were reported in a narrative discussion and then interpreted by reviewing the major findings and how the research questions were answered.

To validate the findings, the researcher used triangulation to check their accuracy and credibility. The study employed three methods of collecting data: questionnaires, open-ended interviews and observations. Questionnaires and interviews were carried out to investigate how the speaking tests were prepared, how the results were scored, and what was focused on during the tests, while the observation notes were used to review the process and results of the speaking test implementation.


CHAPTER V

CONCLUSION AND RECOMMENDATION

This chapter presents conclusions and recommendations. The conclusions of the study are drawn from the data analysis discussed in Chapter Four. The recommendations provide information and guidance for conducting further research on the same issue, and are addressed to those involved in the development and implementation of speaking tests, such as teachers, lecturers and institutions.

5.1 Conclusion

This study investigated how lecturers of the English speaking course administered the speaking test, what they focused on during the implementation of the test, and how they scored the test results. Based on the data gained from the interviews, questionnaires, document analysis and classroom observations, it was concluded that, in preparing the speaking test, the lecturers had shown the ability to design the test in line with the teaching objective of the speaking course stated in the institution’s syllabus. The lecturers had also paid attention to the test format, the scoring procedure and the selection of proper materials based on that teaching objective.

In terms of kinds of speaking tests, the lecturers designed achievement tests, for several reasons. First, the objective of the tests related to classroom lessons, units, or even a total curriculum within a particular time frame, and the tests were offered after the course had covered the objectives. Second, the tests were conducted to find out which learners had accomplished the teaching objective of the course. Third, the tests were directly related to a known syllabus and attempted to examine learners’ achievement with specific reference to the objectives of a particular course. Fourth, the tests’ content was generally based on the course syllabus. Fifth, the tests were intended to measure the students’ progress.


The tests, however, did not involve interaction between the students and the lecturers or among the students themselves. R#1 conducted the oral presentation task, while R#2 conducted the monologue speaking task, the picture description task and the role-play task.

The lecturers did not interview the students, and there was no face-to-face communication. The students’ responses were not impromptu, since they had prepared what they would present. In addition, the communication was one-way.

Because of the limited time allotment, the lecturers did not group the students by language proficiency level, nor did they give the students different tests at different difficulty levels.

From the observations and the interviews, it could be concluded that the lecturers had been able to consider the test content (topic), test format (individual or group) and time duration based on the objective of the course.

Regarding the students’ anxiety, the lecturers showed their awareness by informing the students about the topic before the tests were conducted, so the students could prepare their presentations at home. The information the lecturers gave consisted of the test schedule, topics, time duration, scoring criteria, rules and penalties. The lecturers also considered the students’ seating arrangement to reduce anxiety.

In terms of the testing principles, the lecturers were familiar only with the practicality and validity principles; they were not very familiar with the other principles, authenticity and reliability.

The results from the questionnaires showed that the lecturers gave the students oral feedback (pronunciation correction) and scores as feedback. In terms of washback, the lecturers had been able to use their testing experience as information about the strengths and weaknesses of the test, in order to assess the students better in the future.

However, the lecturers faced a problem in providing an appropriate classroom environment while the speaking test was conducted: the institutions had not provided an adequate classroom for the students and lecturers.


In assessing the students’ performance, they only used an observation sheet to write down the students’ mistakes during the presentations. Apart from that, the lecturers administered the speaking test according to the scheduled time.

For the second research question, “What do the speaking lecturers focus on while testing the students’ speaking skill?”, the data from the questionnaires and classroom observations indicated that the lecturers had their own criteria in assessing the students’ performance.

R#1 focused on fluency, pronunciation and accuracy, while R#2 focused on fluency, pronunciation, grammar and content. The lecturers judged fluency by the amount of speech, the duration of pauses, incomplete thoughts, the speed of talking, and the hesitation and silence produced by the students.
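The fluency indicators listed above are, in principle, measurable. The following is a purely illustrative sketch: the lecturers in this study judged fluency impressionistically, and the function, pause threshold and sample timings below are hypothetical. It estimates a few of the indicators from word-level timestamps, in the spirit of the automatic speech-rate measurement of DeJong and Wempe (2007) cited in the bibliography.

from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # start time in seconds
    end: float    # end time in seconds

def fluency_indicators(words, pause_threshold=0.3):
    # Any silence between consecutive words longer than pause_threshold
    # seconds is counted as a pause (the threshold is hypothetical).
    total_time = words[-1].end - words[0].start
    pauses = [nxt.start - cur.end
              for cur, nxt in zip(words, words[1:])
              if nxt.start - cur.end > pause_threshold]
    speaking_time = total_time - sum(pauses)
    return {
        "words_per_minute": 60 * len(words) / total_time,
        "pause_ratio": sum(pauses) / total_time,  # share of time spent silent
        "mean_pause_s": sum(pauses) / len(pauses) if pauses else 0.0,
        "articulation_wpm": 60 * len(words) / speaking_time,  # excludes pauses
    }

# A short, partly hesitant answer (timings are invented)
sample = [Word("I", 0.0, 0.2), Word("think", 0.3, 0.6),
          Word("that", 1.8, 2.0), Word("tourism", 2.1, 2.7)]
print(fluency_indicators(sample))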

The lecturers judged pronunciation by the stress, rhythm, intonation, articulation and vocalization the students produced during their presentations.

R#2 focused on grammar, judging the accuracy or inaccuracy of the basic language structures the students used during their speech. R#1, by contrast, did not focus on grammar, because spoken grammar differs from written grammar, which makes it difficult to assess.

R#1 judged accuracy by the smoothness of the students’ speech. R#2 focused on content because he provided the students with the topic to be presented, and so assessed whether the content of each presentation was relevant to the topic.

The results from the interviews and document analysis indicated that the lecturers still faced difficulty in scoring the speaking test results. The mistakes the students made during the speaking test were one reference in giving scores; other references were the students’ attendance, speaking activity and attitude in the classroom during the teaching and learning process.


The lecturers converted the numerical scores into fixed scores (A-D) and then reported the scores to their institutions, which announced the fixed scores to the students.
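As a minimal sketch of this conversion: the band cutoffs below are hypothetical, since the thesis reports the A-D conversion itself but not the institutions’ actual boundaries.

def to_fixed_score(numeric):
    # Map a 0-100 numeric score to a fixed letter score (A-D).
    # The cutoffs are hypothetical, not the institutions' real boundaries.
    if not 0 <= numeric <= 100:
        raise ValueError("numeric score must be between 0 and 100")
    for cutoff, letter in [(80, "A"), (70, "B"), (60, "C"), (0, "D")]:
        if numeric >= cutoff:
            return letter

print(to_fixed_score(85))  # A
print(to_fixed_score(64))  # C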

However, there was a weakness in the scoring criteria the lecturers used. They did not assess the students’ performance on each criterion separately; instead, they gave each student a score for the performance as a whole. In other words, although the lecturers had several criteria for assessing the students’ presentations, they wrote only one numerical score in their scoring rubric, which corresponds to holistic scoring.

The lecturers also did not give any further, clear explanation of the scoring criteria. They used only general criteria, such as fluency, pronunciation, grammar, accuracy and content, without providing further explanation or sub-criteria for each criterion.

From the conclusions above, it follows that, in preparing and administering the speaking test, the speaking lecturers had been able to design and administer the test based on the learning objective of the speaking course written in the institution’s syllabus. However, there were still weaknesses in the implementation of the test: the speaking tests did not reflect authentic interaction between the students and the lecturers; the lecturers were still unfamiliar with several testing principles; inadequate classrooms and large classes posed problems; and the lecturers did not write further explanation of each scoring criterion.

5.2 Recommendation

In line with the topic under discussion, namely the preparation and administration of the speaking test and the way the speaking lecturers score it, and with the findings elaborated above, the following recommendations are worth considering.

Several of the practices the speaking lecturers followed in designing their speaking tests can serve as recommendations for other speaking lecturers.


In designing speaking tests, lecturers should select tasks that involve students’ interaction with the lecturer or with other students. At the institutional and regency levels, training and development should be provided to guide speaking lecturers in designing, developing and implementing interactive speaking tests.

In the implementation of the speaking tests, to secure an adequate classroom and avoid distraction, the lecturers should consult their institutions about the classroom facilities and infrastructure.

To avoid students’ feelings of unfairness and boredom, the lecturers should provide different kinds of tests based on the students’ language proficiency; it is better not to treat all students’ speaking ability as equal by giving them the same tests.

In terms of scoring criteria, it is better for the lecturers to provide a further, clear explanation of each criterion in order to avoid uncertainty in the assessment. To this end, the lecturers should be eager to enlarge their knowledge and develop their scoring criteria or scoring rubrics by reading books on scoring criteria, adopting criteria from experts, and attending training, seminars or workshops that discuss scoring criteria for assessing the speaking skill.
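This recommendation can be made concrete with an analytic rubric in which each criterion receives its own band score and the final mark is a weighted combination. The sketch below is hypothetical: the criteria mirror those the lecturers reported, but the 1-5 bands and the weights are illustrative and are not taken from the thesis.

WEIGHTS = {
    "fluency": 0.25,
    "pronunciation": 0.25,
    "grammar": 0.20,
    "accuracy": 0.15,
    "content": 0.15,
}

def analytic_score(bands):
    # Combine per-criterion band scores (1-5) into a 0-100 score.
    if set(bands) != set(WEIGHTS):
        raise ValueError("a band score is required for every criterion")
    weighted = sum(WEIGHTS[c] * bands[c] for c in WEIGHTS)  # ranges 1.0-5.0
    return round(weighted / 5 * 100, 1)

# One student's presentation, scored per criterion rather than holistically
student = {"fluency": 4, "pronunciation": 3, "grammar": 4,
           "accuracy": 3, "content": 5}
print(analytic_score(student))  # 75.0

Scoring each criterion separately in this way would also yield per-criterion feedback, addressing the weakness, noted in the conclusion, that only one numerical score was recorded.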

This study was conducted at small tertiary institutions in a small regency and within a limited time. It is therefore recommended that other researchers interested in this issue conduct research at large, well-known state universities, where the situation and condition of the students, lecturers and institutions differ from those of small private tertiary institutions.


BIBLIOGRAPHY

A.S.A.P. (Alternative Strategies for Assessing Performance). 1996. Fairfax County Public Schools Publication.

Alderson, J. C. & Wall, D. 1993. ‘Does Washback Exist?’. Applied Linguistics (14), pp. 116-129. In Pan, Yi-Ching. 2009. A Review of Washback and Its Pedagogical Implications. VNU Journal of Science, Foreign Languages 25, pp. 257-263. The University of Melbourne, Australia.

Alderson, J. C. & Clapham, C. 1995. Language Test Construction and Evaluation. Cambridge University Press.

Allison, D. 1999. Language Testing and Evaluation: An Introductory Course. Singapore University Press.

Ardovino, J. et al. 2000. Multiple Measures: Accurate Ways to Assess Student Achievement. Corwin Press.

Bachman, L.F. 1990. Fundamental Considerations in Language Testing. Oxford, U.K.: Oxford University Press.

Bachman, L. F. & Palmer, A. S. 1996. Language Testing in Practice: Designing and Developing Useful Language Tests. Oxford University Press.

Baker, C. & Jones, S. P. 1998. Encyclopedia of Bilingualism and Bilingual Education. Clevedon, Avon, UK: Multilingual Matters.

Bernstein, B. 2000. Pedagogy, Symbolic Control and Identity: Theory, Research, Critique. (Revised Edition). Lanham: Rowman & Littlefield Publishers. In Brown, H. D. 2004. Language Assessment Principles and Classroom Practices. Longman.

Black, P. & Wiliam, D. 1998. Inside the Black Box: Raising Standards through Classroom Assessment. In Fleming, M. P. & Stevens, D. 2010. English Teaching in the Secondary School: Linking Theory and Practice. Taylor & Francis.

Bowen, D. & Bowen, M. 2009. Interpreting: Yesterday, Today, and Tomorrow. John Benjamins Publishing Company.

Brown, H. D. 2001. Teaching by Principles: An Interactive Approach to Language Pedagogy; Second Edition. Longman.

Brown, H. D. 2004. Language Assessment Principles and Classroom Practices. Longman.

Buck, G. 1988. Testing Listening Comprehension in Japanese University Entrance Examinations. JALT Journal 10 (1), pp. 32-42.


CALLA Handbook. 2001. In the Document of Public School of North Carolina. p. 105

Canale, M. 1984. Consideration in the Testing of Reading and Listening Proficiency. Foreign Language Annals, 17, 349-357. In Brown, H. D. 2004. Language Assessment Principles and Classroom Practices. Longman.

Canale, M. & Swain, M. 1980. Theoretical Bases of Communicative Approaches to Second Language Teaching and Testing. Applied Linguistics 1. p.1-47. In Mendelsohn, D.J. 1989. Testing Should Reflect Teaching. Tesl Canada Journal Revue Tesl Du Canada. Vol. 7. No. I. November 1989.

Carroll, B. J. 1985. Second Language Performance Testing for University and Professional Contexts. In Hauptman, 1985, pp. 89-110. In Mendelsohn, D.J. 1989. Testing Should Reflect Teaching. TESL Canada Journal / Revue TESL du Canada, Vol. 7, No. 1, November 1989.

Cascallar, E. & Bernstein, J. 2000. “The Assessment of Second Language Learning as a Function of Native Language Difficulty Measured by an Automated Spoken English Test”. Paper presented at the American Association for Applied Linguistics Annual Meeting, Vancouver, Canada, March. In Brown, H. D. 2004. Language Assessment Principles and Classroom Practices. Longman.

Cheng-Jun, W. 2006. Designing Communicative Tasks for College English Course. Retrieved from www.asian-efl-journal.com.

Church, S. M. 2005. The Principal Difference: Key Issues in School Leadership and How to Deal with Them Successfully. Pembroke Publishers Limited.

Creswell, W. J. 2003. Research Design: Qualitative, Quantitative and Mixed Methods Approaches. Second Edition. Sage Publications. p.149.

Creswell, W. J. 2008. Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research Third Edition. Pearson International Edition.

Davidson, F. 1996. Principles of Statistical Data Handling. Sage Publications. In Creswell, W. J. 2008. Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research Third Edition. Pearson International Edition.

DeJong, N.H & Wempe, T. 2007. Praat Script to Detect Syllable Nuclei and Measure Speech Rate Automatically. Behavior Research Methods. Volume 41, Number 2, pp. 385-390, DOI: 10.3758/BRM.41.2.385.

Dick, W. & Hagerty, N. 1971. Topics in Measurement: Reliability and Validity. McGraw-Hill. In Sak, Gonca. 2008. An Investigation of the Validity and Reliability of the Speaking Exam at a Turkish University. A Thesis. Middle East Technical University.

Diller, D. 2007. Making the Most of Small Groups: Differentiation for All. Stenhouse Publishers.


Ekbatani, G. 2008. Measurement and Evaluation in Post-secondary ESL. Taylor & Francis.

Fleming, M. P. & Stevens, D. 2010. English Teaching in the Secondary School: Linking Theory and Practice. Taylor & Francis.

Foot, M.C. 1999. Relaxing in Pairs. EFL Journal, 53 (1), 36-41. In Sak, Gonca. 2008. An Investigation of the Validity and Reliability of the Speaking Exam at a Turkish University. A Thesis. Middle East Technical University.

Fulcher, G. & Davidson, F. 2007. Language Testing and Assessment: An Advanced Resource Book. Taylor & Francis.

Fulcher, G. & Davidson, F. 2009. Test Architecture, Test Retrofit. Language Testing (26) 1, pp. 123-144. Downloaded from ltj.sagepub.com at Universitas Pendidikan Indonesia on July, 7, 2012.

Fulcher, G. 1996. Does Thick Description Lead to Smart Tests? A Data-Based Approach to Rating Scale Construction. Language Testing, v13 n2 pp. 208-38.

Gates, S. 1995. Exploiting Washback from Standardized Tests. In J. D. Brown & S. O. Yamashita (Eds.), Language Testing in Japan. Tokyo: Japan Association for Language Teaching.

Gower, R. et al. 1995. Teaching Practice Handbook. Oxford: Macmillan Education.

Green, D. Z. 1985. Developing Measures of Communicative Proficiency: A Test for French Immersion Students in Grades 9 and 10. In Mendelsohn, D.J. 1989. Testing Should Reflect Teaching. Tesl Canada Journal Revue Tesl Du Canada. Vol. 7. No. I. November 1989.

Gronlund, N.E. & Linn, R.L. 1990. Measurement and Evaluation in Teaching. New York: Macmillan. In Rukundo, A. & Magambo, J. 2010. Effective Test Administrations in Schools: Principles & Good Practices for Test Administrators in Uganda. African Journal of Teacher Education, Vol. 1, No. 1, pp. 166-173.

Gronlund, N. E. 1998. Assessment of Students Achievement. Sixth Edition. Boston: Allyn and Bacon. In Brown, H. D. 2004. Language Assessment Principles and Classroom Practices. Longman.

Guskey, T.R & Bailey, J.M. 2001. Developing Grading and Reporting Systems for Student Learning. Corwin Press.

Harlow, L. L. & Caminero, R. 1990. Oral Testing of Beginning Language Students at Large Universities: Is It Worth the Trouble? Foreign Language Annals, 23 (6), pp. 489-501.

Harmer J. 2001. The Practice of English Language Teaching. London: Longman.


Hayati, A. M., & Askari, E. 2008. Testing Oral Language Proficiency of University EFL Students. Asian-efl-journal. Volume 30. Professional teaching Articles. August 2008. Article 2.

Hingle, I. & Linington, V. (n.d.). English Proficiency Test: The Oral Component of a Primary School. In Richards, J. C. & Renandya, W. A. 2002. Methodology in Language Teaching: An Anthology of Current Practice. Chapter 35, pp. 354-360. Cambridge University Press.

Hughes, A. 1989. Testing for Language Teachers. Cambridge: Cambridge University Press.

Hughes, A. 1990. Testing for Language Teachers. Glasgow: Cambridge University Press. In Sak, Gonca. 2008. An Investigation of the Validity and Reliability of the Speaking Exam at a Turkish University. A Thesis. Middle East Technical University.

Hughes, A. 2003. Testing for Language Teachers. Second Edition. Cambridge Language Teaching Library.

Jones, R. L. 1977. “Testing: A Vital Connection”. In Philips, J. K. (ed.), The Language Connection, pp. 237-265. Skokie, IL: National Textbook Co. In Sook, Kim Hyun. 2003. The Types of Speaking Assessment Tasks Used by Korean Junior Secondary School English Teacher. Asian-EFL-Journal, December 2003.

Kitao, S.K & Kitao, K. 1996. Testing Speaking. Document Resume TM 025 215. No. ED 398 261. Retrieved March 7th 2012. p. 2.

Knight, B. 1992. Assessing Speaking Skills: A Workshop for Teacher Development. ELT Journal Volume 46/3 July 1992 © Oxford University Press.

Kormos, J. & Denes, M. 2004. Exploring Measures and Perceptions of Fluency in the Speech of Second Language Learners. System, 32(2), pp. 145-164. In McGregor, L. A. 2007. An Examination of Comprehensibility in a High Stakes Oral Proficiency Assessment for Prospective International Teaching Assistants. A Dissertation. Retrieved from http://repositories.lib.utexas.edu/bitstream/handle/2152/15862/mcgregor148097.pdf?sequence=2.

Kormos, J. 2006. Speech Production and Second Language Acquisition. Lawrence Erlbaum Associates. In McGregor, L. A. 2007. An Examination of Comprehensibility in a High Stakes Oral Proficiency Assessment for Prospective International Teaching Assistants. A Dissertation. Retrieved from http://repositories.lib.utexas.edu/bitstream/handle/2152/15862/mcgregor148097.pdf?sequence=2.

Kozlowska, J. S., et al. 2005. Assessing Assessment Methods – on the Reliability of Pronunciation Tests in EFL. Retrieved from http://www.phon.ucl.ac.uk/home/johnm/ptlc2005/pdf/ptlcp37.pdf.

Laborda, J.G. & Alvarez, M.F. 2011. Teachers' Interest in a Computer EFL University Entrance Examination. Article first published online: 20 OCT 2011. British Journal of Educational Technology. Volume 42, Issue 6, pages E136–E140, November 2011.

Lado, R. 1961. Language Testing: The Construction and Use of Foreign Language Tests: A Teacher’s Book. New York: McGraw-Hill Book Company.

Lee, Sujin. 2010. Current Practice of Classroom Speaking Assessment in Secondary Schools in South Korea. A Thesis. The University of Queensland: Australia.

Lewin, L & Shoemaker, B. J. 1998. Great Performances: Creating Classroom-Based Assessment Tasks. Association for Supervision and Curriculum Development (ASCD).

Luoma, S. 2004. Assessing Speaking. Cambridge University Press.

Madsen, H.S. 1983. Techniques in Testing. Oxford: Oxford University Press.

McGregor, L. A. 2007. An Examination of Comprehensibility in a High Stakes Oral Proficiency Assessment for Prospective International Teaching Assistants. A Dissertation. Retrieved from http://repositories.lib.utexas.edu/bitstream/handle/2152/15862/mcgregorl48097.pdf?sequence=2.

Mehrens, W.A., and Lehmann, I.J. 1991. Measurement and Evaluation in Education and Psychology, (4th edn) Holt, Rinehart and Winston Inc: Orlando, Fl.

Mendelsohn, D.J. 1989. Testing Should Reflect Teaching. Tesl Canada Journal Revue TESL Du Canada. Vol. 7. No. I. November 1989.

Merriam. 1988. Case Study in Education: A Qualitative Approach. San Francisco: Jossey-Bass. In Heigham, J. & Croker, R. 2009. Qualitative Research in Applied Linguistics: A Practical Introduction. Palgrave Macmillan.

Nakamura, Y. & Valens, M. 2001. Teaching and Testing Oral Communication Skills. Journal of Humanities and Natural Sciences, 3, 43-53. In Mianto, E. (n.d.). Using Rubrics to Test Students’ Performance in Speaking. Retrieved from http://www.academia.edu/2205164/USING_RUBRICS_TO_TEST_STUDENTS_PERFORMANCE_IN_SPEAKING.

Newble, D. & Cannon, R. A. 1991. A Handbook for Teachers in Universities and Colleges: A Guide to Improving Teaching Methods. Kogan Page. In Thackwray, B. 1997. Effective Evaluation of Training and Development in Higher Education. British Library Cataloguing in Publication Data.

Nitko, A. J. 2004. Educational Assessments of Students. Englewood Cliffs, NJ: Prentice Hall.

Norris, J. M. 2000. Purposeful Language Assessment. English Teaching Forum, 38 (1), pp. 18-23.

North, B. 1995. The Development of a Common Framework Scale of Descriptors of Language Proficiency Based on a Theory of Measurement. System, 23. pp. 445-65.

O’Malley, J. M. & Pierce, V. L. 1996. Authentic Assessment for English Language Learners. Addison Wesley Publishing Comp Inc.

Ory, J.C. & Ryan, K. E. 1993. Tips for Improving Testing and Grading. SAGE.

Pan, Yi-Ching. 2009. A Review of Washback and Its Pedagogical Implications. VNU Journal of Science, Foreign Languages 25 pp. 257-263. The University of Melbourne, Australia.

Pan, Yi-Chun & Pan, Yi-Ching. 2011. Conducting Speaking Tests for Learners of English as a Foreign Language. The International Journal of Educational and Psychological Assessment. January 2011, Vol. 6(2).

Perren, G. E. 1968. Testing Spoken Language: Some Unsolved Problems. In Davies, Alan (Ed.), Language Testing Symposium: a psycholinguistic approach. (p. 107-132). London: Oxford University Press.

Richards, J. C. & Lockhart, C. 1999. Reflective Teaching in Second Language Classrooms. New York: Cambridge University Press.

Richards, J. C. & Renandya, W. A. 2002. Methodology in Language Teaching: An Anthology of Current Practice. Cambridge University Press.

Richards, K. 2003. Qualitative Inquiry in TESOL. New York: Palgrave.

Riggenbach, H. 2000. Perspectives on Fluency. University of Michigan Press. In McGregor, L. A. 2007. An Examination of Comprehensibility in a High Stakes Oral Proficiency Assessment for Prospective International Teaching Assistants. A Dissertation. Retrieved from http://repositories.lib.utexas.edu/bitstream/handle/2152/15862/mcgregorl48097.pdf?sequence=2.

Riihimaki, J. 2009. Assessment of Oral Skills in Upper Secondary Schools in Finland: Teacher’s View. A Thesis. University of Jyvaskyla.

Rukundo, A. & Magambo, J. 2010. Effective Test Administrations in Schools: Principles & Good Practices for Test Administrators in Uganda. African Journal of Teacher Education, Vol. 1, No. 1, pp. 166-173.

Sak, Gonca. 2008. An Investigation of the Validity and Reliability of the Speaking Exam at a Turkish University. A Thesis. Middle East Technical University.

Saraswati. 2004. English Language Teaching: Principles & Practice. Orient Blackswan.

Scrivener, J. 1998. Learning Teaching. Oxford: Macmillan Publisher Limited.

Sohn, Ho-min. 2006. Korean Language in Culture and Society. University of Hawaii Press.

Sook, Kim Hyun. 2003. The Types of Speaking Assessment Tasks Used by Korean Junior Secondary School English Teacher. Asian-EFL-Journal. December 2003.

Stiggins, R. J. et al. 2004. Classroom Assessment for Student Learning: Doing it Right, Using it Well. Portland, or: ETS. Assessment Training Institute.

Suwandi & Taufiqulloh. 2009. Designing Speaking Test. Eksplanasi, Volume 4, Nomor 8, Edisi Oktober 2009, p. 186.

Swain, M. 1984. Teaching and Testing Communicatively. TESL Talk 15, 1 and 2. p. 7-18. In Mendelsohn, D.J. 1989. Testing Should Reflect Teaching. Tesl Canada Journal Revue Tesl Du Canada. Vol. 7. No. I. November 1989.

Tannouri, F. 2009. Developing a Scoring Rubric for a Course-specific Pair Work Telephoning Speaking Assessment Using Intuitive and Data-based Methods. International Journal of Pedagogies and Learning July 2009, Vol. 5, No. 1: 72–102.

Taylor, L. 2011. Examining Speaking: Research and Practice in Assessing Second Language Speaking. Cambridge University Press.

Thackwray, B. 1997. Effective Evaluation of Training and Development in Higher Education. British Library Cataloguing in Publication Data.

The Document of Public School of North Carolina. 2001.

Tomei, J. 1998. Oral Proficiency Test in the English Speaking Course in Hokkaido University. J. Higher Education (Hokkaido Univ). No. 3. 1998.

Townshend, M., et al. 1998. Self-Concept and Anxiety in University Students Studying Social Science Statistics Within a Co-Operative Learning Structure. Educational Psychology 18 (1), 41-53. In Brown, H. D. 2004. Language Assessment Principles and Classroom Practices. Longman.


Underhill, N. 2000. Testing Spoken Language: A Handbook of Oral Testing Techniques. Cambridge: Cambridge University Press.

Ur, P. 1988. Grammar Practice Activities: A Practical Guide for Teachers. Cambridge University Press

Ur, P. 1996. A Course in Language Teaching. Cambridge: Cambridge University Press. p. 134.

Valkonen, T. 2003. Puheviestintä taitojen arviointi. Näkökulmia lukiolaisten esiintymis- jaryhmätaitoihin. Jyväskylä: Jyväskylä University Press. In Riihimaki, J. 2009. Assessment of Oral Skills in Upper Secondary Schools in Finland: Teacher’s View. A Thesis. University of Jyvaskyla.

Weir, C. J. 1990. Communicative Language Testing. London: Prentice-Hall. p. 1.

Weir, C. J. 1990. Communicative Language Testing. London: Prentice-Hall. In Sak, Gonca. 2008. An Investigation of the Validity and Reliability of the Speaking Exam at a Turkish University. A Thesis. Middle East Technical University.

Weir, C. J. 1990. Communicative Language Testing. London: Prentice-Hall. In Taylor, L. 2011. Examining Speaking: Research and Practice in Assessing Second Language Speaking. Cambridge University Press.

Weir, C. J. 1993. Understanding and Developing Language Tests. Hemel Hempstead. UK: Prentice-Hall International. In Allison, D. 1999. Language Testing and Evaluation: An Introductory Course. Singapore University Press.

Wilson, J. A. 1997. A Program to Develop the Listening and Speaking Skills of Children in a First Grade Classroom. In C. Smith, (Ed). Skills Students Use when Speaking and Listening. Retrieved March 7, 2012, from http://eric.indiana.edu.

Referensi

Dokumen terkait

Peraturan Menteri Riset, Teknologi, dan Pendidikan Tinggi Nomor 126 Tahun 2016 tentang Penerimaan Mahasiswa Baru Program Sarjana pada Perguruan Tinggi Negeri (Berita

PENGARUH PENERAPAN MODEL PEMBELAJARAN BERBASIS PROYEK DALAM PENDIDIKAN PANCASILA DAN KEWARGANEGARAAN (PPKN) TERHADAP PENGEMBANGAN KETERAMPILAN KEWARGANEGARAAN (CIVIC SKILL) SISWA

[r]

Dari analisis kuisioner yang telah dibagikan ke peserta, akan didapatkan kesimpulan berdasarkan pada nilai skor dari skala yang ditentukan bahwa peserta pelatihan

[r]

  Keywords: Nilai­nilai Islam, Internalisasi, Akhlaq Karimah  ABSTRAK 

Pada hari ini Jum’at Tanggal Dua Puluh Sembilan Juli Tahun 2016 (29/07/2016) Kelompok Kerja (Pokja) Pengadaan Barang/ Jasa Konstruksi Bidang Bina Marga

Pemerintah, diharapkan Tugas Akhir ini bisa membantu pemerintah dalam pemecahan masalah untuk kebutuhan rumah dan tata kota yang baik untuk masyarakat, khususnya di Kecamatan