
CHAPTER III RESEARCH METHOD

C. Research Instruments

2. Reading Speed Test

Table 6. The General Guidelines of Reliability Coefficients

Reliability      Interpretation
.90 and above    Excellent reliability; at the level of the best standardized tests.
.80 to .90       Very good for a classroom test.
.70 to .80       Good for a classroom test; in the range of most. There are probably a few items which could be improved.
.60 to .70       Somewhat low, but acceptable. The test needs to be supplemented by other measures, or there are probably some items which could be improved.
.50 to .60       Suggests a need for revision of the test, unless it is quite short (ten or fewer items). The test definitely needs to be supplemented by other measures (e.g., more tests) for grading.
Below .50        Questionable reliability. The test should not contribute heavily to the course grade, and it needs revision.

Source: Office of Educational Assessment, University of Washington (2005:4)

Based on the reliability analysis of the questionnaire try-out, the questionnaire was categorized as reliable: the reliability coefficient was 0.850 (see appendix 11 page 170). This means the questionnaire has very high reliability and can be categorized as a very good instrument for a classroom test.
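For illustration only, the bands in Table 6 can be expressed as a small lookup function. The sketch below is hypothetical and is not part of the original analysis, which obtained the coefficient from SPSS; the threshold values are taken directly from Table 6.

```python
def interpret_reliability(coefficient: float) -> str:
    """Classify a reliability coefficient using the bands in Table 6
    (Office of Educational Assessment, University of Washington)."""
    if coefficient >= 0.90:
        return "Excellent reliability; at the level of the best standardized tests."
    if coefficient >= 0.80:
        return "Very good for a classroom test."
    if coefficient >= 0.70:
        return "Good for a classroom test; a few items could probably be improved."
    if coefficient >= 0.60:
        return "Somewhat low; supplement with other measures."
    if coefficient >= 0.50:
        return "Needs revision unless the test is quite short."
    return "Questionable reliability; should not contribute heavily to the grade."

# The questionnaire's coefficient of 0.850 falls in the .80-.90 band.
print(interpret_reliability(0.850))  # Very good for a classroom test.
```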

Overall, after the try-out of the reading attitude questionnaire, the items that remained could be used in this research on the grounds of validity, and no items were deleted on the grounds of reliability. Nine items were deleted as invalid: numbers 3, 4, 5, 8, 11, 13, 15, 17, and 21. Thus, only 14 items of the reading attitude questionnaire were used in this research (see appendix 12 page 171).

The reading speed test was used to find out the students' reading speed after the treatment. O'Malley (1996:98) suggests that "it will be better to use the texts with comprehending questions to assess reading skill". Accordingly, the researcher used a multiple-choice test to find out the effect on students' reading speed in the control and experimental classes. The test was designed based on the reading indicators in the school-based curriculum for Grade XI of Senior High School at SMAN 1 Curup Selatan. The indicators of the test are as follows:

Table 7. Indicators of Reading Test of Hortatory Exposition Text

Indicator: Reading Comprehension of English Exposition Text

Sub-Indicator                              Item Numbers                             Total
Topic of the text (thesis)                 1, 5, 9, 17, 19, 25, 33                  7
Detailed information (arguments)           2, 6, 10, 11, 18, 20, 30, 34, 37, 39     10
Detailed information (recommendation)      13, 15, 16, 24, 29, 40                   6
Meaning of vocabulary (the whole text)     7, 12, 27, 32, 38                        5
References (the whole text)                8, 26, 28                                3
Inferences (the whole text)                3, 4, 14, 21, 22, 23, 31, 35, 36         9
Total                                                                               40

As shown in the indicators above, the reading speed test consisted of 40 questions. The questions were written to cover the components of students' reading comprehension skill: finding the main idea, mastering vocabulary, making inferences, locating references in the text, and scanning for detailed information. The questions also tested comprehension of the hortatory exposition text type. Five hortatory exposition texts were used in the reading speed test; they were taken from several internet sources (see appendix 21 page 190).

a. Validity of the Test

According to Bachman (1990:160), validity refers to "the degree of evidence supports the inferences that are made from the scores"; it is the inferences regarding specific uses of a test that are validated. Hughes (1989:22) also explains that a test is said to be valid if it measures accurately what it is intended to measure. It can be concluded that a test is valid when it measures what it was designed to measure in the learning and teaching process. In this research, the validity testing covered content, construct, and item validity.

Bachman (1990:160) states that a test has content validity when it faithfully reflects the syllabus or instructional program on which it is based. In this research, the test items were written following the specification of the reading test, so the instrument can be said to fulfil the content validity requirement.

However, expert judgment was still needed to analyze and evaluate the instrument. The test was therefore given to three experts in this subject: a lecturer of the English Department, a Senior High School English supervisor, and a Senior High School English teacher. They were Mr. Prihartono, M.Pd. (lecturer of the English Department at STAIN Curup), Elva Novianti, M.Pd. (supervisor of the English subject in the Educational Department of Curup), and Evi Susanti, S.Pd. (English teacher at SMAN 1 Curup Selatan). The details can be seen in appendix 20 page 187 and appendix 44 page 280.

Construct validity means that "the test is an accurate reflection of underlying theory of what it is supposed to measure" (Bachman, 1990:161); that is, the test measures all of the aspects it is theoretically supposed to cover. Item validity, in turn, holds when an item supports the total score: if the item score is parallel with the total score, the validity of the item is high. This parallelism is expressed as a correlation.

To find out the construct validity and item validity of the test, the researcher gave a try-out to other students. The try-out was conducted with Grade XI (social science) students at SMAN 1 Curup Selatan (see appendix 24 page 203). The researcher selected this class because the students' reading level was not very different from that of the sample, and they were taught by the same teacher as the sample of this research. To establish the validity of the try-out, the researcher analyzed the data using the Pearson Product Moment correlation (correlation matrix) in SPSS. The formula of the Pearson Product Moment correlation is as follows:

\[
r_{xy} = \frac{N\sum xy - (\sum x)(\sum y)}{\sqrt{\{N\sum x^{2} - (\sum x)^{2}\}\{N\sum y^{2} - (\sum y)^{2}\}}}
\]

(Arikunto, 2008:72)

Note:
r_xy : coefficient of correlation between variables x and y
N : the number of students
∑x : the sum of x scores
∑y : the sum of y scores
∑x² : the sum of squared x scores
∑y² : the sum of squared y scores
∑xy : the sum of the cross products of x and y
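As an illustration of this computation (which the thesis carried out in SPSS), the raw-score formula above can be written directly in a few lines of Python. The sketch and its sample data are hypothetical; item validity is judged by correlating each item's scores (x) with the students' total scores (y).

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson product-moment correlation using the raw-score
    formula above (Arikunto, 2008:72)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sx2 = sum(v * v for v in x)
    sy2 = sum(v * v for v in y)
    sxy = sum(a * b for a, b in zip(x, y))
    return (n * sxy - sx * sy) / math.sqrt(
        (n * sx2 - sx ** 2) * (n * sy2 - sy ** 2)
    )

# Hypothetical try-out data: one item's 0/1 scores and the total
# scores of five students (invented numbers, for illustration only).
item = [1, 0, 1, 1, 0]
total = [34, 18, 30, 28, 15]
print(round(pearson_r(item, total), 3))  # a high r means the item is valid
```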

After the try-out of the reading test was done, the results were analyzed. Some test items had to be deleted because they were invalid: numbers 3, 8, 12, 24, 26, 34, 38, and 39. In other words, eight items were deleted due to invalidity (see appendix 25 page 205).

The researcher also analyzed the items to obtain the difficulty index and the discrimination index of the test items. Item analysis was used to find out: (1) how easy or difficult an item is (index of difficulty), and (2) how well an item distinguishes the better students from the poorer ones (index of discrimination). According to Heaton (1988:165, 178), an acceptable item should have a Facility Value (F.V.), or difficulty index, between 0.3 and 0.8, and a Discrimination Index (D.I.) of more than 0.2.

Table 8. Difficulty Index and Discrimination Index of Test Items

Index of difficulty
Scale                 Item evaluation
≤ 0.30                High or difficult
> 0.30 and < 0.80     Medium or moderate
≥ 0.80                Low or easy

Index of discrimination
Scale                 Item evaluation
0.40 and up           Very good items.
0.30 to 0.39          Reasonably good, but possibly subject to improvement.
0.20 to 0.29          Marginal items, subject to improvement.
0.19 or less          Poor items, to be rejected or improved by revision.

The difficulty index formula is as follows:

\[
FV = \frac{R}{N}
\]

(Heaton, 1988:178)

Note:
FV : difficulty index (facility value)
R : the number of students who answered correctly
N : the number of students taking the test

The discrimination power formula is as follows:

\[
D.I. = \frac{U - L}{n}
\]

(Heaton, 1988:180)

Note:
D.I. : discrimination index
U : the number of correct answers in the upper group
L : the number of correct answers in the lower group
n : the number of students in one group (half of the ranked sample)

After the try-out of the reading test was done, the students' answers were analyzed. Based on the difficulty index, five items were deleted because they were either too easy or too difficult: numbers 24, 26, 34, 38, and 39 (see appendix 27 page 208). The discrimination power analysis also identified five items with poor discrimination power, namely the same numbers 24, 26, 34, 38, and 39, and these were deleted (see appendix 28 page 209).
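To make the two indices concrete, here is a hypothetical Python sketch that applies Heaton's formulas and acceptance criteria (FV between 0.3 and 0.8, D.I. above 0.2) to one item; the data are invented, and the actual analysis in this research was done from the try-out answer sheets.

```python
def item_analysis(upper: list[int], lower: list[int]) -> tuple[float, float]:
    """Facility value and discrimination index for one item
    (Heaton, 1988:178, 180). `upper` and `lower` hold 0/1 scores
    for the upper and lower halves of the ranked students."""
    n = len(upper)                               # students in one group
    fv = (sum(upper) + sum(lower)) / (2 * n)     # FV = R / N
    di = (sum(upper) - sum(lower)) / n           # D.I. = (U - L) / n
    return fv, di

# Hypothetical answers from a ten-student try-out (5 per group).
fv, di = item_analysis(upper=[1, 1, 1, 1, 0], lower=[1, 0, 0, 0, 0])
accepted = 0.3 <= fv <= 0.8 and di > 0.2
print(fv, di, accepted)  # 0.5 0.6 True, so the item is acceptable
```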

b. Reliability of the Test

According to Brown (2010:27), "a reliable test is consistent and dependable". Reliability, then, is the degree to which a test consistently measures what it should measure. Weir (1990:31) suggests that the fundamental criterion against which any language test has to be judged is its reliability: how far we can depend on the results a test produces, or, in other words, whether the results could be reproduced consistently.


In this research, the reliability of the items was analyzed using Cronbach's Alpha (Flanagan, 2012). The researcher used SPSS 17 for Windows for this analysis. The formula of Cronbach's Alpha is as follows:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum \sigma_{i}^{2}}{\sigma_{t}^{2}}\right)
\]

(Flanagan, 2012)

Note:
k : the number of items
σ_i² : the variance of item i
σ_t² : the variance of the total scores
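A minimal sketch of this computation follows, assuming a small 0/1 score matrix; the research itself ran the analysis in SPSS 17, so the function and data below are for illustration only.

```python
def cronbach_alpha(items: list[list[int]]) -> float:
    """Cronbach's Alpha from the formula above: `items` is a matrix
    with one row per test item and one column per student."""
    k = len(items)                    # number of items
    n = len(items[0])                 # number of students

    def variance(scores: list[float]) -> float:
        m = sum(scores) / len(scores)
        return sum((s - m) ** 2 for s in scores) / len(scores)

    sum_item_var = sum(variance(row) for row in items)       # sum of item variances
    totals = [sum(row[j] for row in items) for j in range(n)]  # each student's total
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Hypothetical 3-item, 4-student score matrix (invented numbers).
scores = [[1, 0, 1, 1],
          [1, 0, 1, 0],
          [1, 1, 1, 0]]
print(round(cronbach_alpha(scores), 3))  # 0.562
```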

After the try-out of the reading test was done, the results were analyzed. The test was found to be reliable: the reliability coefficient was 0.922 for thirty-nine questions (item number 38 could not be included in the reliability analysis). After the invalid items had been deleted, the reliability of the reading speed test was 0.928 for thirty-two questions (see appendix 26 page 207). This means the test was reliable and can be categorized as a very good instrument for a classroom test, so it could be used in this research.

Overall, after the try-out of the reading speed test, the remaining items could be used in this research on the grounds of validity, difficulty index, discrimination power, and reliability. Eight items were deleted: numbers 3, 8, 12, 24, 26, 34, 38, and 39 (see appendix 29 page 210). Thus, only 32 items were used in the reading speed test of this research.
