CHAPTER III RESEARCH METHOD
C. Research Instruments
Table 3.2. Sample
No | Name of School | Number of Classes | Second-Grade Population | Percentage (%) | Sample (students)
1 | MAN Kota Palangka Raya | 9 classes | 315 | 14% | 45
2 | SMAS Muhammadiyah 1 Palangka Raya | 5 classes | 175 | 14% | 25
3 | SMKN 1 Palangka Raya | 14 classes | 595 | 14% | 70
4 | SMAN 1 Palangka Raya | 14 classes | 595 | 14% | 70
5 | MA Muslimat NU Palangka Raya | 2 classes | 70 | 14% | 10
Total | | 44 classes | 1,750 | | 220
Sandra Lee McKay (2006, p. 35) states that there are two types of questions: open-ended and close-ended. For this research, the writer used close-ended questions because they allow for greater uniformity of responses and are easy to answer, code, and analyze.
One of the most popular formats of close-ended questions is the Likert-scale question, in which students or teachers are asked to select one of several categories by circling or checking their response. Likert scaling is a bipolar scaling method, measuring either a positive or a negative response to a statement (Dörnyei, 2010, p. 21). The Likert scale is a psychometric scale commonly used in research that employs questionnaires. For the other data characteristics, the writer used the Likert scale, and an interval scale was used for coding the questions: each response was given a number, for example strongly disagree = 1, disagree = 2, uncertain = 3, agree = 4, and strongly agree = 5.
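The coding scheme described above can be sketched as a simple mapping. This is a minimal illustration only: the response labels and codes follow the text, while the sample answers are hypothetical.

```python
# Coding scheme from the text: each Likert response label maps to an interval code.
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "uncertain": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_responses(responses):
    """Convert a student's checked response labels into numeric codes for analysis."""
    return [LIKERT_CODES[r] for r in responses]

# Hypothetical answers from one student:
print(code_responses(["agree", "strongly agree", "uncertain"]))  # → [4, 5, 3]
```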
b. Research Instrument Try-Out
The try-out is very important because it tells the researcher whether the questionnaire is valid as an instrument; it is also used to analyze and measure the instrument. Before the questionnaire was administered to the real sample, the researcher first conducted a try-out on a different sample: 10 eleventh-grade students of SMAS Muhammadiyah 1 Palangka Raya. The questionnaire consisted of 20 items. In this questionnaire, students answered by giving a check mark in a table that had been given coding: strongly disagree is coded 1, disagree 2, uncertain 3, agree 4, and strongly agree 5.
The procedures of the try out are as follows:
a) The writer prepared the instruments.
b) The writer gave the try out to the students.
c) The writer gave scores to the students' answers.
d) The writer calculated the results of the try out.
e) The writer analyzed the obtained data to know the instrument validity and instrument reliability by using SPSS 24.
Table 3.3. Questionnaire item specification
Indicator | Item specification | Item number(s)
Teacher talk time | Teacher talk time | 1, 2, 3
Feature of teacher talk | Fluency | 4
 | Repetition | 5, 6
 | Quality | 7
 | Quantity | 8
 | Giving instruction | 9, 10, 11
 | Teacher question | 12, 13
 | Motivating | 14
 | Raises student awareness | 15
1. Research Instrument Validity
According to Ranjit Kumar, in terms of measurement procedures, validity is the ability of an instrument to measure what it is designed to measure; he follows Smith, who states, 'Validity is defined as the degree to which the researcher has measured what he has set out to measure' (Kumar, 2011, p. 166).
According to Kerlinger, 'The commonest definition of validity is epitomized by the question: Are we measuring what we think we are measuring?' Babbie writes, 'Validity refers to the extent to which an empirical measure adequately reflects the real meaning of the concept under consideration' (Kumar, 2011, p. 167).
According to Arikunto (2010, p. 170), there are three kinds of validity: content validity (with respect to the content and format of the instrument), construct validity (the extent to which an instrument measures the concepts of the theory on which it is based), and empirical validity (the relationship between scores and a criterion). The content validity and construct validity of this research were established by consulting the advisor, while for the empirical validity the researcher used the Karl Pearson product-moment correlation formula to test the validity of the instrument, that is:

r_xy = (N ΣXY − ΣX ΣY) / √[(N ΣX² − (ΣX)²)(N ΣY² − (ΣY)²)]
Where:
r_xy = the product-moment correlation coefficient ("r")
N = number of samples
ΣXY = sum of the products of each X score and Y score
ΣX = sum of all X scores
ΣY = sum of all Y scores
ΣX², ΣY² = sums of the squared X and Y scores
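The product-moment computation and the per-item validity decision can be sketched in Python. This is an illustrative sketch only: the student responses below are hypothetical, and the actual analysis was carried out in SPSS 24; 0.632 is the r-table value used in Table 3.4.

```python
from math import sqrt

def pearson_r(x, y):
    """Karl Pearson product-moment correlation, raw-score form:
    r = (N*SumXY - SumX*SumY) / sqrt((N*SumX^2 - (SumX)^2)(N*SumY^2 - (SumY)^2))."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    return (n * sxy - sx * sy) / sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))

# Hypothetical Likert codes: rows = 10 try-out students, columns = 3 items.
scores = [
    [5, 4, 5], [4, 4, 4], [3, 2, 3], [5, 5, 5], [2, 2, 1],
    [4, 3, 4], [1, 2, 2], [3, 3, 3], [5, 4, 4], [2, 1, 2],
]
R_TABLE = 0.632  # critical r for N = 10, as used in Table 3.4

for item in range(3):
    item_scores = [row[item] for row in scores]
    rest_totals = [sum(row) - row[item] for row in scores]  # corrected item-total
    r = pearson_r(item_scores, rest_totals)
    print(f"item {item + 1}: r = {r:.3f}, {'valid' if r > R_TABLE else 'invalid'}")
```

The "corrected" item-total correlation excludes the item itself from the total, matching the column heading in Table 3.4.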
The criterion for a question to show a significant correlation with the total score is that its computed r value is higher than the r-table value; such an item is considered valid. The rule is: r_value > r_table = valid; r_value < r_table = invalid.
Table 3.4. Result of Test Validity
Item | Corrected Item-Total Correlation (r value) | r table | Criteria
1 | 0.907932 | 0.632 | Valid
2 | 0.744433 | 0.632 | Valid
3 | 0.839973 | 0.632 | Valid
4 | 0.907932 | 0.632 | Valid
5 | -0.031852 | 0.632 | Invalid
6 | 0.381266 | 0.632 | Invalid
7 | 0.839973 | 0.632 | Valid
8 | 0.907932 | 0.632 | Valid
9 | 0.757296 | 0.632 | Valid
10 | 0.423041 | 0.632 | Invalid
11 | 0.757296 | 0.632 | Valid
12 | 0.907932 | 0.632 | Valid
13 | 0.943445 | 0.632 | Valid
14 | 0.818919 | 0.632 | Valid
15 | 0.744433 | 0.632 | Valid
16 | -0.149566 | 0.632 | Invalid
17 | 0.943445 | 0.632 | Valid
18 | 0.907932 | 0.632 | Valid
19 | 0.757296 | 0.632 | Valid
20 | 0.284841 | 0.632 | Invalid
2. Research Instrument Reliability
Reliability refers to the consistency of the scores resulting from the assessment (Latief, 2014, p. 213). Consistency is an important indicator of reliability: if an assessment result (or a set of test scores) is consistent from one assessment to another, then the assessment result (or the test scores) has high reliability.
To measure the reliability of the instrument, the researcher used the Alpha formula, because the instrument items are scored on a scale. The Alpha formula is as follows:

r_11 = (k / (k − 1)) × (1 − Σσ_b² / σ_t²)

Where:
r_11 = instrument reliability
k = number of items
1 = a constant
Σσ_b² = sum of the item variances
σ_t² = total variance
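As a cross-check of the hand calculation, the Alpha formula can be computed directly. This is a minimal sketch with hypothetical data, not the SPSS 24 procedure the researcher actually used.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha via r11 = (k / (k - 1)) * (1 - sum(item variances) / total variance).
    Uses population variance (divide by N), as in the classical hand calculation."""
    k = len(scores[0])  # number of items

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var([row[i] for row in scores]) for i in range(k))
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical Likert codes: rows = 10 try-out students, columns = 4 items.
data = [
    [5, 4, 5, 4], [4, 4, 4, 5], [3, 2, 3, 3], [5, 5, 5, 4], [2, 2, 1, 2],
    [4, 3, 4, 4], [1, 2, 2, 1], [3, 3, 3, 2], [5, 4, 4, 5], [2, 1, 2, 2],
]
print(f"r11 = {cronbach_alpha(data):.3f}")
```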
Furthermore, the calculated r_11 value is interpreted against a guideline table for the correlation coefficient. To judge the level of the reliability coefficient, the following guidelines are used:
Value of r | Interpretation
0.80 ≤ r_xx < 1.00 | very high
0.60 ≤ r_xx < 0.80 | high
0.40 ≤ r_xx < 0.60 | fairly high
0.20 ≤ r_xx < 0.40 | low
0.00 ≤ r_xx < 0.20 | very low (not valid)
(Slameto, 2001, p. 215)
Table 3.5. Case Processing Summary
Cases | N | %
Valid | 10 | 100.0
Excluded | 0 | .0
Total | 10 | 100.0
As can be seen from Table 3.5, 10 students rated the statements in the questionnaire, and all of them were included in the reliability analysis.
Table 3.6. Reliability Statistics
Cronbach's Alpha | N of Items
.775 | 16
The Cronbach's Alpha value is shown in the Reliability Statistics table. The value of .775 suggests high internal-consistency reliability for the scale.