Chapter 3: Methods
3.2 The Research Instrument
3.2.1 Reliability of the Questionnaire
Reliability refers to the consistency of the scores obtained — how consistent they are for each individual from one administration of an instrument to another and from one set of items to another. Internal consistency and repeated measurement are the two primary forms of reliability used in this quantitative research (Fraenkel & Wallen, 2009). To assess both, two pilot studies were conducted with 74 participants from three schools, all of whom were excluded from the research sample.
The first pilot study was conducted to examine internal consistency, while the second was used for the repeated-measurement (test-retest) check (Table 5).
Table 5: Reliability Measures and Results
Reliability measure and method
1. Internal consistency reliability
   - Cronbach alpha coefficient for the 30-item questionnaire, first pilot sample (35 teachers in two schools) = 0.908
   - Split-half reliability, expressed as a correlation coefficient, for the 30-item questionnaire, first pilot sample (35 teachers in two schools) = 0.791
2. Repeated measurement
   - Test-retest reliability coefficient for the 30-item questionnaire, second pilot sample (39 teachers in one school) = 0.884
The pilot study was used to establish a solid foundation and direction for the concepts, relationships, and instruments to be used in the study itself. According to Teddlie and Tashakkori (2009), a pilot study assists in identifying possible issues with data collection protocols and in setting the stage for the actual study. This is achieved by following every procedure precisely as planned so that unanticipated problems can be identified and the research plan modified in light of the pilot outcomes (Gay et al., 2014). Such unanticipated problems include, for example, the format or ordering of survey questions, establishing a suitable length for the survey, the difficulty of answering questions, and randomizing the questions without causing complications. In addition, the pilot study plays an important role in determining and improving the reliability of the survey. All appropriate prior communications with UAEU and ADEK were completed, and the required authorizations were received to conduct the pilot study and this research.
The researcher administered the survey to 74 respondents in three schools. In these schools, the surveys were administered in group settings, but each teacher completed their own copy individually. It was important that the pilot study respondents did not consider themselves to be a captive audience; therefore, all respondents were given the clear option to excuse themselves from the activity with the researcher’s full positive support. A visual examination of the survey scores indicated that pilot study respondents were using the terms as intended.
The first administration was conducted with 35 school teachers from two randomly chosen schools in the study sample, using a questionnaire originally comprising 34 items. The Statistical Package for the Social Sciences (SPSS) 17.0 was used for the scale reliability analyses. The original questionnaire contained 34 items measuring commitment (eight for AC, six for CC, four for NC, seven for CTP, five for CTS, and four for CTSB), plus 13 items on demographic variables.
Based on scale analysis, the scale for AC was reduced from eight to seven items. A scale analysis of all eight items yielded an alpha reliability coefficient of 0.534 (Appendix 2, Tables I and II). This coefficient increased to 0.846 when the item (I think that I could not easily become attached to another school as I am to this one) was removed (Appendix 2, Tables III and IV).
The scale for CC was reduced from six to five items. A scale analysis of all six items yielded an alpha reliability coefficient of 0.678 (Appendix 2, Tables V and VI). This coefficient increased to 0.717 when the item (I am afraid of what might happen if I quit my teaching job without having another one lined up) was removed (Appendix 2, Tables VII and VIII).
The scale for NC was likewise reduced from four to three items. A scale analysis of all four items yielded an alpha reliability coefficient of 0.734 (Appendix 2, Tables IX and X). This coefficient increased to 0.795 when the item (I do not believe that a teacher must be loyal to his or her profession) was removed (Appendix 2, Tables XI and XII).
The scale for CTS was reduced from five to four items. A scale analysis of all five items yielded an alpha reliability coefficient of 0.645 (Appendix 2, Tables XIII and XIV). This coefficient increased to 0.707 when the item (I cannot wait to see my students every lesson) was removed (Appendix 2, Tables XV and XVI).
Finally, the two scales for CTP and CTSB were also submitted to a reliability test of internal consistency but were not reduced: the scale analysis of all seven CTP items yielded an alpha reliability coefficient of 0.807 (Appendix 2, Tables XVII and XVIII), and the scale analysis of all four CTSB items yielded 0.810 (Appendix 2, Tables XIX and XX).
In summary, the Cronbach alpha reliability test provided statistical evidence for reducing the number of items needed to measure teachers' commitment. In total, four items were eliminated from the six sub-scales, yet the reliability of each reduced scale increased. The result was a 30-item questionnaire, applied to the research sample to measure AC, CC, NC, CTP, CTS, and CTSB, with an overall Cronbach alpha of α = 0.908.
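The item-reduction procedure above — computing Cronbach's alpha for each scale and re-computing it with each item removed in turn, as SPSS reports under "alpha if item deleted" — can be sketched as follows. This is an illustrative implementation only, not the SPSS routine, and the score matrix shown in the test is hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def alpha_if_item_deleted(items):
    """Alpha recomputed with each item removed in turn."""
    items = np.asarray(items, dtype=float)
    return {j: cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])}
```

An item whose removal raises the coefficient, as happened with one AC, CC, NC, and CTS item each, is a candidate for deletion.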
According to Gay et al. (2014), the alpha coefficient, which ranges in value from zero to one, should exceed 0.7. The higher the coefficient, the more reliable the scale, since a high value indicates that the survey statements measure the content in a consistent way (Table 6).
Table 6: Cronbach Alpha Coefficients Before and After Deletion of Some Questionnaire Items
Domain | Items before deletion | Alpha before deletion | Deleted item | Items after deletion | Alpha after deletion
AC    | 8  | 0.534 | I think that I could not easily become attached to another school as I am to this one. | 7  | 0.846
CC    | 6  | 0.678 | I am afraid of what might happen if I quit my teaching job without having another one lined up. | 5  | 0.717
NC    | 4  | 0.734 | I do not believe that a teacher must be loyal to his or her profession. | 3  | 0.795
CTP   | 7  | 0.807 | - | 7  | 0.807
CTS   | 5  | 0.645 | I cannot wait to see my students every lesson. | 4  | 0.707
CTSB  | 4  | 0.810 | - | 4  | 0.810
Total | 34 | -     | 4 items | 30 | 0.908
Note. AC = Affective Commitment, CC = Continuance Commitment, NC = Normative Commitment, CTP = Commitment to Teaching Profession, CTS = Commitment to Teaching Students, CTSB = Commitment to Teaching Subject
The second main way of calculating internal consistency reliability is the split-half test, which was conducted on the same sample as the first pilot study. Split-half reliability works by splitting the test into two halves (for example, the odd- and even-numbered items). SPSS 17.0 was then used to calculate each respondent's score on each half-test and to check whether the two sets of scores are related to one another. If both halves measure the same thing, they are expected to be strongly related, with a correlation coefficient of over 0.8 (Muijs, 2004). The correlation coefficient for the split-half test of the 30-item questionnaire in this quantitative research was 0.791.
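As a minimal sketch of the split-half procedure just described (the actual analysis was run in SPSS; this assumes NumPy and an illustrative score matrix):

```python
import numpy as np

def split_half_correlation(items):
    """Correlate respondents' total scores on odd- vs even-numbered items."""
    items = np.asarray(items, dtype=float)
    odd_half = items[:, 0::2].sum(axis=1)    # items 1, 3, 5, ...
    even_half = items[:, 1::2].sum(axis=1)   # items 2, 4, 6, ...
    return np.corrcoef(odd_half, even_half)[0, 1]
```

Some treatments additionally apply a Spearman-Brown correction to estimate full-length reliability from the half-test correlation; the 0.791 reported above is described as the raw correlation coefficient.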
The second pilot study was conducted to further assess the reliability of the questionnaire through repeated measurement. Repeated measurement concerns the ability to measure the same attribute at different times: the same instrument should yield the same answer when used with the same respondent. The resulting statistic is also known as the reliability coefficient or correlation coefficient (Muijs, 2004).
According to Fraenkel and Wallen (2009), one of the best ways to obtain repeated measurements is the test-retest method, in which an instrument is administered to the same group more than once and the two sets of scores are correlated. Differences in motivation, energy, anxiety, and testing conditions are examples of factors that lead to some variation in test scores between administrations. One to two weeks is often recommended as an optimal interval between the two administrations.
Test-retest reliability is appropriate for determining the reliability of this research questionnaire because it is designed for attributes that are relatively stable over time (such as commitment) and not affected by repeated measurement. When using this method, the reliability coefficient indicates the degree of stability (i.e., consistency) of participants' scores over time, or the strength of the relationship between the two sets of scores obtained. It ranges from 0 to 1, takes no negative values, and should be as high as possible; values above 0.7 are usually considered to offer reasonable reliability for research purposes.
The test-retest method was carried out in one Cycle 1 government school in the Al Ain district with teachers of both genders. The survey was administered to 20 Arabic-speaking teachers and 19 English-speaking teachers, with an interval of two weeks between administrations. For this research, the test-retest reliability coefficient, calculated using SPSS 17.0, was 0.884. The test-retest sample of 39 teachers was excluded from the actual study sample.