Chapter 3 Research Methodology
3.4 Research Instruments
This study involved the development of an electronic-based English proficiency test (CEFR). Both quantitative and qualitative data were collected with carefully constructed research instruments. The research instruments used to collect the data were as follows.
1) MOODLE Platform
2) Electronic-based English proficiency test (CEFR)
3) Questionnaire
4) Focus group interview
3.4.1 MOODLE Platform
In this study, MOODLE was used as the testing platform. Its assessment, tracking, reporting, and security features made it well suited to electronic-based testing. MOODLE is widely used to embed quizzes and instant feedback into teaching courses. Its assessment features support multimedia options such as audio and video, which enable this interaction and provide immediate feedback that gauges the students' level of understanding. It also offers options for timing exams, restricting access by IP address, shuffling questions, giving general and specific feedback to students, automatic grading, and more. As an internet-based program, MOODLE can deliver tests on desktops, laptops, tablets, and smartphones (see Appendix A for the MOODLE platform of the test).
3.4.2 Electronic-based English Proficiency Test (CEFR)
The test was designed to assess the level of English proficiency in reading and listening only. The test items were reviewed by test experts to ensure that they were reliable, valid, and aligned with the standard CEFR topics required for university students. The test was divided into two sections: the Listening Section and the Reading Section. Each section consisted of 5 parts with 30 test items, aimed at measuring students' comprehension of written information and understanding of spoken English. Each section was timed at 40 minutes. Table 3.2 below shows the parts of the electronic-based proficiency test.
Table 3.2 Parts of the Electronic-Based English Proficiency Test
Test Contents                              Total Questions   Duration
Section 1: Reading
  Part 1: Sentence Completion
  Part 2: Text Completion
  Part 3: Short Passages
  Part 4: Double Passages
  Part 5: Long Passages                    30 items          40 minutes
Section 2: Listening
  Part 6: Pictures
  Part 7: Questions and Responses
  Part 8: Videos
  Part 9: Dialogues
  Part 10: Talks                           30 items          40 minutes
Total                                      60 items          80 minutes (1 hr 20 min)
3.4.3 Questionnaire
In order to obtain more in-depth data on the usability of the electronic-based English proficiency test (CEFR), the researchers administered a post-survey questionnaire to all 355 students in the sample group (see Appendix B for the questionnaire items).
The questionnaire preliminarily consisted of 32 questions and was divided into five parts as follows:
1) 6 questions on the students' general information, such as age, gender, years of learning English, and prior experience with electronic-based English proficiency tests (Example: I have taken online quizzes before.)
2) 10 questions on System Use (Example: Browsing among web pages on the electronic-based English proficiency test is easy.)
3) 4 questions on Learning Impact (Example: Immediate feedback on the electronic-based English proficiency test helps the examinee to reflect on his/her learning.)
4) 6 questions on Users' Opinions on the computer-based test (CBT) and the paper-based test (PBT) (Example: The electronic-based English proficiency test lessens the examinee's anxiety better than a paper-based test does.)
5) 12 questions on Design of the Test (Example: The design of the electronic-based English proficiency test is appropriate.)
The questionnaire items above were validated and tested for reliability before being distributed to the sample group. After validation by 3 validators, who gave comments and suggestions, and statistical reliability testing, only 23 questions were found acceptable and reliable for the actual study and data collection. The details of these processes are explained later in this chapter. In this study, the questionnaire responses were collected on a 5-point Likert scale ranging from strongly disagree to strongly agree: 1 = Strongly Disagree, 2 = Disagree, 3 = Neither Agree nor Disagree, 4 = Agree, and 5 = Strongly Agree. To ensure the accuracy of the responses, the researcher-teacher explained each item to the students in Thai (their mother tongue) while they were rating the questionnaire. This helped the researchers obtain the responses without confusion or errors in marking against each point on the scale.
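The chapter does not name the reliability statistic used to screen the 32 items down to 23. Cronbach's alpha is a common choice for Likert-scale questionnaires, so the following is only an illustrative sketch of how such a reliability coefficient could be computed, not a description of the authors' actual procedure.

```python
def cronbach_alpha(item_scores: list[list[float]]) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
    where k is the number of questionnaire items.
    """
    k = len(item_scores[0])  # number of items
    if k < 2:
        raise ValueError("Alpha requires at least two items")

    def variance(values: list[float]) -> float:
        # Sample variance (n - 1 denominator)
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[j] for row in item_scores]) for j in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

A common rule of thumb treats alpha of 0.70 or higher as acceptable internal consistency; items whose removal raises alpha are candidates for exclusion.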
The statistical interpretation of the mean scores of the students' responses towards the electronic-based English proficiency test is shown in Table 3.3 below.
Table 3.3 Likert scale interpretation of mean scores with reference to the students' responses towards the electronic-based English proficiency test
Level of opinion Scores Scale for means Description
Strongly agree 5 4.51 - 5.00 Highest
Agree 4 3.51 - 4.50 High
Neither agree nor disagree 3 2.51 - 3.50 Moderate
Disagree 2 1.51 - 2.50 Low
Strongly disagree 1 1.00 - 1.50 Lowest
(Adapted from Brown, 2010)
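The mapping in Table 3.3 amounts to a simple range lookup. A minimal sketch (the function name is illustrative, not part of the study) could be:

```python
def interpret_mean(mean_score: float) -> str:
    """Map a Likert mean score (1.00-5.00) to its description per Table 3.3."""
    if not 1.0 <= mean_score <= 5.0:
        raise ValueError("Mean score must lie between 1.00 and 5.00")
    if mean_score >= 4.51:
        return "Highest"   # Strongly agree
    if mean_score >= 3.51:
        return "High"      # Agree
    if mean_score >= 2.51:
        return "Moderate"  # Neither agree nor disagree
    if mean_score >= 1.51:
        return "Low"       # Disagree
    return "Lowest"        # Strongly disagree
```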
3.4.4 Focus Group Interview
Regarding the second research objective, to assess the usability of the electronic-based English proficiency test for tertiary-level students and to obtain more in-depth data, the researchers conducted a focus group interview in which the students responded to 6 questions in a group (see Appendix C for the focus group interview questions). Before being used in the interview sessions, these 6 questions had been validated by the experts and piloted with other students who were not involved in this data collection process. To obtain authentic responses, both researchers carried out all interview sessions, asking the participating students to share, exchange, and discuss their experiences, perceptions, and attitudes regarding the electronic-based English proficiency test (CEFR).
To obtain reliable and viable sources of qualitative data, the researchers initially planned to interview 30 students from the sample group who had voluntarily agreed to participate in the interview sessions. However, due to the pandemic, all sessions had to be conducted online via a real-time meeting application, and only 27 students attended the group discussions because 3 of them experienced unexpected Internet malfunctions. Therefore, 4 groups of 6-7 freshman students each, with mixed English language abilities, took part in the focus group interview.
To avoid unnecessary bias and errors, both researchers conducted each round of the focus group interview together. One of the researchers used the students' mother tongue (Thai) to explain every question and conduct the interview, which helped the participating students understand the questions. Altogether, 4 rounds of the focus group interview were carried out, all of which were videotaped and photographed by the researchers (see Appendix D for the focus group interview photos). Afterwards, the students' responses were transcribed, translated, and analyzed using thematic analysis to obtain the qualitative data.