
Repeated Exposure to Vocabulary Websites and Academic Vocabulary Retention by University Students in Japan


Academic year: 2023





Volume 1, Issue 1 (June 2022), Article 6

Repeated Exposure to Vocabulary Websites and Academic Vocabulary Retention by University Students in Japan

Josephine Mirador
Universiti Teknologi Brunei, Brunei Darussalam, [email protected]

Timothy H. Ellsworth
Kansai Gaidai University, Japan, [email protected]

Follow this and additional works at: https://animorepository.dlsu.edu.ph/jeal

Recommended Citation

Mirador, Josephine and Ellsworth, Timothy H. (2022) "Repeated Exposure to Vocabulary Websites and Academic Vocabulary Retention by University Students in Japan," Journal of English and Applied Linguistics: Vol. 1: Iss. 1, Article 6.

DOI: https://doi.org/10.59588/2961-3094.1005

Available at: https://animorepository.dlsu.edu.ph/jeal/vol1/iss1/6

This Article is brought to you for free and open access by the DLSU Publications at Animo Repository. It has been accepted for inclusion in Journal of English and Applied Linguistics by an authorized editor of Animo Repository.


RESEARCH ARTICLE

Repeated Exposure to Vocabulary Websites and Academic Vocabulary Retention by University Students in Japan

Josephine Mirador1* and Timothy Ellsworth2

1Universiti Teknologi Brunei, Brunei Darussalam
2Kansai Gaidai University, Japan
*[email protected]

Copyright © 2022 by De La Salle University

Abstract: Recent advances in technology-based language learning have seen an increase in the number of websites designed to help students acquire target vocabulary. Although most of these websites are fun and get students “hooked into the learning process,” the question that needs addressing is whether their use actually results in increased and long-term vocabulary improvement for learners. This paper reports the findings of a four-phase investigation into whether two vocabulary websites improved the post-practice vocabulary test scores of students at a foreign studies university in Japan; the procedure and findings are presented alongside each of the four phases the research evolved into. The investigation mainly adopted statistical tools to determine whether students' performance on consecutive post-practice tests actually improved. The findings revealed moderate gains in scores only in the initial phases for the groups. This research is relevant in that it adds to the body of literature on the efficacy of language learning websites in vocabulary acquisition and provides empirical data supporting the use of such websites in class. However, the results also challenge those of previous studies because the gains found were not fully sustained in most of the cases examined.

Keywords: EAP, academic vocabulary, vocabulary websites, Japanese learners of English

Investigations into the use of specific computer-based techniques for vocabulary acquisition and retention have yielded positive results (Abraham, 2007; Anderson & Balajthy, 2007; Groot, 2000; Kim & Gilman, 2008; Labbo et al., 2007; Silverman & Hines, 2009; among others). Some involved specific techniques, such as how a CALL or online dictionary could impact vocabulary retention (Fageeh, 2014), or investigated the use of online vocabulary games as a tool for teaching and learning English vocabulary. Others investigated the use of hyperlinked or multimedia glosses and how they assist vocabulary learning (Al-Seghayer, 2001; De Ridder, 2002; Plass et al., 1998). Laufer and Hill (2000, p. 4) claimed that evidence exists that vocabulary uptake is reinforced if learners are exposed to the target vocabulary on


numerous occasions despite the lack of certainty on how many exposures are needed. The efficacy of the use of CALL does merit attention, especially in regard to its perceived role in building a sizable bank of L2 vocabulary.

More recent advances in technology-based language learning saw an increase in the number of specific websites to help students acquire target vocabulary. Although most of the websites are fun and get students “hooked into the learning process,” the question that needs addressing is whether or not the use of the websites actually results in increased and long-term vocabulary improvement for learners. “Language learning websites are often used as supplementary tool in classroom teaching and learning, or as a method to acquire language competence independently” (Pituch & Lee, 2006, as cited in Shen et al., 2014, p. 159). Evidence-based research on the efficacy of specific vocabulary websites in improving student acquisition is needed; surprisingly, studies that provide evidence on whether such websites actually work are not many.

Oberg (2011) compared vocabulary acquisition and retention using a CALL interface and a picture-cards approach and found no significant differences in the post-treatment data for two classes of Japanese students, though there was certainly a higher preference for the CALL method among students, something we think is expected. Averianova (2015), however, investigated the use of a web-based vocabulary learning tool called “Word Engine” and found that it did increase the vocabulary of Japanese students of English in three cohorts. The results were statistically significant, as revealed in the students' TOEIC scores, which increased by a mean of 55 points in 2010 among the second-year cohort.

The use of web engines was compared with rote learning and memorization, which apparently yielded less encouraging results. These results are significant in that they offer proof of the efficacy of using a specific web-based vocabulary tool. However, the study does not detail how long, or how many times, the students were asked to use the tool for practice (which our study considers an important variable for judging the efficacy of such websites).

Yip and Kwan (2006) found similarly encouraging results, though their research focused on two websites whose main feature is the use of online games for teaching vocabulary. They found that the experimental group (which was given the online games as treatment) scored significantly higher on the post-test than the control group (which received conventional activity-based lessons). The participants were 100 first-year engineering students at a university in Hong Kong. The researchers conclude that “learning with the vocabulary websites which included games are more effective than activity-based learning” (Yip & Kwan, 2006, p. 242). Our study focuses on the use of two websites and provides empirical data on how students scored on tests of the same words they studied via the websites. Further, our study evolved through four phases and compared the performance of different groups according to whether or not they received actual in-class practice sessions with the two vocabulary websites.

Researchers have stressed that “the effectiveness of … computer-based technology should ... be explored in greater depth to see to what extent it facilitates vocabulary acquisition” (Lafford et al., 2003, p. 149).

The current investigation takes up this task, exploring whether consistent use of two specific websites facilitates acquisition and retention. If the use of such websites does work, teachers can maximize vocabulary support by repeatedly exposing learners to them, and they can be confident that classroom time spent on this technology is not wasted. Rather, students can be trained to learn on their own and achieve autonomy in learning a specific set of vocabulary.

Egbert et al. (2011, p. 11) stressed that “clearly the fact that our students are as different from each other as their learning contexts, therefore CALL educators must think about the questions that they ask about effectiveness before future research can answer with reliability.” Although we agree with this, at a practical level most teachers deal with whole, multi-level groups, not individual students working in highly individualized programs. There is indeed a need to ask whether the time invested in such resources translates into measurable success, so that we can secure sustained support from administrators—not only funds to keep CALL centers working but also class time substituted for actual class-based language input in exchange for getting students to spend more time on online vocabulary websites.


Accountability hence becomes an issue, given the investment that universities and language centers make in software and packages.

“English language learning (ELL) websites serve as platforms for language learners to acquire and practice their language knowledge, particularly for non-native English speakers” (Vogel, 2001, as cited in Shen, Yuan, & Ewing, 2015, p. 157). Although websites proliferate claiming to provide learners with as much improvement in vocabulary acquisition as possible, this research investigates the use of two specific websites that we believe are representative of most. We refer to them here as Websites M and V to protect their interests. According to Kukulska-Hulme and Shield (2004), language learning websites should be designed according to the principle of usability (i.e., they should be easy to use and engaging for their users).

Method

Two vocabulary websites were used in this investigation. To protect the interests of the website developers, the websites are anonymously referred to as Website M and Website V. Website M uses memes, or images, to aid student retention of target words. Website M has features such as matching words to definitions, spelling for accuracy, listening, and the ability to practice with cloze-style sentences.

Another important feature is its user-created memes, which are designed to help others better remember or understand a word. They can feature pictures, sample sentences, or explanations. There are existing banks of words uploaded by other users, but teachers can also input words, parts of speech, and definitions, record their own voice, and write their own cloze-style sentences. Website V focuses on a series of repetitive prompts to get students to remember the meanings of specific words. The activities consist of matching-type questions, hints provided by eliminating wrong answers, and showing the target word in context (cloze).

Additionally, the two websites are customizable (i.e., teachers can create specific lists of target words students need to learn before a lesson and get students to do practice sessions on such lists).

Aside from determining whether the use of the websites improves vocabulary acquisition and retention, this investigation sought to determine whether they can be used as a scaffold to a fixed set of strategies for teaching academic vocabulary. The term “strategies” is used here to refer to “sequences of teaching events and teacher actions which make explicit the steps that enable a learner to achieve an outcome” (Baker et al., 1998, p. 4). This article reports on our findings from four phases of an investigation that attempted to determine whether two vocabulary websites worked to increase our students' academic vocabulary. The procedure and findings are presented alongside each of the four phases this research evolved into.

Phase 1

This phase of the research investigated the relative effectiveness of using vocabulary Website M for the acquisition of vocabulary. The participants were two separate classes of first-year Japanese university students on an academic skills course. For each unit from the required textbook, both groups followed similar successive methods of vocabulary input and practice, including work with word forms, definitions, and sentence writing. However, one group reviewed vocabulary in the computer lab with Website M in the sixth session, while the control group practiced using more common in-class activities such as gap-fill/cloze exercises, textbook exercises, or spelling tests.

Participants

Two groups (Classes A and B) of university students (24 and 25 students, respectively) studying EAP at a foreign studies university in Japan participated in Phase 1 of the research. The majority were first-year students who had spent the previous semester together. Groups A and B were chosen because they had shared the same teacher the previous semester and would therefore be more likely to be starting at the same point. Of the two groups, A was the experimental group and B the control group. The groups used the textbook Q: Skills for Success Reading and Writing 2, specifically Units 6–10. A typical unit took about three weeks to complete, including an end-of-unit test that covered the target vocabulary. The specific steps adopted for vocabulary instruction and practice for each unit were as follows:

1. Students were given a vocabulary log for the unit they were to study. To complete the vocabulary log, students had to find each word's meaning, part of speech, and word family and write a sample sentence using the word.

2. Students completed the vocabulary log at home, getting the part of speech, the word family, the definition, and a sample sentence.

3. Students practiced vocabulary in the class by matching a definition (shown on the overhead projector) to what they had in their vocabulary logs. This was a group activity where a group member would write the correct answer on the board. Follow-through activity included listing the word families for each group (this could account for any discrepancies in what students found outside of class).

4. Students practiced vocabulary again by looking at cloze-style sentences with missing vocabulary words. This activity is similar to the definition activity, with a similar follow-up.

5. Students wrote eight to ten original sentences using the target vocabulary (they chose the words). In the next class, students shared their sentences and wrote cloze-style sentences on the board for their classmates to solve, avoiding repeated words where possible. The teacher wrote sentences for words that were not used.

6. The experimental group (Group A) went to the computer lab and used Website M to study the vocabulary words for the chapter. Group B had an in-class, activity-based review of the vocabulary words (such as practice with a cloze paragraph or speaking activities using the words). Students were given 25–30 minutes to practice the words from the particular unit.

7. Students took the unit test. Each unit test consisted of a reading section based on the readings from the unit, as well as a vocabulary section that included a cloze-style paragraph and items in which students read sentences containing the vocabulary words.

Initial Results

Table 1 shows the initial results of the investigation into the use of Website M. Scores from the vocabulary portion of five unit tests as well as a final cumulative vocabulary test suggest that students who used the online vocabulary practice were able to show a relatively small increase in their test scores as well as retain the meaning of vocabulary words longer.

The initial findings of this study suggest that using Website M once per lesson unit does not provide an immediate advantage; however, it may have a lasting effect in terms of retention. When isolating and averaging the vocabulary test scores for each unit, students in the experimental group scored, on average, about one point more than the control group. Notwithstanding, the final unit test showed a difference of barely a tenth of a point. An informal poll of students revealed that nine students regularly used the website outside of class to study at home. With the exception of the Unit 7 test, the relative trend of the scores also showed an increase, the largest being the 1.87-point difference on the Unit 9 test.

The only clear advantage that could be seen from this treatment was that the experimental group showed a higher average of vocabulary retention at the end of the semester, as indicated on the bonus vocabulary test.

Table 1
Differences in Vocabulary Scores for Experimental (A) and Control (B) Groups

Group        Avg. TOEFL  Unit 6     Unit 7     Unit 8     Unit 9     Unit 10    Final Vocab.
             Score       (20 pts)   (20 pts)   (22 pts)   (26 pts)   (25 pts)   Quiz (18 pts)
A            435         18.32      17.36      19.82      22.70      24.43      12.43
B            420         17.50      17.04      18.92      20.83      24.32       9.95
Difference    15          0.82       0.32       0.90       1.87       0.11       2.48


On the final vocabulary quiz, students in the experimental group scored on average about 2.5 points higher than the control group. Also worth noting is that three students in the experimental group received a perfect score, while none in the control group did. Although the gains were small, the overall effect of using the program was positive. Students seemed to enjoy the treatment; informal conversations with several of them revealed that they found the task engaging. A test of significance showed that the increase in scores from Units 9 and 10 was statistically significant (t = –2.5, df = 23, p = .019, two-tailed).
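The paired-samples statistic reported here (t = –2.5, df = 23) compares each student's scores on two matched tests. As a minimal pure-Python sketch of that computation, using hypothetical per-student scores rather than the study's raw data (which are not published here):

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n))."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical per-student scores on two consecutive unit tests
unit9 = [22, 24, 21, 25, 23, 20, 24, 26]
unit10 = [24, 25, 23, 26, 24, 22, 25, 26]
t, df = paired_t(unit9, unit10)
print(round(t, 2), df)  # → 5.0 7
```

In practice the resulting t and df would be compared against a t distribution (e.g., via a statistics package) to obtain the p-value.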

The gains in students' vocabulary improvement at this phase were slight and may be challenged. That said, the results were consistent with other studies on the effectiveness of online vocabulary learning sites (Ashraf et al., 2014; McLean et al., 2013). The results for Phase 1 suggest that the use of online sites has the potential to produce minimal gains in students' vocabulary acquisition.

Phase 2

For this group (Group C), taught in the second semester of 2016, students were given three rounds of practice using Website V, for 30 to 45 minutes each. Students were generally given a test on the vocabulary learned at the class meeting following each practice session. The results are plotted in a line graph (see Figure 1) showing the percentage scores of 20 students from the group. The horizontal axis corresponds to individual students, whereas the vertical axis shows their percentage scores. The lines correspond to the three tests conducted (Series 1 being the first test).

We expected scores on the last test to be higher than on the two previous tests, given the consistent exposure to the website. Surprisingly, however, the trajectory of the lines showed the following:

1. Percentage scores on the first test were highest for almost half the members of the group.
2. Percentage scores on the second test were high for only a handful of the members of the group.
3. Percentage scores on the third test were lower overall (compared to the two previous tests) but were closer to each other and less jagged, indicating that student performance as a group was converging (i.e., students found the target items similarly difficult).

One factor that may account for such results is the difficulty level of the words learned for the second and third tests compared with the vocabulary given for the first. As the vocabulary was taken from the readings students encountered during the course of the semester, the complexity of the reading materials (on which the words were based) would be expected to increase as the course progressed. That said, the use of the website for practice sessions before the tests could also determine the extent to which practice helped students retain vocabulary, albeit on an immediate basis.

Figure 1. Student Performance in Three Post-practice Session Vocabulary Tests

Phase 3

For Phase 3 of the research, Website V was used. Three groups of students were investigated regarding the effect of website practice on their performance on four vocabulary tests throughout the semester. The groups, referred to here as Groups D, E, and F, were all second-year students, believed to be similar in proficiency, and took the same course in the same semester, with 25 to 26 students in each class. In this phase, students received close to one hour of monitored in-class online practice on the target vocabulary on two occasions: Practice 1 (April) and Practice 3 (June). For Practices 2 (May) and 4 (July), students were told to visit the website outside class and practice on their own. We assumed that a monitored in-class practice session on the website would translate into a higher score than simply assigning the task as homework (without any monitored practice session).

The vocabulary tests can be described as follows: Test 1 (38 words) and Test 2 (63 words) were matching types, Test 3 (41 words) was a gap-fill, and Test 4 (27 words) targeted not just the meaning but the correct form of a word. A paired-observations test revealed the following results for the groups overall and for each specific group.

Results

Overall, the results for the three groups seem to suggest that the use of the website contributed to moderate growth in their vocabulary scores and their performance in the four tests. Table 2 provides a summary of the performance of Groups D, E, and F after practice sessions were given.

Table 2

Summary Table for Performance of Three Groups in Vocabulary Tests After In-class Online Practice Sessions

        Group          D         E         F         Overall
Pair 1  April          64.040    64.516    63.095    63.874
        May            30.080    26.963    34.395    30.563
        Correlation     0.487     0.678     0.507     0.534
        t-value        10.265    11.989     7.462    16.510
        p-value         0.000     0.000     0.000     0.000
Pair 2  June           55.880    55.278    60.109    57.145
        July           44.400    48.167    65.677    52.645
        Correlation     0.228     0.481     0.462     0.393
        t-value         2.537     1.373     1.457     1.654
        p-value         0.018     0.188     0.160     0.103
Pair 3  May            30.080    26.963    34.395    30.563
        June           55.880    54.737    60.733    57.114
        Correlation     0.419     0.522     0.422     0.465
        t-value         7.024     6.371     6.407    11.616
        p-value         0.000     0.000     0.000     0.000


Below is a discussion of the specific results of online practice sessions for the three groups.

Overall Results for Groups D, E, and F

• A paired-samples correlation test showed a moderate positive relationship between scores in April (with in-class online practice) and May (assigned practice) [r = 0.534, n = 65]. Scores in April are significantly higher than scores in May (p < .05, one-tailed test).

• There is a moderate positive relationship between scores on the June (with in-class online practice) and July (assigned practice) tests [r = 0.393, n = 65]. Scores in June are better than in July but not significantly different (p > .05).

• There is a moderate positive relationship between scores obtained in May (assigned practice) and June (with in-class online practice) [r = 0.465, n = 65]. Scores in June are significantly higher than scores in May (p < .05).
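The paired-samples correlations reported above (e.g., r = 0.534 for April and May) are Pearson correlations between the two matched score lists. The following sketch shows the computation on hypothetical percentage scores (not the study's data):

```python
import math
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two paired score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical April and May percentage scores for five students
april = [70, 62, 55, 81, 47]
may = [34, 28, 25, 40, 20]
print(round(pearson_r(april, may), 3))
```

Note that a high r does not imply similar means: the May scores above are far lower than April's yet correlate strongly, which is exactly the pattern in Table 2 (high April–May correlation alongside a large mean drop).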

Table 3 summarizes all groups in terms of the significance level of differences.

The results then provide evidence that, when an actual in-class online practice session is provided, a significant difference could be seen in two instances for all groups combined. Below, the performance of each group on the four occasions is discussed alongside data from the statistical tests adopted to measure significance.

Results for Specific Groups

For Group D, scores were higher on both vocabulary tests (April and June) that followed actual in-class practice on the target words.

• Scores in April and May have a moderate positive relationship [r = 0.487]. Scores in April are significantly higher than scores in May (p < .05).

• Scores in May and June have a moderate positive relationship [r = 0.419]. Scores in June are significantly higher than in May (p < .05).

• Scores in June are significantly higher than scores in July (p < .05) in terms of the mean.

Table 4 shows the differences for the group when practice was provided in contrast to when it was not; it presents Group D's performance on tests when given in-class online practice versus only assigned (off-class) practice.

Table 3

Level of Significance of the Improvement in Scores Across Four Vocabulary Tests for All Groups (D, E, F)

Pair       Correlation  p-value  N    Mean Difference  t-value  p-value
Apr & May  .534         .000     65    33.3108          16.51    .000
Jun & Jul  .393         .001     65     4.5              1.65    .103
May & Jun  .465         .000     65   -26.5508         -11.62    .000

Table 4

Mean Differences for the Respective Times With or Without Actual Online Practice

Paired Samples Statistics

Mean N Std. Dev. Std. Error Mean

Pair 1 April WP 64.0400 25 16.15415 3.23083

May AP 30.0800 25 16.49980 3.29996

Pair 2 June WP 55.8800 25 17.52693 3.50539

July AP 44.4000 25 18.84365 3.76873

Pair 3 May AP 30.0800 25 16.49980 3.29996

June WP 55.8800 25 17.52693 3.50539

Note. WP = with in-class online practice; AP = assigned practice.

(9)

For Group E, the results in Tables 6 and 7 show that scores were higher when actual in-class online practice was done by students (April compared to May; June compared to May). In fact, the increase in scores from May (assigned practice only) to June (in-class practice) was found to be significant.

• A moderately positive relationship was found for scores in April and May [r = 0.678]. Scores in April are significantly higher than scores in May (p < .05).

• The same was found for scores in June and July [r = 0.481]; however, scores in June and July are not significantly different (p > .05).

• Scores for May and June have a moderate positive relationship [r = 0.522]. Scores in June are significantly higher than scores in May (p < .05).

The results of a paired observations t-test are further given in the following tables.

Table 5

Significance Level Obtained for Group D Results

Pair Correlation p-value N Mean Difference t-value p-value

Apr & May .487 .014 25 33.96 10.265 .000

Jun & Jul .228 .274 25 11.48 2.537 .018

May & Jun .419 .087 25 -25.8 -7.024 .000

Table 6

Results for Group E of a Paired-Observations T-Test

Paired Samples Statistics

Mean N Std. Deviation Std. Error Mean

Pair 1 April WP 64.5158 19 15.73352 3.60952

May AP 26.9632 19 17.92277 4.11177

Pair 2 June WP 55.2778 18 21.10145 4.97366

July AP 48.1667 18 22.01938 5.19002

Pair 3 May AP 26.9632 19 17.92277 4.11177

June WP 54.7368 19 20.64203 4.73561

Table 7

Significance Level of the Paired-Observations T-Test for Group E

Pair Correlation p-value N Mean Difference t-value p-value

Apr & May .678 .001 19 37.55 11.989 .000

Jun & Jul .481 .043 19 7.11 1.373 .188

May & Jun .522 .022 19 -27.77 -6.371 .000


For Group F, as shown in Table 8, the correlation of scores between April and May is moderately positive [r = 0.507]. Scores in April, when in-class online practice was provided, are significantly higher than scores in May, when online practice was only assigned (p < .05).

• The correlation of scores between June and July is moderately positive [r = 0.462]. Scores in July are better than in June but not significantly different (p > .05).

• The correlation of scores between May and June is moderately positive [r = 0.422]. Scores in June are significantly higher than scores in May (p < .05).

The results of a paired-observations test are shown in Tables 8 and 9.

Tables 8 and 9 show comparisons for when practice was assigned in class and off class for Group F.

Again, the results in Tables 8 and 9 reveal higher scores in April and June, when actual online practice sessions were provided before the vocabulary tests. In particular, the increase in scores from May (when website use was merely assigned) to June was consistent with the results from the two other groups.

A subsequent questionnaire given to the three groups (D, E, and F) elicited the following self-reports. Of 68 respondents, 62 (90%) reported that they actually used the websites off-class, whereas seven (10%) said they did not use them for any off-class practice sessions. Despite the reported use, a survey question on frequency of use revealed that 60% (a little over half of those surveyed) said they used the websites more than twice, 24% used them at least twice, and 11% used them once. When asked whether they found the websites helpful for learning new vocabulary, 66 (98%) said they did, while only three (4%) thought they were not. Surprisingly, despite these responses, when asked whether they would use the websites again, only 53 (77%) respondents agreed, while 16 (23%) disagreed. A limitation of the survey was that the reasons some students were not interested in using the websites again were not elicited. Finally, when asked whether they perceived that the websites improved their vocabulary, 64 respondents (93%) answered in the affirmative, while five (7%) answered no.

Table 8

Paired Samples Statistics for Group F

Mean N Std. Deviation Std. Error Mean

Pair 1 April WP 63.0952 21 17.56888 3.83384

May AP 34.3952 21 17.93122 3.91291

Pair 2 June WP 60.1091 22 16.92228 3.60784

July AP 65.6773 22 17.61190 3.75487

Pair 3 May AP 34.3952 21 17.93122 3.91291

June WP 60.7333 21 17.07865 3.72687

Table 9

Level of Significance in the Differences in Percentage Scores Obtained for Group F

Pair Correlation p-value N Mean Difference t-value p-value

Apr & May .507 .019 21 28.7 7.462 .000

Jun & Jul .462 .030 22 -5.57 -1.457 .160

May & Jun .422 .057 21 -26.34 -6.407 .000


Phase 4

For Phase 4 of the research, the goal was to compare the test performance of two groups: G (experimental) and H (control). Group G was consistently given 45-minute to one-hour online practice sessions on Website M before each of five vocabulary tests throughout the semester. The decision to focus on one website throughout the semester was based on the idea that familiarity with it might be more beneficial for students than switching to another website midway. It would also be interesting to compare post-practice vocabulary test scores in both situations (one website throughout versus two websites throughout the semester) to check whether the specific website used might be the controlling variable affecting scores.

To prepare students for a gradual uptake of the target words, the number of items targeted for practice was incremental, starting with 23 for the first test and reaching 54 by the final test. Each test had a different word list (depending on the next content lesson), and hardly any word was recycled. Five tests were given to both groups: the first, second, and third were matching types; the fourth was a cloze type; and the fifth was again a matching type. Group H was not given any in-class online practice session; it received only the usual classroom input and review before the vocabulary tests. The results of the vocabulary tests answered the following questions.

• Was there an overall increase in percentage scores across all five tests for Group G (which was given in-class online practice) compared to Group H (which was not)?

The experimental group had a higher mean (49.19, compared to 42.24 for the group that did not receive the treatment); that is, Group G performed better than Group H. The difference was significant at the 5% level (t = 2.34, df = 46, p < .05). Table 10 shows the means obtained for the experimental and control groups.
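The reported between-groups statistic can be cross-checked from the summary figures alone. The sketch below is a minimal Python reconstruction, assuming the pooled-variance (equal variances) form of the independent samples t test, which matches the reported df of 46; it is an illustration, not the authors' actual analysis procedure.

```python
import math

# Summary statistics for the two groups (mean, SD, N)
m1, s1, n1 = 49.1913, 11.67974, 23  # Group G (experimental)
m2, s2, n2 = 42.2400, 8.76622, 25   # Group H (control)

# Pooled variance and standard error for the equal-variance t test
df = n1 + n2 - 2
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Independent samples t statistic
t = (m1 - m2) / se
print(df, round(t, 2))  # → 46 2.34
```

Running this recovers the reported values (df = 46, t ≈ 2.34), confirming that the difference in means is consistent with the tabled statistics.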

The next question we aimed to answer was:

• Was there a consistent and significant increase in the percentage scores of the group given the practice-session treatment (Group G, the experimental group) from the first to the fifth test?

Increases in percentage scores were seen from Test 1 to Test 2 and from Test 2 to Test 3. However, a paired samples test showed that only the increase from Test 1 to Test 2 was significant at the 5% level (t = -8.85, df = 22, p < .05); the increase from the second to the third test was not significant. On the third vocabulary test given after the in-class online practice, two students from the experimental group actually scored perfectly! Table 11 shows student performance on the post-practice tests for the experimental group.

For the control group (Group H; Table 12), which did not receive the treatment, a paired samples test showed no significant increase in percentage scores from the first to the second test or from the second to the third test. There were, however, slight increases from the first to the second and from the second to the third test, even without exposure to or use of the vocabulary website.

Table 10

Comparison of Means for Control and Experimental Groups

Group Statistics

Groups                  N   Mean     Std. Deviation  Std. Error Mean
Group G (experimental)  23  49.1913  11.67974        2.43539
Group H (control)       25  42.2400   8.76622        1.75324


Table 11

Student Performance in Five Post-Practice Vocabulary Tests (Experimental Group)

Paired Samples Test (Paired Differences; 95% Confidence Interval of the Difference)

Pair                      Mean       Std. Deviation  Std. Error Mean  Lower      Upper      t       df  Sig. (2-tailed)
Pair 1  pract1 - pract2   -36.95652  20.00790        4.17194          -45.60859  -28.30446  -8.858  22  .000
Pair 2  pract2 - pract3    -3.13043  15.38684        3.20838           -9.78420    3.52334   -.976  22  .340
Pair 3  pract3 - pract4    21.43478  18.38929        3.83443           13.48266   29.38691   5.590  22  .000
Pair 4  pract4 - pract5    12.73913  26.97679        5.62505            1.07349   24.40477   2.265  22  .034
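As a consistency check, each t value in Table 11 can be recovered from its pair's summary figures, since for a paired samples test t equals the mean difference divided by the standard error (SD of the differences over √n). A minimal Python sketch using Pair 1 (pract1 - pract2), with n = 23 students (df = 22):

```python
import math

# Pair 1 (pract1 - pract2) summary figures from Table 11
mean_diff = -36.95652  # mean of the paired differences
sd_diff = 20.00790     # SD of the paired differences
n = 23                 # number of students (df = n - 1 = 22)

# Standard error of the mean difference, then the paired t statistic
se = sd_diff / math.sqrt(n)
t = mean_diff / se
print(round(se, 5), round(t, 3))  # → 4.17194 -8.858
```

The computed standard error (4.17194) and t statistic (-8.858) match the tabled values for Pair 1; the same arithmetic reproduces the other rows.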

Table 12

Student Performance in Five Post-Practice Vocabulary Tests (Control Group)

Paired Samples Test (Paired Differences; 95% Confidence Interval of the Difference)

Pair                      Mean      Std. Deviation  Std. Error Mean  Lower      Upper     t      df  Sig. (2-tailed)
Pair 1  pract1 - pract2   -4.16000  24.37157        4.87431          -14.22009   5.90009  -.853  24  .402
Pair 2  pract2 - pract3   -3.52000  22.67730        4.53546          -12.88073   5.84073  -.776  24  .445
Pair 3  pract3 - pract4   30.88000  23.02774        4.60555           21.37462  40.38538  6.705  24  .000
Pair 4  pract4 - pract5    2.04000  25.82712        5.16542           -8.62091  12.70091   .395  24  .696

The results for the two groups thus show that Group G performed better in the post-treatment vocabulary tests only from the first to the second test. Hence, there was no sustained increase in performance despite repeated and consistent exposure to Website M, particularly from the third to the fourth test and on to the final vocabulary test. That said, the percentage scores for both groups increased from the beginning to the middle (first to third) of the test series.

Conclusion

As in most research, the claims we make here are context-specific. However, at the end of a four-phase procedure, it can be said that in-class practice sessions on targeted vocabulary through the use of vocabulary websites promote moderate increases in the test scores of students who have repeatedly used the websites to learn academic vocabulary. That seems to be the big picture. This conclusion is based on the results of Phase 1 (where slight gains were found in the final cumulative vocabulary test), Phase 3 (where student scores were higher when the treatment or intervention was conducted in class than when it was merely assigned out of class), and Phase 4 (where the group given the intervention consistently before tests scored better in the first and second post-tests).

Although the investigation found a significant difference between the control and experimental groups in three out of four phases, one also needs to consider the specific instances in which such increases occurred within each phase of the investigation.

One study on the use of L2 glossing (Bowles, 2004) found “no significant benefit found for CALL over traditional pen-and-paper glosses” overall (p. 550). But we tend to agree with Bowles (2004), who stressed that the findings “should not negate the potential benefits CALL could have on L2 pedagogy and acquisition” (p. 550). In the case of our investigation, the potential benefits were significant in some (but not all) of the specific instances in each phase. Finally, although we acknowledge the potential impact of other factors on students’ scores (e.g., the type of test, the increasing complexity posed by new sets of vocabulary, and the lack of wider contextual support for understanding each vocabulary word at the time of testing), we believe that the findings of this investigation provide an empirical basis for the value of having students work on specific websites as a way to scaffold their learning of targeted vocabulary.

It can also be said that although practice does not make perfect in this investigation, it did result in moderate gains in vocabulary learning!

With an abundance of CALL materials and websites available to assist students in their learning and teachers in their classes, it becomes important for researchers to track whether such technology indeed contributes significantly to improving learners’ vocabulary competencies. Further studies are recommended to verify the findings detailed in this investigation.

Acknowledgments

Our thanks to Lucas Dickerson, who provided technical help with the graphs; to Cora Regacho, for the statistical treatments adopted for this research; and to the KGU Kenkyuhi for the research funds provided to us.

Declaration of Conflict of Interest

There is no conflict of interest for this article.

References

Abraham, L. B. (2007). Second-language reading comprehension and vocabulary learning with multimedia. Hispania, 90(1), 98–108.

Al-Seghayer, K. (2001). The effect of multimedia annotation modes on L2 vocabulary acquisition: A comparative study. Language Learning and Technology, 5(1), 202–232.

Anderson, R. S., & Balajthy, E. (2007). Technology in literacy education: Exploring a literacy website that works: Readwritethink.org. The Reading Teacher, 61(1), 94–96.

Ashraf, H., Motlagh, F. G., & Salami, M. (2014). The impact of online games on learning English vocabulary by Iranian (low-intermediate) EFL learners. Procedia – Social and Behavioral Sciences, 98, 286–291.

Averianova, I. (2015). Vocabulary acquisition in L2: Does CALL really help? In F. Helm, L. Bradley, M. Guarda, & S. Thouësny (Eds.), Critical CALL: Proceedings of the 2015 EUROCALL conference, Padova, Italy (pp. 30–35). https://files.eric.ed.gov/fulltext/ED564162.pdf

Baker, S. K., Simmons, D. C., & Kame’enui, E. J. (1998). Vocabulary acquisition: Research bases. In D. C. Simmons & E. J. Kame’enui (Eds.), What reading research tells us about children with diverse learning needs (pp. 183–218). Erlbaum.

Bromley, K. (2007). Nine things every teacher should know about words and vocabulary instruction. Journal of Adolescent & Adult Literacy, 50(7), 528–537. https://doi.org/10.1598/jaal.50.7.2

De Ridder, I. (2002). Visible or invisible links: Does the highlighting of hyperlinks affect incidental vocabulary learning, text comprehension, and the reading process? Language Learning & Technology, 6(1), 123–146.

Egbert, J., Akasha, O., Huff, L., & Lee, H. G. (2011). Moving forward: Anecdotes and evidence guiding the next generation of CALL. International Journal of Computer- assisted Language Learning and Teaching, 1(1), 1–15.

Fageeh, A. I. (2014). Effects of using the online dictionary for etymological analysis on vocabulary development in EFL college students. Theory and Practice in Language Studies, 4(5), 883–890. https://doi.org/10.4304/tpls.4.5.883-890


Groot, P. J. M. (2000). Computer assisted second language vocabulary acquisition. Language Learning & Technology, 4(1), 60–81.

Kim, D., & Gilman, D. A. (2008). Effects of text, audio, and graphic aids in multimedia instruction for vocabulary learning. Journal of Educational Technology & Society, 11(3), 114–126.

Kukulska-Hulme, A., & Shield, L. (2004). Usability and pedagogical design: Are language learning websites special? In L. Cantoni & C. McLoughlin (Eds.), Proceedings of ED-MEDIA 2004--World Conference on Educational Multimedia, Hypermedia & Telecommunications (pp. 4235–4242). Association for the Advancement of Computing in Education (AACE). https://www.learntechlib.org/primary/p/11686/

Labbo, L. D., Love, M. S., & Ryan, T. (2007). Technology in literacy: A vocabulary flood: Making words “sticky” with computer-response activities. The Reading Teacher, 60(6), 582–588.

Lafford, B. A., Collentine, J. G., & Karp, A. S. (2003). The acquisition of lexical meaning by second language learners: An analysis of general research trends with evidence from Spanish. In B. A. Lafford & M. R. Salaberry (Eds.), Spanish second language acquisition: State of the science (pp. 130–159). Georgetown University Press.

McClean, S., Hogg, N., & Rush, T. W. (2013). Vocabulary learning through an online computerized flashcard site. The JALT CALL Journal, 9(1), 79–98.

Oberg, A. (2011). Comparison of the effectiveness of a CALL-based approach and a card-based approach to vocabulary acquisition and retention. CALICO Journal, 29(1), 118–144.

Plass, J. L., Chun, D. M., Mayer, R. E., & Leutner, D. (1998). Supporting visual and verbal learning preferences in a second-language multimedia learning environment. Journal of Educational Psychology, 90(1), 25–36. https://doi.org/10.1037//0022-0663.90.1.25

Read, J. (2004). Research in teaching vocabulary. Annual Review of Applied Linguistics, 24, 146–161. https://doi.org/10.1017/S0267190504000078

Shen, H., Yuan, Y., & Ewing, R. (2015). English learning websites and digital resources from the perspective of Chinese university EFL practitioners. ReCALL, 27(2), 156–176. https://doi.org/10.1017/S0958344014000263

Silverman, R., & Hines, S. (2009). The effects of multimedia-enhanced instruction on the vocabulary of English-language learners and non-English-language learners in pre-kindergarten through second grade. Journal of Educational Psychology, 101(2), 305–314.

Yip, F. W., & Kwan, A. C. (2006). Online vocabulary games as a tool for teaching and learning English vocabulary. Educational Media International, 43, 233–249. https://doi.org/10.1080/09523980600641445
