

Journal of Education for Business

ISSN: 0883-2323 (Print) 1940-3356 (Online) Journal homepage: http://www.tandfonline.com/loi/vjeb20

Asynchronous Knowledge Sharing and Conversation Interaction Impact on Grade in an Online Business Course

Kenneth David Strang

To cite this article: Kenneth David Strang (2011) Asynchronous Knowledge Sharing and Conversation Interaction Impact on Grade in an Online Business Course, Journal of Education for Business, 86:4, 223-233, DOI: 10.1080/08832323.2010.510153

To link to this article: http://dx.doi.org/10.1080/08832323.2010.510153

Published online: 21 Apr 2011.



Asynchronous Knowledge Sharing and Conversation Interaction Impact on Grade in an Online Business Course

Kenneth David Strang

APPC Market Research, Sydney, New South Wales, Australia; State University of New York, Plattsburgh, Plattsburgh, New York; and University of Atlanta, Atlanta, Georgia, USA

Student knowledge sharing and conversation theory interactions were coded from asynchronous discussion forums to measure the effect of learning-oriented utterances on academic performance. The sample was 3 terms of an online business course (in an accredited MBA program) at a U.S.-based university. Correlation, stepwise regression, and multiple least squares regression were used to create a statistically significant model with 4 interaction factors that captured 89% of adjusted variance effect on grade. Although factor multicollinearity was excessive, the model supported a hypothesis that more student interaction in all 4 discussion forums predicted a higher grade. Certain types of asynchronous forums presented negative factor coefficients, which implied that too much interaction may be counterproductive (cognitive load theory or the law of diminishing returns).

Keywords: academic performance, asynchronous conversation utterances, distance education, e-learning, online business course, knowledge sharing, student interaction

There is a need to increase student interaction during online courses, promote critical thinking, and thereby improve learning—this is encouraged by regional and international accreditation bodies (www.aacsb.edu, www.detc.org, www.efmd.org) and considered good online teaching practice (Costin & Hamilton, 2009; Grandzol, 2004). Kolb and Kolb (2005) stressed the need to improve student interaction, which they asserted is “in contrast to the ‘transmission’ model on which much current educational practice is based” (p. 198). Other researchers have advocated for more student interaction to improve e-learning (Johnson & Aragon, 2003; Strang, 2010b; Tatsis & Koleza, 2008).

Correspondence should be addressed to Kenneth David Strang, State University of New York, Plattsburgh, School of Business and Economics, Redcay Hall, 101 Broad Street, Plattsburgh, NY 12901, USA. E-mail: [email protected]

More research is also needed. Online education practitioners continue to face credibility challenges. Despite the “no significant difference” literature comparing online and classroom effectiveness (Bata-Jones & Avery, 2004; Bernard, Abrami, Lou, Borokhovski, et al., 2004; Joint, 2003; McLaren, 2004; Olson & Wisher, 2002; Russell, 2002; Stacey & Rice, 2002; Strang, 2009a; Webb, Gill, & Poe, 2005), a meta-analysis of 232 comparative studies found that although there was no average difference in achievement between residential and distance education courses, the results demonstrated an unacceptable variance (Bernard, Abrami, Lou, Borokhovski, et al., 2004). More so, a

substantial number of [distance education] applications provide better achievement results, [. . .] and have higher retention rates than their classroom counterparts [. . . on] the other hand, a substantial number of [distance education] applications are far worse than classroom instruction. (Bernard, Abrami, Lou, Borokhovski, et al., 2004, p. 406)

In particular, an online course study found students perceived a lack of interactivity between peers and faculty (Glenn, Jones, & Hoyt, 2003).

In contrast, other researchers contend self-directed learning is more effective for some adults (Brookfield, 1993; Hiemstra & Brockett, 1994). If this were true for online courses, student interaction would not be needed to improve grade. Although it is acknowledged some students prefer a self-directed learning approach (Ponton, Derrick, & Carr, 2005) or favor an individualistic reflective-process learning style (Strang, 2008, 2009a), it is argued that interaction is needed in formal learning because effective group interaction is essential in most workplaces (Chien, 2004; Ellinger, 2004; Kessels & Poell, 2004). Furthermore, online communication and peer collaboration are becoming essential graduate skills to teach for contemporary employment (Barrie & Ginns, 2007; Tsai, Hwang, Tseng, & Hwang, 2008). However, clear empirical proof is needed to show that online student interaction improves (or decreases) learning outcome, which argues against (or supports) the self-directed learning hypothesis.

Another research problem is that many studies of online courses do not assess the effectiveness of e-learning performance, or they rely solely on student self-report perceptions (Bernard, Abrami, Lou, & Borokhovski, 2004). A meta-analysis found most studies lacked systematic approaches to measure the effectiveness of e-learning interaction; the authors claimed researchers simply described dynamics observed online (Tallent-Runnels et al., 2006). They complained that “studies point to student preferences, faculty satisfaction, and student motivation as primary delivery system determinants [. . .] new research is needed that measures impact on academic success and thinking skills” (Tallent-Runnels et al., p. 117). Consequently, the impact of online student interaction on learning outcome needs to be examined and documented.

In this study I review the education psychology literature to identify best practices for examining student interaction in online courses and techniques to measure interactions. Student interactions are captured from an intact student sample over several terms of the same online business course at an accredited U.S.-based university. Quantitative statistical techniques are used to measure the impact of student interactions on grade.

LITERATURE REVIEW

First, in terms of rationale for this study, a basic tenet of adult learning is that interaction of some sort is needed, either with peers, the materials, the professor, or the learning environment itself (Schunk, 2004). There are many relevant theories in the education psychology literature that can explain e-learning, yet the scope of this study is to examine the asynchronous interaction impact on performance. Learning-focused online student interaction normally takes place in asynchronous discussion forums (Czubaj, 2000; Illeris, 2003), but it is acknowledged that productive student interaction can occur via synchronous (virtual) classrooms (Strang, 2010b)—note that the university courses in this study utilized only online discussion forums for student interactions.

E-Learning Using Knowledge Sharing and Conversational Interaction

The next task is to propose best practices interaction theories that promote e-learning in online courses. Knowledge sharing and conversation theories have been posited to improve learning through online asynchronous student interaction (Brewer & Brewer, 2010; Kienle, 2009; Mooij, 2009; Wise, Padmanabhan, & Duffy, 2009). The knowledge-sharing concept of socialization–externalization–combination–internalization (SECI) posits that team members learn by sharing tacit and explicit knowledge through dialog interactions (Nonaka & Konno, 1998). Nonaka, Toyama, and Konno (2001) argued that knowledge-sharing interaction dialog is facilitated (not hindered) by online technology. Brewer and Brewer emphasized the importance of knowledge sharing in business and as an e-learning subject.

In the SECI knowledge creation model, critical thinking occurs through knowledge articulation and peer dialog interactions (Nonaka & Teece, 2001). Peer interactions make mental models of personal best practices explicit for sharing (Strang, 2010a). The SECI model has been cited in several studies to demonstrate effective learning-focused online interaction (Konidari & Abernot, 2007; Strang, 2010b; Tatsis & Koleza, 2006).

Pask, Kallikourdis, and Scott (1975) and Duncan (1995) developed the conversation theory model to explain how student learning was influenced by verbal dialog and information technology. Pask (1975) presented conversation theory in a way that applies knowledge sharing, in that students learn relationships among concepts by teaching back. Teach-back occurs when an individual interacts with a peer (using dialog) about what he or she has learned. This is useful for tacit knowledge sharing.

Other researchers, namely Baker, Jensen, and Kolb (2002), leveraged knowledge sharing and conversation theory in experiential learning, suggesting a shared meaning can be obtained by students through the “interplay of tacit and explicit dimensions of knowledge” (p. 4). In their extension to experiential learning theory, they claimed tacit knowledge and deep understanding can be effectively learned through conversational dialogue: “we must both hear and listen” (Baker et al., p. 5). In experiential learning, the conversational space is opened to the extent that students develop the ability to perform both activities (speaking and listening interactions), using the tension between epistemological discourse and ontological recourse to drive the dialogue forward (Baker et al.).

Although conversation theory predates conventional online course delivery, the principles have been widely and recently advocated to improve e-learning in a number of empirical studies, as summarized by Clark and Mayer (2003). Finally, applying knowledge-sharing and conversation theory during online courses was found to be effective for e-learning (Crow & Smith, 2005; Kosnik, 2001; Strang, 2010c; Tan, 2003).

Measuring Knowledge Sharing and Conversation Utterances in Online Courses

Given that online student interactions that apply knowledge-sharing and conversation theories should improve e-learning, in this subsection I investigate how this improvement is measured. Wertsch (1998) discussed the principles of knowledge sharing and conversation for learning (albeit without the advantage of using online technology), but his point was that multiple ongoing dialog interactions were needed to overcome cultural differences or personal speech inflections. He found a learning interaction “involves at least two voices: the voice of the cultural tool [. . .] and the voice of the agent producing utterances in a unique speech situation” (Wertsch, p. 99). By his implication, student-to-student and student-to-professor utterances during an online course would contribute to knowledge sharing and learning even if a student was rephrasing earlier dialogue.

A common approach to measure the online learning interaction impact on performance is through the use of psychometric tests. Rovai, Wighting, Baker, and Grooms (2009) developed a survey instrument to assess “perceived cognitive, affective, and psychomotor learning” (p. 11), which was based on Bloom’s popularized Taxonomy for Learning cognitive, affective, and psychomotor domains (Krathwohl, Bloom, & Masia, 1964). In that study, interaction significance was quantified using student self-report measures, but performance was not assessed. However, a critical deficiency noted previously in the literature was the lack of objective indicators for e-learning effectiveness beyond student self-reports of satisfaction and perceptions, or merely evaluating academic outcomes (Tallent-Runnels et al., 2006). Thus, based on this advice, it would be necessary to capture objective metrics of knowledge sharing and conversational dialog, along with actual performance related to those factors. There are several relevant studies mentioned subsequently that provide insight about measuring e-learning interactions.

In a study of online conversation theory, Sherry, Billig, and Tavalin (2000) found that students’ interaction with one another and with their professor improved learning outcomes and satisfaction. Wise et al. (2009) found that learning could be improved by applying conversation theory and knowledge sharing principles during online courses. Brewer and Brewer (2010) proposed a theoretical model that integrated the cognitive domain of Bloom’s taxonomy with knowledge sharing, and human resource management interaction typically needed in business organizations. A recent study of conversation theory in online MBA courses concluded that “knowledge articulation [dialogue] will allow [students] to improve most of their remaining DQ [asynchronous discussion forum] deliverables, moderately improve their essay paper, and strongly improve their case study analysis” (Strang, 2010c, p. 105). Unfortunately, none of these studies specifically assessed the effect of student online asynchronous interactions on their academic performance.

Tatsis and Koleza (2008) measured student interaction impact on performance by analyzing conversation utterances (using social-interpersonal factors such as face-saving versus face-threatening expressions). They emphasized that all actual dialog utterances should be captured (and relevant expressions coded) because any dialogue can impact learning. In their model, typical speech interactions between the participants involved illocutionary utterances, which were direct and conventional speech exchanges, as well as perlocutionary utterances, which were indirect and sometimes unpredictable but could still impact learning (Tatsis & Koleza, 2008). Professor-initiated utterances during class are usually directed toward all students, so these are considered more useful for generic e-learning (benefits all students). Thus, although all online utterances should be measured, they do not necessarily have equal impact on e-learning outcome.

Clark and Mayer (2003) did not rely on student self-report opinions in their research either, but instead advocated “an evidence-based practice” (p. 2). They pointed out that online interaction or collaboration does not automatically improve learning results (it has to be properly structured). Furthermore, they claimed too much student interaction (or too much course material dissemination from the professor) could produce cognitive overload, thus negatively impacting e-learning (Clark & Mayer). This presents a justification that it is necessary to capture the specific amount of dialog utterances during e-learning, and in particular, those that contribute toward formal deliverables. In this study, it is posited that the previous can be accomplished by assessing the amount of relevant student–student and student–professor conversation utterances, which take place in asynchronous discussion forum topics designated as formal deliverables (when trivial utterances such as “thank you,” “yes,” and so on are excluded from the analysis).

Research Propositions and Hypothesis

In light of the previous discussion, I posited that the application of knowledge sharing and conversation theories during online courses should improve academic performance. More specifically, higher amounts of online knowledge sharing and conversation between students, as well as student–professor interactions, should result in proportionately higher marks, if all other factors are controlled as much as possible.

Because this study concerns an online course that uses asynchronous discussion forums to record formal student–student and student–professor interactions, and given that it is posited that knowledge sharing and conversation theory improve e-learning, I hypothesized that quantifying learning-oriented utterances would create an indicator that can predict academic performance, as conceptually shown in Figure 1. In this study, the formal asynchronous forums were general discussion, research, case study, and project.

Because it has been asserted that learning takes place at the individual level, and given that professor interactions are normally intended to benefit all students, it is logical to measure only the student utterances, to relate student interactions to their grade. The asynchronous interaction indicator should predict performance, as hypothesized subsequently.

FIGURE 1 Hypothetical model of asynchronous interaction impact on performance.

Hypothesis 1 (H1): Higher amounts of asynchronous knowledge sharing and conversation theory interactions during an online course, measured by student-initiated learning-oriented utterances, would result in higher student grades (individual level of analysis).

METHOD

This was an ongoing project using action research (Zuber-Skerritt, 1993) to improve the effectiveness of online courses. The present study used mixed methods (Creswell, 2003) to transform qualitative data into quantitative indicators and to test the hypothesis.

Subjects and Study Context

The sample consisted of 53 students who completed the same online business course within an accredited MBA degree program at a U.S.-based university (intact convenience group). Two students who withdrew were excluded; all 53 remaining students completed the course and were given a final grade. In terms of demographics, 100% reported being employed full-time and 55% were women.

The entrance requirements for the degree program were: baccalaureate degree with a GPA of at least 2.5 on a 4.0 scale (62.5%), three letters of recommendation indicating the candidate’s ability to pursue graduate study (at least one from a professor or academic advisor if the student was presently studying or completed a degree within the last three years), acceptable English language skills, and submission of GRE scores. Because 4% of the students reported using English as a second language (or they were not U.S. residents or citizens), they had already taken and passed a TOEFL (meeting the minimum 550 threshold). The GRE General Test score means were the following: 660 for verbal reasoning, 670 for quantitative reasoning, and 4 for analytical writing.

These students took the same 12-week course—Applied E-business Management Information Systems (EMIS). There were no repeat students (statistical nonreplacement). This course was offered over three contiguous terms in the same academic program year. No changes were made to any aspect of the course during the sample frame. The course was taught by the same core faculty teaching team: two instructors (full professors) and one teaching assistant. Usually one instructor taught the online courses (this study) and the other taught the residential mode, with both collaborating throughout each term to ensure the courses were equal in content, delivery, and assessment. The online courses were delivered using Blackboard for the asynchronous discussion forum components, the assignment submission, and grades.

Six asynchronous discussion forums were set up, with the first two designated for chatting and course materials, respectively. The last four constituted the formal deliverables: general discussions, research and analysis, case studies, and project report. All deliverables were in writing (text, graphics, or numbers), to be posted into the designated discussion forum, and there were no quizzes or exams. The general discussion amounted to ongoing Socratic and conversational questions posted by the professor (to stimulate discussion related to the learning objectives), which students were required to answer and build on each other’s submissions throughout the course. The research area contained specific topics the students had to research, cite (in APA style), and compare and contrast with each other’s findings. The case study contained two empirical problems that required a best practices recommendation from students, which was achieved through decomposition of the problems and formulation of proposed solutions. The project deliverable was a group effort that required a charter, plan, and completion of a proposed solution for a business problem of their choice (related to the course materials and learning objectives, and approved by the professor).

Statistical Procedures and Measures

Descriptive estimates were first used to allow other researchers to assess the sample characteristics, as well as to ensure factors and variables met the assumptions for the subsequent statistical procedures. The level of confidence was set to 95% for all tests.

The factors of interest were the knowledge sharing and conversation theory interactions that took place in the asynchronous forums designated for the four deliverables. The first three forums (general discussion, research, and case study) contained threaded questions, answers, and follow-up dialog in which contributions were made by all students. The fourth area was structured identically, but was slightly different in that only the students in the same team posted comments in their own area (thus it was demarcated by group). Project teams were expected to carry out threaded discussions to document all aspects of their project. The project report was also counted.

TABLE 1
Example of Coding Asynchronous Knowledge Sharing Conversation Interactions

General discussion forum:
S6 (count = 5): “Actually proposed new system is part of decision-making process—I enumerated four types in my previous discussion on this topic: 1. Keep/customize existing system; 2. Custom-build new system; 3. Buy and implement new system covering all requirements; 4. Find SaaS system to deliver.”

Research analysis forum:
S5 (count = 7): “The article lists the CRM disadvantages: while providing more capabilities and reliability, Sugar Suite loads slower than vTiger CRM and is not so easy to use. Problems may also arise if a user doesn’t lock the installation after finishing it. Contrary to vTiger CRM, some of its add-ons are not free for installing and should be ordered additionally. Another disadvantage of SugarCRM is the very resource-consuming upgrade process. A SugarCRM upgrade can rarely be completed successfully on a shared server because the upgrade times out.”
S12 (count = 1): “pasting link for we can reference this later www.siteground.com/sugarcrm vtiger.htm”

Case study forum:
S8 (count = 5): “I can give some context on this one—the idea is that the current ERP is built to handle enterprise sales with long sales cycles but the business is moving to a model where many smaller sales (potentially automated) will occur that requires a completely transaction-oriented model for reviewing leads and focusing a tight integration between sales and support (since customers will be more self-service)”
S9 (count = 1): “So you would be using the ‘ERP method’ to convince the VP to buy into the system?”

Project forum:
S1 (count = 3): “Let me put it this way—I know that to do this assignment for work I would do it as a PowerPoint—what I’m trying to understand is what constraints we have in delivering this project for the class—from the email, I sense we need ‘references’ and we need it to be written in a formal language, I assume APA.”
S2 (count = 2): “I would also tend towards a ‘shorter’ and critical issues-oriented discussion, with lists and comparisons between options rather than a structured set of sections per se—so I simply would like to ensure that we are meeting expectations for the class because I’m definitely sensing that I cannot approach this project the way I would for a work situation.”

Note. S = student.

The interactions were quantified by counting the learning-oriented utterance phrases made by students and tallying these counts for each student, within each of the four asynchronous deliverable forums. The scope for a learning-oriented phrase was a sentence or fragment that made a comment, question, citation, or reflection about any topic relevant to the course materials, learning objectives, or subject being discussed (social chatter and short trivial replies were excluded). I made the decision to allocate a phrase as a learning-oriented utterance, which was later re-reviewed by a colleague (the other professor teaching the residential mode class). After a collegial debate we arrived at a consensus on whether the phrase was learning-oriented (only 1% of the total phrases were debated in this way and changed from the original coding).
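The tallying step described above can be sketched as a short script. The student IDs, forum labels, and records below are hypothetical, invented only to illustrate the counting; the actual coding in the study was performed manually by the professors.

```python
# Hypothetical sketch: each manually coded learning-oriented utterance becomes
# a (student, forum) record, and records are accumulated per student within
# the four deliverable forums. All identifiers here are invented.
FORUMS = ("general", "research", "case_study", "project")

def tally_utterances(coded_records):
    """Return {student: {forum: count}} from (student, forum) records,
    assuming trivial replies were already excluded during coding."""
    counts = {}
    for student, forum in coded_records:
        if forum not in FORUMS:
            continue  # the chat and course-materials forums were not deliverables
        per_student = counts.setdefault(student, {f: 0 for f in FORUMS})
        per_student[forum] += 1
    return counts

# Mirrors two of the coded segments shown in Table 1.
records = [("S6", "general")] * 5 + [("S5", "research")] * 7 + [("S12", "research")]
tallied = tally_utterances(records)
print(tallied["S6"]["general"])   # 5
print(tallied["S5"]["research"])  # 7
```

The per-student, per-forum totals produced this way are the four interaction factors analyzed in the Results section.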

The project report deliverable was coded slightly differently because some of the utterances were in the form of the written group report posted into the asynchronous project forum (and not strictly interactions per se). This forum contained both individual student interactions (discussions similar to the other three forums) as well as the group-authored project report. Professor messages were excluded from counting, but student questions and responses to the professor were included if they met the previous criteria used for the other three forums (relevant to course and objectives). Because the report was considered an important indicator of performance, the sentences in this report were treated as utterances, in a manner similar to how the other three forums were coded. For the project report, a count was made of all relevant sentences (which were treated as utterance phrases), and that same count of utterances was allocated to each team member, as all students in the group were expected and required to jointly author the report. Table 1 illustrates example interaction coding results from the study data.

RESULTS

First, descriptive estimates were calculated to display the sample characteristics. Then, the hypothesis tests were conducted and evaluated using correlation and regression.

Exploratory Data Analysis

Table 2 lists the important descriptive estimates of the sample: means and standard deviations, along with kurtosis and skewness to indicate distribution normality (one kurtosis value, below −2, indicated an unusually flat peak, so it was not surprising to see a low skewness; neither was any cause for concern with this sample). A key indicator was the overall final grade mean of 75% (SD = 0.10) for all students (N = 53), which was not significantly different from previous academic year performance (for all terms) in this course, t(151) = 1.879, p = .381. From this it may be assumed that the course difficulty level was consistent.

TABLE 2
Descriptive Statistics of Online Course Sample (n = 53)

                   General   Research   Case study   Project   Final grade
M                  176       210        402          123       0.75
SD                 68        71         54           26        0.10
Sample variance    4568.78   4970.77    2956.80      678.55    0.01
Kurtosis           −1.35     −2.04      −0.64        −0.65     −0.92
Skewness           0.09      0.02       −1.07        1.02      0.40
Minimum            87        131        306          97        0.59
Maximum            272       287        453          170       0.96
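The cohort comparison reported for the final grade mean is an independent-samples t test; a minimal pooled-variance version can be sketched as follows. The grade vectors are invented (the raw course data are not published), so the resulting t value is illustrative only.

```python
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """Independent-samples t statistic with pooled variance;
    degrees of freedom are len(sample_a) + len(sample_b) - 2."""
    na, nb = len(sample_a), len(sample_b)
    pooled_var = ((na - 1) * variance(sample_a) +
                  (nb - 1) * variance(sample_b)) / (na + nb - 2)
    se = (pooled_var * (1 / na + 1 / nb)) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / se

# Invented final-grade proportions for two hypothetical cohorts.
current = [0.71, 0.75, 0.79, 0.83, 0.67]
previous = [0.70, 0.74, 0.78, 0.82, 0.66]
t_stat = pooled_t(current, previous)  # small positive t: the means barely differ
```

With the study's two cohorts (53 current students plus the prior-year terms), the reported df of 151 is consistent with this pooled formulation.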

With respect to the online asynchronous utterances, the case study forum had the most (M = 402, SD = 54), and in fact it had twice as many interactions as the next highest category, research dialog (M = 210, SD = 71). Even the student with the minimum interactions in the case study discussion was higher than the maximum of all other students in all other online asynchronous forums. This immediately suggests there may be an effect of case study interactions on the academic outcome (simply due to the magnitude and lower relative variance)—as the hypothesis was that more interactions would result in higher grades, this is one factor to analyze carefully. General discussion was slightly lower (M = 176, SD = 68) and project dialog was lower by a similar amount (M = 123, SD = 26). Based on experience with this course over the last few years, there is a normative trend of more interaction with the case study and project deliverables, followed by research dialog, but usually the general discussion is the forum with the lowest student interactions.

Preliminary Hypothesis Testing Analysis

The Pearson product moment correlations of all factors and the dependent variable (final grade performance) are listed in Table 3 (any coefficient beyond ±0.3 was generally considered significant). Obviously there was significant correlation between certain discussion forums (research and case study was 0.58, whereas research and general was 0.51 and research and projects was 0.68). This does not necessarily mean particular students were talkative in the asynchronous forums, but instead this is likely a precursor of an underlying learning style dimension of the students, whereby high levels of interaction in one deliverable forum would be expected from the same students in other forums. The first reflection in such a situation might be to remove one of these from the model, but on the other hand, each forum served a different learning purpose.

TABLE 3
Correlations of Interaction Utterances and Performance in Sample

                        Asynchronous interactions (phrase utterances)
Pearson correlation     General   Research   Case study   Project
Research discussion     0.514
Case study discussion   −0.250    0.583
Project discussion      −0.029    0.680      0.391
Final grade             −0.326    0.054      0.397        0.179

The correlations with the final grade were of most interest for testing the hypothesis that high interactions would relate to high performance (at the individual student level of analysis). Case study interaction was positively correlated with grade (0.40), which was earlier suspected due to the magnitude of the utterances as compared with the other asynchronous forums. This suggests that more online knowledge sharing and conversation among students is moderately and positively related to their final grade.
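The coefficients in Table 3 are standard Pearson product moment correlations, which can be computed in a few lines. The utterance counts and grades below are invented to exercise the function; they are not taken from the study data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented case-study utterance counts and final grades for five students.
case_counts = [306, 350, 402, 430, 453]
grades = [0.62, 0.70, 0.75, 0.82, 0.90]
r = pearson_r(case_counts, grades)  # positive, matching the direction of Table 3's 0.397
```

Applying this to each forum's per-student counts against the grade vector would reproduce the final-grade row of Table 3.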

The surprising result was the moderate negative correlation between general discussion forum interactions and final grade (−0.33). Drawing on prior experience in this course, the deduction is that because the general forum contained generic discussion (including questions), those students who read and understood the theories well would not likely have had as many inquiries or dialog (in this topic) as compared with those students who were having difficulty. Furthermore, general discussion was a topic that attracted panic questions when students had skipped through their materials quickly (superficial learning) and then needed help with several related subjects rather than with a specific deliverable (thus the choice of the general forum). Lower interactions in the general discussion related to students having a strong understanding of all materials.

The lower correlation of project discussion with grade (0.18) can also be logically explained from teacher experience. Projects were team deliverables; therefore, more time was spent upfront creating a plan and charter that clearly outlined the roles and responsibilities of each member. It is assumed that when team member duties were clearly laid out (taking advantage of the strengths and weaknesses of each member), the actual tasks required less discussion later on to create the report. In this line of thinking, when teams did not have a solid plan, they required more ongoing discussion, which is less efficient (and this also competed for time with other concurrent work), and therefore more interaction in the project discussion forum meant a lower performing team unit and a corresponding lower final grade. Nonetheless, a certain level of interaction was expected, which positively correlated with the final grade. Interaction in the research discussion did not significantly correlate with final grade, but as pointed out previously, research instead correlated with general and case study interactions (separately, not simultaneously). This is reasonable, as more peer dialog in either topic (case study or general discussion) could trigger the need to undertake and discuss additional research.

TABLE 4
Stepwise Regression Models of Interactions on Performance

Number   R²      Adj. R²   ΔAdj. R²   C-p     Factors entered
1        0.107   0.089                389.9   S
1        0.158   0.141     0.052     364.8   S
3        0.215   0.167     0.026     340.8   S S S
2        0.205   0.174     0.007     343.3   S S
2        0.213   0.181     0.007     339.7   S S
3        0.245   0.198     0.017     326.1   S S S
4        0.902   0.894     0.696     5.0     S S S S

Note. Values sorted by adjusted R². “S” indicates a factor (general, research, case study, or project) entered into the stepwise regression model.

Hypothesis Testing Results

Given the correlations between the interaction factors (particularly research with case study as well as with general discussion), along with the low correlation of certain factors with grade, the next step was to confirm which of these factors should be included in the model to test their effect on grade. To accomplish this, stepwise multiple regression was utilized, by first entering one factor into the model, and then another, trying all combinations, while recording their incremental ability to capture the variance on performance.

Table 4 lists the stepwise regression model combinations (in each row), with the better models toward the bottom. Each row shows the number of variables entered into the model, the R², adjusted R², and delta (incremental amount of adjusted R² captured from the previous model, with negatives meaning less), followed by the C-p statistic. The right columns of Table 4 indicate which factors were selected in the stepwise regression (each row in Table 4 constitutes a different model). The first model used one factor (general discussion interactions) and captured 8.9% of the variance impact on final grade. The third model used three interaction factors (general discussion, case study, and project report), capturing 16.7% of the variance effect on grade (which was 0.026 more than the second model, which used only case study as the factor).

A best practices method for selecting significant predictors in a multiple regression model is to find the row where the C-p statistic is close to the number of model terms (the k factors plus the intercept; Levine, Stephan, Krehbiel, & Berenson, 2005). Adjusted R² is an important estimate because it reflects the number of variables needed in the model. Applying this technique, the best combination of factors was the last row, with all four included in the model (capturing 89.4% of variance).

Another method of identifying the best factors in a regression model is to try all combinations of factors (using the best subsets procedure in the statistical software), sort the resulting matrix by adjusted R² (ascending sequence), then use the C-p as a cutoff for any ties, following the logic of Levine et al. (2005), but instead compare the delta (relative change) in adjusted R² from model to model (Strang, 2009b). Using this technique, it is clear that the last row in Table 4 captures a very large amount of incremental factor variance (delta adjusted R²) effect on grade (.69) when all four factors were included, which corroborates the previous technique. The reason this cautionary step was taken (assessing delta adjusted R²) was the high correlation between general and the other factors and the low correlation of certain factors with grade. For example, when general was excluded from the model (second to last row in Table 4), the adjusted R² was 19.8%, which was 0.017 more relative variance captured using only three factors; this would be a more parsimonious statistical model (Keppel & Wickens, 2004). However, it is obvious that the four-factor interaction model is the best in terms of capturing combined (89%) and incremental (70%) variance on grade. Thus, the conclusion is that all four factors (general, research, case study, and project) should be in the model, as together they capture an adjusted R² of 89.4% of variance on final grade. This result (adjusted R² of 89.4%) is very good; it is considered a large effect (Cohen, 1992).
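As a check on the reported figures, adjusted R² can be recomputed from R², the sample size, and the number of predictors. This sketch assumes the standard formula adj R² = 1 − (1 − R²)(n − 1)/(n − k − 1); the function name is ours.

```python
def adjusted_r2(r2, n, k):
    """Adjusted R-squared for a model with k predictors fit on n observations."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Reported omnibus values for the four-factor model: R^2 = .902, n = 53, k = 4.
adj = adjusted_r2(0.902, n=53, k=4)
# round(adj, 3) reproduces the reported adjusted R^2 of 0.894.
```

The penalty term (n − 1)/(n − k − 1) is why adding a weak predictor can lower adjusted R² even while raw R² rises.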

Now with the four significant interaction factors selected from the stepwise multiple regression, the next step was to calculate detailed estimates of the effects on academic performance (final grade). Least squares multiple regression was used to test this complete regression model for significant effect size (omnibus model test) and then to estimate the coefficients, t tests, p values, and other statistical benchmarks. The results are presented in Table 5 (detailed estimates first, then an omnibus test).

The first critical result in Table 5 is the omnibus test, which is the analysis of variance (ANOVA) of the four factors in the regression model to determine the significance of the grade effect. In this situation the estimate was good (R² = .902; adj. R² = .894), F(4, 48) = 110.81, p = .000. This made it permissible to interpret the detailed coefficient estimates. The omnibus test served as a method triangulation for the stepwise regression, as both results were equal.

TABLE 5
Regression of Asynchronous Forum Interactions on Course Performance

Predictor     Coefficient     SD        t       p     VIF   Hypothesis
Constant        −9.2900      0.540   −17.30   .000
Case study       0.0203      0.001    18.86   .000    162   Supported
General          0.0184      0.001    17.98   .000    228   Supported
Research        −0.0269      0.002    18.38   .000    508   Supported
Projects         0.0351      0.001    18.38   .000    118   Supported

Note. Model (omnibus) values: F(4, 48) = 110.81, p = .00; R² = .902; adj. R² = .894. Durbin-Watson = 1.71.

The interaction factors are listed in Table 5 as predictors (the constant is the intercept), followed by the coefficients, standard deviations, t-test estimates, and p values. The key estimates to examine are the t tests and p values. All t tests were considered significant following the rule of |t| ≥ 2 (Jöreskog, Sörbom, & Wallentin, 2006), and clearly all p values were zero (a good result, which supported the hypothesis). From this, the hypothesis can be accepted: each of the four factors (used together in the model) is statistically significant, capturing a large variance effect on final grade. On the other hand, there are a few caveats to discuss, as identified by the other indicators.

Most importantly, the variance inflation factor (VIF) was calculated to detect undesirable statistical interaction among the independent factors, by estimating how much coefficient variance was driven by multicollinearity. The most desirable VIF is 1, which means a predictor is orthogonal to the others in the matrix (no significant correlation), whereas a VIF higher than 10 indicates multicollinearity (Tamhane & Dunlop, 2000). Some statisticians recommend removing factors with a VIF greater than 5 (Snee, 1973); others suggest removing any greater than 3 (Carlson, Thorne, & Krehbiel, 2004). All factors had very large VIFs, ranging from 111 to 508. Based on the statistical literature, the model may be unreliable, as the independent factors are likely confounding one another when predicting grade.
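The VIF itself is a simple transformation of the R² obtained by regressing one predictor on the remaining predictors. A minimal sketch (the R² inputs below are illustrative values, not the study's):

```python
def vif(r2_j):
    """Variance inflation factor for predictor j, given the R^2 from
    regressing predictor j on the remaining predictors."""
    return 1.0 / (1.0 - r2_j)

# A predictor sharing 90% of its variance with the others sits right at the
# common VIF = 10 cutoff; sharing 99.8% yields a VIF near 500, the order of
# magnitude reported in Table 5 for the research factor.
print(vif(0.9), vif(0.998))
```

Seen this way, VIFs in the hundreds imply that each forum's utterance count is almost entirely predictable from the other three, which is the confounding described here.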

However, it is argued here (in educational psychology) that these asynchronous discussions do overlap, in the sense that knowledge-sharing conversation utterances in one particular forum may well improve student e-learning across several forums. For example, the comments and findings from one topic could help with the learning and report writing in another forum (e.g., research likely helps all other deliverables). In fact, this was statistically implied when the high Pearson correlation was discovered between certain factors (research with case study, project, and general discussions). Furthermore, interaction in one forum could reduce the need to interact in another forum, as was likely the situation with projects, whereby the lower interactions (and the logical sequencing of the project toward the end of the course) could signify that the learning curve had reached its zenith at that point, and thus many questions had already been answered and much of the research was available to complete the project. Thus, in this study, the decision was made to retain all four factors despite the high VIF estimates.

Because the interaction factors (utterances in the four asynchronous discussion forums) were posted periodically over time, autoregression potential was checked on the dependent variable (final grade) using the Durbin-Watson (DW) d estimate. The acceptable benchmark for the DW d is a value close to 2 (Levine et al., 2005). In this sample the DW d estimate of 1.71 was acceptable, meaning no autoregression was detected.
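For reference, the Durbin-Watson d is computed from the time-ordered regression residuals; values near 2 indicate no first-order autocorrelation. A minimal sketch (the function name is ours):

```python
def durbin_watson(residuals):
    """Durbin-Watson d statistic over time-ordered regression residuals."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Smoothly trending residuals (positive autocorrelation) push d toward 0;
# strictly alternating residuals (negative autocorrelation) push d toward 4.
```

For example, a slowly drifting residual sequence such as [1, 2, 3, 4] yields d = 0.1, far from the benchmark of 2, whereas the 1.71 reported here is comfortably close.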

Finally, given the decision to accept this statistically significant four-factor model to estimate the effect of asynchronous interaction in the forums on final grade, the details of the regression can be evaluated. The regression equation for the model was the following: final grade = −9.29 + 0.0203 × Case study + 0.0184 × General − 0.0269 × Research + 0.0351 × Projects.

The coefficients in the previous equation can now be mathematically interpreted. A positive coefficient suggests that higher values of the factor result in higher values of the dependent variable (in this case the final grade). In similar fashion, negative coefficients in a regression model decrease the dependent variable. The only negative coefficient multiplier in the model is for research, which signifies that lower amounts of utterances in the research forum, together with higher amounts of utterances in the other three forums, generally result in higher grades.

In this statistically significant regression model, the coefficients can be used as predictors to forecast grade for given levels of interaction in each of the asynchronous forums (although this induction applies only to this sample frame). The range of each factor shown in Table 2 should be maintained for the multiple regression formula to work correctly in this model; thus, the minimum for research forum interactions is 131 and the maximum is 287. In keeping with this logic, if it were desired to model a student with an average amount of knowledge sharing and conversation interaction (by entering the mean utterances observed for each factor listed in Table 2), the predicted grade is the following: −9.29 + (0.0203 × 402) + (0.0184 × 176) − (0.0269 × 210) + (0.0351 × 123) = 0.7773. This corroborates the model, since it is equal to the grade mean (78%) reported in Table 2.
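The substitution above can be verified directly. The helper name predict_grade is ours, but the coefficients are those reported in Table 5 and the mean utterance counts are those cited from Table 2.

```python
def predict_grade(case, general, research, projects):
    """Predicted final grade (as a proportion) from the four-factor model."""
    return (-9.29 + 0.0203 * case + 0.0184 * general
            - 0.0269 * research + 0.0351 * projects)

# Mean utterance counts: case 402, general 176, research 210, projects 123.
grade = predict_grade(402, 176, 210, 123)
# grade evaluates to about 0.7773, i.e., the 78% mean grade.
```

Note that only the minus sign on the research term makes the arithmetic come out to 0.7773, consistent with the negative research coefficient in Table 5.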

On the other hand, the model is not totally reliable for prediction, due to the excessive intercorrelation detected through different (triangulated) statistical methods. First, undesired factor intercorrelation was foreshadowed by the moderate positive Pearson correlation found between research and the other three factors (case study, projects, and general discussion interaction). Further to this, general discussion had a negative correlation with grade, whereas the project and research factors had only a small positive correlation with grade, leaving case study as the only factor with a moderate, positive correlation to grade (this is important to note, as a better correlation model would have presented at least weak positive correlation between each factor and the dependent variable, grade). Another warning sign of potential confounding factors was the huge incremental factor variance increase in the stepwise multiple regression (the delta adjusted R² in the last row of Table 4 was .69); such a large increase when only one more factor was added (to form the final four-factor solution) indicates that hidden interaction was likely captured in the model. Finally, the excessive VIFs noted during the least squares multiple regression (Table 5) confirmed that these factors were confounding one another.

In light of the previous results, an experiment was conducted to test the stability of this model. Monte Carlo simulation was utilized, entering random combinations of utterances for each factor, but within the thresholds of the minimum and maximum values for each factor from the descriptive statistics of Table 2. This simulation produced several instances of invalid outcomes (some simulated grades were negative or higher than 100%). Therefore, this model should still be considered conceptual: more research is needed.
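A Monte Carlo check along those lines can be sketched as follows. The research range (131–287) is quoted in the text, but the other per-factor ranges, and the helper predict_grade, are illustrative assumptions standing in for the Table 2 values.

```python
import random

def predict_grade(case, general, research, projects):
    # Coefficients as reported in Table 5.
    return (-9.29 + 0.0203 * case + 0.0184 * general
            - 0.0269 * research + 0.0351 * projects)

# Assumed per-factor (min, max) utterance ranges; only research's range is
# quoted in the text, the rest are placeholders for the Table 2 values.
RANGES = {"case": (250, 550), "general": (80, 280),
          "research": (131, 287), "projects": (60, 200)}

random.seed(1)
out_of_range = 0
for _ in range(10_000):
    draw = {k: random.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}
    g = predict_grade(draw["case"], draw["general"],
                      draw["research"], draw["projects"])
    if not 0.0 <= g <= 1.0:
        out_of_range += 1
# out_of_range ends up well above zero: many random in-range utterance
# mixes still yield impossible grades (below 0% or above 100%), mirroring
# the instability the study reports.
```

Even with invented ranges, the exercise shows why such a heavily collinear model breaks down away from the observed combinations of inputs.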

DISCUSSION

Conclusions

Theoretically and empirically, this study accomplished its objective: a statistically significant model was developed, although high factor correlations constrain the implications.

Overall, the study indicated that applied knowledge-sharing and conversation theory, as represented by higher levels of student asynchronous discussion forum interactions, improved academic performance in this online business course. Certainly it is possible that other explanations could account for the higher interactions and/or improved grade (e.g., better technology or higher student motivation), and thus more research is necessary.

At a more detailed level, in this study I hypothesized that if students applied higher amounts of knowledge-sharing and conversation theory in asynchronous discussion forums, e-learning would be increased, resulting in a higher final grade.

The sample included three terms of the same online business course (N = 53), using the same syllabus, materials, context, professor, and technology. The interaction counts were calculated by coding learning-oriented utterances posted by students in each of the four official asynchronous discussion forums. Least squares regression produced a statistically significant four-factor model that supported the hypothesis (adjusted R² = .894, p = .000), F(4, 48) = 110.81, R² = .902. In this model, more student interactions in the project, case study, research, and general discussion forums led to a higher final grade.

Statistically, the interaction factors as a whole had a significant effect in capturing the variance on final grade (more interactions produced higher grades). However, the exact best-practices amount of interaction for a particular asynchronous discussion forum cannot be accurately predicted, due to the high multicollinearity detected between the four factors. At best it can be posited that higher interactions in the case study forum, moderate interactions in the discussion and project forums, and lower interactions in the research forum (all in a relative sense) increase final grade, and vice versa. It is clear from the model that very low interaction in one or all four asynchronous discussion forums would predict a lower grade, yet a large number of interactions does not necessarily translate into a passing grade, either. This is likely due to cognitive overload and the law of diminishing returns: students must rationalize the knowledge sharing and conversation interaction effort they expend toward e-learning.

As implied previously, there seemed to be different best-practices levels of student knowledge sharing and conversation interactions for particular asynchronous discussion forums. Case study had the highest amount of interactions and the largest positive correlation with grade (+0.40). General discussion and research interactions did not correlate strongly with grade, but (along with case study) they contributed to capturing variance on grade in the multiple regression model. The lower amount of project discussion forum interactions was rationalized to be due to the group mode of this fourth deliverable (the project report), whereby some team interaction was needed to form the charter and plan, but thereafter more offline effort was put into writing the coauthored report (and not captured within the forum). The procedure used to codify the project forum interactions treated the report sentences as utterance phrases, such that each student in the team received equal credit for all report interactions, in addition to their individual postings in the project discussion forum. This accounted for a lower utterance count in the project forum, and thus a moderate impact on grade in terms of interaction. Project interactions had the highest coefficient in the multiple regression model (+0.0351), suggesting that moderate but high-quality group interactions in the project asynchronous discussion (accompanied by an effective, succinct coauthored project report) increased final grade. The other three forums (case study, research, and general discussion) differed from the project forum in the sense that all interactions in the former were individual learning-oriented knowledge-sharing conversations (thus, most utterances were better). Not surprisingly, higher interaction counts in the case study, research, and general discussion asynchronous forums tended to result in higher grades (although research was a less reliable indicator due to its high correlation with the other factors). Nonetheless, all four factors were statistically significant predictors in the model.

Limitations and Future Research

First, the sample size was small (53 students), and it was an intact convenience group (not purely random). Although descriptive statistics of prior ability were provided (degree prerequisites and pretest means), the statistical limitations of a small sample cannot be overcome. Second, the coding of the knowledge-sharing and conversation theory interactions (from the online asynchronous discussion forums) was a subjective procedure, which created the four factors that were pivotal to the regression model.

To overcome these limitations, this study should be replicated with larger samples, and multiple researchers should perform the coding of the utterances into interaction counts. Then item response theory (or a similar technique) should be used to verify that the coding was consistent between researchers (and this could be repeated with this study to retrospectively validate this exercise). The sample should be extended to more online business courses in this program, and gradually to other programs and universities.

Finally, the issue of excessive factor intercorrelation could be investigated using nonlinear distributions (scaling interactions as exponential, quadratic, and so on), or by applying other nonlinear analytical techniques. Additionally, grade could be decomposed into scores for each specific asynchronous discussion forum, so as to partition the interaction variance (in each forum) directly to the part of the final grade most attributable to the students' knowledge sharing and conversation utterances. In this way, some factors (discussion forum interactions) may prove to be insignificant whereas others could be more predictive; this would also isolate the highly confounding VIFs. The earlier suggestion of surveying the participants about their perceptions is another viable technique, which could ascertain students' intentions for knowledge sharing and conversation theory toward learning-oriented interactions in the asynchronous forums.

REFERENCES

Baker, A. C., Jensen, P. J., & Kolb, D. A. (2002). Conversational learning. Westport, CT: Quorum Books.
Barrie, S. C., & Ginns, P. (2007). The linking of institutional performance indicators to improvements in teaching in classrooms. Quality in Higher Education, 13, 275–286.
Bata-Jones, B., & Avery, M. D. (2004). Teaching pharmacology to graduate nursing students: Evaluation and comparison of web-based and face-to-face methods. Journal of Nursing Education, 43, 185–189.
Bernard, R. M., Abrami, P. C., Lou, Y., & Borokhovski, E. (2004). A methodological morass? How we can improve quantitative research in distance education. Distance Education, 25, 175–198.
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., & Wozney, L. (2004). How does distance education compare to classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74, 379–439.
Brewer, P. D., & Brewer, K. L. (2010). Knowledge management, human resource management, and higher education: A theoretical model. Journal of Education for Business, 85, 330–335.
Brookfield, S. D. (1993). Self-directed learning, political clarity, and the critical practice of adult education. Adult Education Quarterly, 43, 227–242.
Carlson, W. L., Thorne, B., & Krehbiel, T. C. (2004). Statistical business and economics (5th ed.). Upper Saddle River, NJ: Prentice-Hall.
Chien, M.-H. (2004). The relationship between self-directed learning readiness and organizational effectiveness. Journal of American Academy of Business, 4, 285–288.
Clark, R. C., & Mayer, R. E. (2003). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning. San Francisco, CA: Wiley.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
Costin, H., & Hamilton, D. (2009). Quality in business education as measured by accreditation and ranking systems. International Journal of Management in Education, 3, 249–269.
Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches (2nd ed.). New York, NY: Sage.
Crow, J., & Smith, L. (2005). Co-teaching in higher education: Reflective conversation on shared experience as continued professional development for lecturers and health and social care students. Reflective Practice, 6, 491–506.
Czubaj, C. A. (2000). Cyberspace curricula: A global perspective. Journal of Instructional Psychology, 27(1), 9–14.
Duncan, R. M. (1995). Piaget and Vygotsky revisited: Dialogue or assimilation? Developmental Review, 15, 458–472.
Ellinger, A. D. (2004). The concept of self-directed learning and its implications for human resource development. Advances in Developing Human Resources, 6, 158–177.
Glenn, L., Jones, C., & Hoyt, J. (2003). The effect of interaction levels on student performance: A comparative analysis of web mediated versus traditional delivery. Journal of Interactive Learning Research, 14, 285–299.
Grandzol, J. R. (2004). Teaching MBA statistics online: A pedagogically sound process approach. Journal of Education for Business, 79, 237–246.
Hiemstra, R., & Brockett, R. G. (1994). Overcoming resistance to self-direction in adult learning. New Directions for Adult and Continuing Education, 64, 32–47.
Illeris, K. (2003). Towards a contemporary and comprehensive theory of learning. International Journal of Lifelong Education, 22, 396–406.
Johnson, S. D., & Aragon, S. R. (2003). An instructional strategy framework for on-line learning environments. New Directions for Adult and Continuing Education, 10, 31–44.
Joint, N. (2003). Information literacy evaluation: Moving towards virtual learning environments. The Electronic Library, 21, 322–324.
Jöreskog, K. G., Sörbom, D., & Wallentin, F. Y. (2006). Latent variable scores and observational residuals. Lincolnwood, IL: Scientific Software International.
Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher's handbook (4th ed.). Upper Saddle River, NJ: Pearson Prentice-Hall.
Kessels, J. W. M., & Poell, R. F. (2004). Andragogy and social capital theory: The implications for human resource development. Advances in Developing Human Resources, 6, 146–157.
Kienle, A. (2009). Intertwining synchronous and asynchronous communication to support collaborative learning—system design and evaluation. Education and Information Technologies, 14(1), 55–79.
Kolb, A. Y., & Kolb, D. A. (2005). Learning styles and learning spaces: Enhancing experiential learning in higher education. Academy of Management Learning & Education, 4, 193–212.
Konidari, V., & Abernot, Y. (2007). Creation of a knowledge city in educational institutions: A model for promoting teachers' collective capacity building. International Journal of Learning and Change, 2(1), 51–69.
Kosnik, C. (2001). The effects of an inquiry-oriented teacher education program on a faculty member: Some critical incidents and my journey. Reflective Practice, 2(1), 65–80.
Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of educational objectives: The classification of educational goals. Handbook II: Affective domain. New York, NY: David McKay.
Levine, D. M., Stephan, D., Krehbiel, T. C., & Berenson, M. L. (2005). Statistics for managers using Microsoft Excel (4th ed.). Upper Saddle River, NJ: Pearson Prentice-Hall.
McLaren, C. H. (2004). A comparison of student persistence and performance in online and classroom business statistics experiences. Decision Sciences Journal of Innovative Education, 2(1), 1–10.
Mooij, T. (2009). Education and ICT-based self-regulation in learning: Theory, design and implementation. Education and Information Technologies, 14(1), 3–27.
Nonaka, I., & Konno, N. (1998). The concept of Ba: Building a foundation for knowledge creation. California Management Review, 40, 40–54.
Nonaka, I., & Teece, D. (Eds.). (2001). Managing industrial knowledge—creation, transfer and utilization. London, England: Sage.
Nonaka, I., Toyama, R., & Konno, N. (2001). SECI, Ba, and leadership: A unified model of dynamic knowledge creation. In I. Nonaka & D. Teece (Eds.), Managing industrial knowledge creation, transfer and utilization (pp. 13–43). Thousand Oaks, CA: Sage.
Olson, T. M., & Wisher, R. A. (2002). The effectiveness of web-based instruction: An initial inquiry. International Review of Research in Open and Distance Learning, 3, 103–112.
Pask, G. (1975). Conversation, cognition, and learning. New York, NY: Elsevier.
Pask, G., Kallikourdis, D., & Scott, B. C. E. (1975). The representation of knowables. International Journal of Man-Machine Studies, 17, 15–134.
Ponton, M. K., Derrick, M. G., & Carr, P. B. (2005). The relationship between resourcefulness and persistence in adult autonomous learning. Adult Education Quarterly, 55, 116–128.
Rovai, A. P., Wighting, M. J., Baker, J. D., & Grooms, L. D. (2009). Development of an instrument to measure perceived cognitive, affective, and psychomotor learning in traditional and virtual classroom higher education settings. Internet and Higher Education, 12, 7–13.
Russell, T. L. (2002). The no significant difference phenomenon as reported in 355 research reports, summaries and papers: A comparative research annotated bibliography on technology for distance education [updated from 1999]. Raleigh, NC: North Carolina State University.
Schunk, D. H. (2004). Learning theories: An educational perspective. Upper Saddle River, NJ: Pearson Prentice-Hall.
Sherry, L., Billig, S. H., & Tavalin, F. (2000). Good online conversation: Building on research to inform practice. Journal of Interactive Learning Research, 11(1), 85–127.
Snee, R. D. (1973). Some aspects of nonorthogonal data analysis, part 1. Developing prediction equations. Journal of Quality Technology, 5, 67–79.
Stacey, E., & Rice, M. (2002). Evaluating an online learning environment. Australian Journal of Educational Technology, 18, 323–340.
Strang, K. D. (2008). Quantitative online student profiling to forecast academic outcome from learning styles using dendrogram decision models. Multicultural Education & Technology Journal, 2, 215–244.
Strang, K. D. (2009a). How multicultural learning approach impacts grade for international university students in a business course. Asian English Foreign Language Journal Quarterly, 11, 271–292.
Strang, K. D. (2009b). Using recursive regression to explore nonlinear relationships and interactions: A tutorial applied to a multicultural education study. Practical Assessment, Research & Evaluation, 14(3), 1–13.
Strang, K. D. (2010a). Balanced assessment of flexible e-learning versus face-to-face campus delivery at an Australian university. In S. Mukerji & P. Tripathi (Eds.), Cases on technological adaptability and transnational learning: Issues and challenges (Chapter 3). London, England: Information Sciences International.
Strang, K. D. (2010b). Effectively teach professionals online: Explaining and testing educational psychology theories (2nd ed.). Saarbrücken, Germany: VDM Publishing.
Strang, K. D. (2010c). Knowledge articulation dialog increases online university science course outcomes. Journal of Education and Information Technology, 15, 92–107.
Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, S., Ahern, T. C., & Shaw, S. M. (2006). Teaching courses online: A review of the research. Review of Educational Research, 76, 93–135.
Tamhane, A. C., & Dunlop, D. D. (2000). Statistics and data analysis from elementary to intermediate. Upper Saddle River, NJ: Prentice-Hall.
Tan, B. T. (2003). Does talking with peers help learning? The role of expertise and talk in convergent group discussion tasks. Journal of English for Academic Purposes, 2, 53–66.
Tatsis, K., & Koleza, E. (2006). The effect of students' roles on the establishment of shared knowledge during collaborative problem solving: A case study from the field of mathematics. Social Psychology of Education, 9, 443–460.
Tatsis, K., & Koleza, E. (2008). Social and sociomathematical norms in collaborative problem solving. European Journal of Teacher Education, 31(1), 89–100.
Tsai, P.-J., Hwang, G.-J., Tseng, J. C. R., & Hwang, G.-H. (2008). Computer-assisted approach to conducting cooperative learning process. International Journal of Distance Education Technologies, 6(1), 49–66.
Webb, H. W., Gill, G., & Poe, G. (2005). Teaching with the case method online: Pure versus hybrid approaches. Decision Sciences Journal of Innovative Education, 3, 223–250.
Wertsch, J. V. (1998). Mind as action. New York, NY: Oxford University Press.
Wise, A. F., Padmanabhana, P., & Duffy, T. M. (2009). Connecting online learners with diverse local practices: The design of effective common reference points for conversation. Distance Education Journal, 30, 317–338.
Zuber-Skerritt, O. (1993). Improving learning and teaching through action learning and action research. Higher Education Research and Development, 12(1), 45–58.
