Journal of Education for Business
ISSN: 0883-2323 (Print) 1940-3356 (Online) Journal homepage: http://www.tandfonline.com/loi/vjeb20
Comparison of Student Performance in Paper-Based Versus Computer-Based Testing
Bridget Anakwe
To cite this article: Bridget Anakwe (2008) Comparison of Student Performance in Paper-Based Versus Computer-Paper-Based Testing, Journal of Education for Business, 84:1, 13-17, DOI: 10.3200/JOEB.84.1.13-17
To link to this article: http://dx.doi.org/10.3200/JOEB.84.1.13-17
Published online: 07 Aug 2010.
Comparison of Student Performance in Paper-Based Versus Computer-Based Testing

BRIDGET ANAKWE
DELAWARE STATE UNIVERSITY, DOVER, DELAWARE

ABSTRACT. The author investigated the impact of assessment methods on student performance on accounting tests. Specifically, the author used analysis of variance to determine whether the use of computer-based tests instead of paper-based tests affects students' traditional test scores in accounting examinations. The author included 2 independent variables, student gender and student class, as covariates. The findings indicate that there was no significant difference in the students' performance between the 2 methods of assessment. The findings also revealed that neither student gender nor class was correlated with the test scores in either form of testing.

Keywords: assessments, in-class, online, tests

Copyright © 2008 Heldref Publications

Web-based instruction in higher education has grown tremendously. A major use of Web-based instruction is to enhance traditional in-class courses (Web-enhanced). An online course is taught entirely over the Internet, and there is usually little or no face-to-face contact with the instructor or classmates. However, Web-enhanced courses are traditional classroom courses whose content is augmented by a Web site where the instructor places additional information, assignments, grades, and special links for the students to access at specific times. Many universities are investing significantly in course management software and in training and support capabilities to introduce Web-enhancement to traditional courses. Faculty are embracing these tools as well and are investing significant time and energy into adding Web-based supplements to their traditional courses (Pack, Jackson, Laughner, Thomas, & Wheeler, 2003; Wheeler & Jarboe, 2001).

Four fundamental components are necessary for educators to successfully Web-enhance a course: administration, assessment, content, and community (Schmidt, 2002). The present article focuses on only the assessment component. The assessment component addresses how a student's performance and learning can be assessed via the Internet. One advantage of assessing student learning online is the ability to provide instant feedback to the student. Students like the speed of grading and feedback that online learning provides (Ricketts & Sibley, 2002). Because questions are submitted and graded online, the instructor does not have to collect, correct, or even record grades. This situation reduces the need for remediation during valuable class time and helps teachers and students to spend more time on new or more advanced subject matter (Wingard, 2004). Online testing also makes it easy to provide repeated testing opportunities for practice purposes. Multiple-choice, true or false, and matching items can be easily administered through the Internet.

In the present research, I investigated the impact of assessment methods on student performance on examinations. Specifically, analysis of variance was used to determine whether the use of computer-based tests versus paper-based tests affects students' performance on examinations in traditional accounting courses. I included two independent variables, gender and accounting course, as covariates.

Prior Research

Prior researchers have primarily compared the performance of students in online courses with the performance of students in traditional courses (traditional students) and have found mixed results.
Most of these researchers have found no differences in learning outcomes. Schulman and Sims (1999) concluded that the learning of online students is equal to the learning of in-class students. In their study, Schulman and Sims compared the performance of students who were enrolled in five different undergraduate online courses with that of students who were enrolled in traditional in-class courses that were taught by the same instructors. Dellana, Collins, and West (2000) also determined that the performance of online students is not significantly different from the performance of traditional students for final student scores in an undergraduate management science course. Gagne and Shepherd (2001) analyzed the performance of two class sections in an introductory graduate-level accounting course. One section was a traditional, campus-based class taught in the conventional face-to-face lecture mode. The other was taught in a distance education format, and the students had no face-to-face contact with the instructor or each other. The examinations for both sections were administered in paper-based formats: The online students were required to have the examinations proctored. Gagne and Shepherd concluded that the performance of students in the distance course was similar to the performance of students in the on-campus course. The results of Rivera and Rice (2002), Noyes and Garland (2003), and Warren and Holloman (2005) have also revealed that there are no significant differences in the students' outcomes between a face-to-face class and an online class. Bonham (2001) compared the performances of classes that used computer-generated homework with those that used the traditional written format. Bonham found no major differences between the classes that used online homework delivery and the classes that completed the same assignments and were graded by traditional paper methods.
Schulte (1998) discovered that online students scored significantly better than traditional classroom students on exams in a social statistics course. Four question types (matching, objective, definitions, and problems) were used in the study. The results were consistent on both the midterm examination and the final examination for all four question types. Clariana and Wallace (2002) conducted a study on paper-based and computer-based assessments in which they compared the test performances of freshman business undergraduate students who took each type. The students were given 100 identical multiple-choice questions that tested facts and concepts related to general knowledge of computers that was covered in class lectures and course readings. Clariana and Wallace found that students who took the computer-based assessment achieved better results than those who took the paper-based assessment. The results suggest that higher-attaining students benefited most from the computer-based assessment. Clariana and Wallace used only one posttest.
However, Brown and Liedholm (2002) found that traditional students perform significantly better on exams than do online students in a microeconomics course. Grandzol, Eckerson, and Grandzol (2004) used a master of business administration statistics course in their study of the comparability of outcomes in online and traditional environments. The students in the study each took three graded assessments. Grandzol et al. found that the online student performance was lower in some dimensions of learning. Their t tests suggest that there was no significant difference in the mean test scores for the students in the midterm examination, but there were significant differences for the final examination and the research paper.
Some other researchers have examined students' attitudes toward online testing and concluded that students have a positive attitude toward it. Peat (2000) concluded that students liked online assessment because they could take advantage of the accessibility of the online assessment tasks from a variety of locations. The students also reported … and that there was no difference in learning. Wilson-Jones and Caston (2006) examined the attitude of undergraduate education majors toward Web-enhanced and traditional instruction. They looked at two education classes: One used Web-enhanced instruction, whereas the other used face-to-face instruction. Their findings showed that 55% of the students preferred taking tests in class, whereas 45% preferred online tests. However, the study did not compare test performance between the two classes.
The present study differs from prior work in that it addresses the use of online testing methods for traditional in-class courses. I sought to find out whether traditional students taking undergraduate accounting courses perform better when they are given online assessment than when they take the tests using traditional in-class testing methods. The students were tested multiple times by using computer- and paper-based assessment methods during the course of this study.
METHOD
Participants
Participants were 75 students enrolled in any of three specific undergraduate accounting courses at a small Northeastern university in the spring of 2005. The students were mainly sophomores and juniors. The first course, an introductory accounting class (Accounting I), was required of all the students in the School of Management. The course section that I used in this study had 33 registered students. The second course, Accounting for Decision Making, was required for nonaccounting majors in the School of Management and had 27 students registered in this section. The third course, Intermediate Accounting II, was a follow-up to Intermediate Accounting I and was required of all accounting majors. This course had only one section with 15 registered students. The same professor taught all the classes in the present study.

During the semester, the students took four tests, two of which were computer-based, whereas the other two were paper-based, and one final comprehensive in-class examination. Each of the four tests comprised 25 multiple-choice questions selected from various publishers' textbooks. All the students received the same test questions. The questions related to course content that was already covered by the instructor in the classroom, and the students were informed that the tests counted for course grades. The grading policy was also clearly stated in the course syllabi and Web sites. The students were allowed 50 min to take each of the tests. The final examination comprised problem-type and essay questions and was administered in class in the pencil-and-paper format.
The first and third tests were online. In the computer-based assessments, the tests were administered online in a computerized classroom and proctored by the instructor. All the students took the online tests at the same time. The platform used for the computer-based assessment was Blackboard because the university subscribes to this software and provides the necessary infrastructure and support. The multiple-choice questions were randomly presented one at a time, and the students were allowed to backtrack and review or change their answers to previous questions. Students were also allowed to skip questions and return to them later. On completion, all online examinations were graded instantly, and the students were provided immediate feedback.
The second and fourth tests were in-class assessments and were administered in pencil-and-paper format. There were 4–6 questions on each page. Students were able to answer the questions in any order and to review and change their answers prior to submitting their paper. The instructor graded the in-class tests and returned them to the students in the following class session.
RESULTS
Of the 75 students initially registered for the classes, 21 were not included in the final sample, either because they had withdrawn from the classes or missed one or more of the four tests that I had administered during the semester. Thus, the test results of 54 students were used in the analysis. Students' performances were analyzed separately for each of the three classes.
Table 1 shows the average test grades for the three classes for both the in-class tests and the online tests. Accounting I students performed better on the in-class tests than did the students in the other classes, whereas the Accounting for Decision Making students were the best performers on the online tests. Accounting I students also had a higher overall test average than did the students in the Accounting for Decision Making class. The Intermediate Accounting II students had the lowest overall test average. An examination of prior semesters' test results showed that students in Intermediate Accounting II consistently had lower average test scores than did students in Accounting I and Accounting for Decision Making, which are both introductory accounting courses. This finding is consistent with Reed and Holley's (1989) finding that students' test grades tend to drop as the students progress through the sequence of accounting courses. Thus, the overall average of the students who completed introductory accounting was higher than that for Intermediate Accounting II students.
Table 1 also shows the final examination test averages for all three courses. Accounting for Decision Making had the highest average, whereas Intermediate Accounting II had the lowest test score average. However, I did not compare the final in-class examination score averages with either the online or in-class test score averages because they had different question formats. The online and in-class tests comprised multiple-choice questions, whereas the final examination comprised essay and problem-type questions.
Further analysis did not reveal the existence of a test learning curve. Table 2 shows that Accounting I students' average test score was higher on the first online test than on the second, but the test averages for the two in-class tests were the same. Accounting for Decision Making students' performance did not vary between the first and second online tests. However, their test score average was higher on the second in-class test than on the first. Intermediate Accounting II students had a lower test average on the first online test than on the second. The test score average for these students was also higher on the second in-class test than on the first. These observations are consistent with previous semesters' test results of courses taught by the instructor.
TABLE 1. Descriptive Statistics (in Percentages) for Accounting I, Accounting for Decision Making, and Intermediate Accounting II

Variable                  Accounting I(a)   Accounting for Decision Making(b)   Intermediate Accounting II(c)
Online tests average            70                        71                                 61
In-class tests average          74                        69                                 63
Overall tests average           72                        70                                 62
Final exam average              72                        75                                 67

(a) n = 20. (b) n = 22. (c) n = 12.
Some students performed better on the tests given earlier in the semester than on the later tests, whereas other students did better on later tests than on the earlier tests.
Table 2 also shows the test averages by major for the School of Management and non-School of Management students. The non-School of Management students performed better on both the online and in-class tests than did the students who had a School of Management major. This is especially evident in the Accounting for Decision Making course, where the nonmajors performed significantly better than did the majors. All the students who had registered in Intermediate Accounting II were School of Management majors. I could not compare accounting students versus nonaccounting students because none of the courses had both types of students registered. Although Accounting I was required of both accounting and nonaccounting majors, only nonaccounting students were registered in the Accounting I section that I tested in this study. Accounting for Decision Making is for nonaccounting majors, so there were no accounting majors in this course. Only accounting majors were registered in the Intermediate Accounting II course.
The t tests comparing the average test scores of the online and the in-class tests revealed that there was no significant difference in test scores between the two assessment formats for all three courses. Accounting I had a p value of .2158 and t(38) = 1.247. For Accounting for Decision Making, p = .5715, t(40) = 0.573. For Intermediate Accounting II, p = .6463, t(20) = 0.461. Therefore, I concluded that there was no difference in the students' mean test score between the online testing format and the in-class testing format. This finding is consistent with Schulman and Sims (1999), Rivera and Rice (2002), Noyes and Garland (2003), and Warren and Holloman (2005), who have reported no significant differences in the students' outcomes between face-to-face classes and online classes. My finding is also in line with Zhao, Lei, Lai, and Tan (2005), who showed that as a whole, distance education has been as effective as its face-to-face counterpart. Bonham (2001) also found no major differences between the classes that used online homework delivery and the classes that completed the same assignments and were graded by traditional paper methods.
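As a minimal illustration (not the author's original analysis), the following Python sketch shows how an independent two-sample t test of this kind could be computed. The score lists are hypothetical placeholders, not the study's data, and scipy is assumed to be available.

```python
# Sketch: compare online versus in-class test-score averages for one course.
# The scores below are hypothetical placeholders, not the study's data.
from scipy import stats

online_scores = [70, 72, 65, 80, 68, 74, 71, 69, 77, 63]      # per-student online averages (%)
in_class_scores = [74, 75, 70, 78, 72, 76, 73, 68, 79, 69]    # per-student in-class averages (%)

# Independent two-sample t test, one way such a comparison could be run.
t_stat, p_value = stats.ttest_ind(online_scores, in_class_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```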
I performed an analysis of variance by using student gender and classification as covariates. Table 3 shows the results of these analyses. It reveals that student gender and class were not correlated with the test scores. This finding suggests that for the three courses, neither student gender nor student class determined the test score under either testing format.
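A hedged sketch of such an analysis of covariance, with gender and class as covariates, might look like the following. The data frame, column names, and values are hypothetical, and statsmodels is assumed; this is one plausible way to reproduce the structure of Table 3, not the study's exact procedure.

```python
# Sketch: score modeled on testing format with gender and classification as covariates.
# All records below are hypothetical, not the study's data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":       [70, 74, 68, 72, 61, 63, 75, 69, 66, 71],
    "test_format": ["online", "in_class"] * 5,
    "gender":      ["F", "F", "M", "M", "F", "M", "F", "M", "M", "F"],
    "klass":       ["soph", "soph", "junior", "junior", "soph",
                    "junior", "soph", "junior", "soph", "junior"],
})

model = smf.ols("score ~ C(test_format) + C(gender) + C(klass)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F, df, and p values per factor
```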
DISCUSSION
The study reveals that in three different accounting courses, there were no differences in student test scores between the online tests and the in-class tests. The study also revealed no correlation between a student's gender or class and the student's test performance.
TABLE 2. Test Score Averages (in Percentages), by Majors, for Accounting I, Accounting for Decision Making, and Intermediate Accounting II

Measure                            School of Management   Non-School of Management   All students
Accounting I                             n = 10                   n = 10                n = 22
  Online test 1                            70                       74                    72
  Online test 2                            66                       70                    68
  Online tests average                     68                       72                    70
  In-class test 1                          74                       74                    74
  In-class test 2                          74                       74                    74
  In-class tests average                   74                       74                    74
  Total average                            71                       73                    72
Accounting for Decision Making           n = 18                   n = 4                 n = 22
  Online test 1                            69                       78                    71
  Online test 2                            69                       78                    71
  Online tests average                     69                       78                    71
  In-class test 1                          62                       78                    66
  In-class test 2                          70                       83                    72
  In-class tests average                   66                       81                    69
  Total average                            68                       80                    70
Intermediate Accounting II               n = 12                     —                   n = 10
  Online test 1                            58                       —                     58
  Online test 2                            64                       —                     64
  Online tests average                     61                       —                     61
  In-class test 1                          60                       —                     60
  In-class test 2                          66                       —                     66
  In-class tests average                   63                       —                     63
  Total average                            62                       —                     62
TABLE 3. Analysis of Covariance Using Student Gender and Classification for Accounting I, Accounting for Decision Making, and Intermediate Accounting II

                  Accounting I           Accounting for Decision Making   Intermediate Accounting II
Variable       F       df      p           F       df      p              F       df      p
Gender       0.158    3,38    .69        4.292    3,40    .14           1.059    3,20    .31
Class        1.025    6,38    .39        0.515    6,40    .14           2.635    6,20    .09
Thus, instructors may include online tests in their traditional in-class courses without affecting the students' test performance, while reaping the benefits of online testing, which include instant grading and feedback to the students. When instructors test students online, they free up class time that would have been spent administering tests. The students and instructors may spend this additional time on new or more advanced subject matter.
The limitations of this study include the online test conditions, which I manipulated to make them similar to the in-class test conditions. The online tests were taken in a computerized classroom and proctored by the instructor. I used this condition to ensure that there were no incidents of cheating in the online tests. All students were required to log on and take the online tests at the same time. This condition is different from normal online testing conditions, in which a testing window is specified and the students are allowed to log in and take the test at any time within that window. Future researchers may try to determine whether there is a difference in student performance in proctored versus unproctored online tests.
The differences in the present test scores may have resulted from the complexity and difficulty of the individual tests. This study did not have a control group, and thus no conclusion can be made about the causes of those differences. A future extension of this work may include assigning all the tests, both online and in-class, to two random groups of students and comparing the online performance with the in-class performance of the students for the same test.
The results of this study showed that Intermediate Accounting II students had the lowest test score averages on the online tests, the in-class tests, and the final examination. Future researchers may examine why the Intermediate Accounting II students consistently had test scores that were significantly lower than those of students in the other two courses.
In addition, the results showed that non-School of Management students performed better on both the online tests and the in-class tests than did students registered in the School of Management. Further studies are necessary to determine whether these results are applicable to other accounting and nonaccounting courses in schools of management.
NOTES
Bridget Anakwe is an associate professor of accounting at Delaware State University.
Correspondence concerning this article should be addressed to Dr. Bridget Anakwe, Delaware State University, 1200 North Dupont Highway, Dover, DE 19901, USA.
E-mail: banakwe@desu.edu
REFERENCES
Bonham, S. W. (2001). Online homework: Does it make a difference? Physics Teacher, 39, 293–296.
Brown, B., & Liedholm, C. (2002). Can Web courses replace the classroom in Principles of Microeconomics? American Economic Review, 92, 444–448.
Clariana, R., & Wallace, P. (2002). Paper-based versus computer-based assessment: Key factors associated with the test mode effect. British Journal of Educational Technology, 33, 593–602.
Dellana, S., Collins, W., & West, D. (2000). Online education in a management science course: Effectiveness and performance factors. Journal of Education for Business, 76, 43–47.
Gagne, M., & Shepherd, M. (2001). Distance learning in accounting. T.H.E. Journal, 28(9), 58–65.
Grandzol, J., Eckerson, C., & Grandzol, C. (2004). Beyond no significant difference: Differentiating learning outcomes using multi-dimensional content analysis. DEOSNEWS, 13(8). Retrieved July 20, 2005, from http://www.ed.psu.edu/acsde/deosmain.html
Nichols, J., Shaffer, B., & Shockey, K. (2003). Changing the face of instruction: Is online or in-class more effective? College & Research Libraries, 64, 387–388.
Noyes, J., & Garland, K. (2003). VDT versus paper-based text: Reply to Mayes, Sims, and Koonce. International Journal of Industrial Ergonomics, 31, 411–423.
Pack, R., Jackson, W., Laughner, T., Thomas, K., & Wheeler, B. (2003, October). Setting a next generation CMS strategy. Paper presented at the Educause 2003 conference, Anaheim, CA.
Peat, M. (2000, February). Online assessment: The use of Web-based self-assessment materials to support self-directed learning. Paper presented at the Flexible Futures Tertiary Teaching and Learning Forum, Perth, Western Australia.
Reed, S., & Holley, J. (1989). The effect of final examination scheduling on student performance. Issues in Accounting Education, 4, 327–344.
Ricketts, C., & Sibley, D. (2002). Improving students' performance through computer-based assessments: Insights from recent research. Assessment and Education in Higher Education, 27, 476–479.
Rivera, J., & Rice, M. (2002). A comparison of student outcomes and satisfaction between traditional and Web-based course offerings. Online Journal of Distance Learning Administration, 5(3). Retrieved May 20, 2007, from http://www.westga.edu/%7Edistance/ojdla/fall53/rivera53.html
Schmidt, K. (2002). The Web-enhanced classroom. Journal of Industrial Technology, 18(2). Retrieved July 21, 2005, from http://www.nait.org/jit/Articles/schmidt011802.pdf
Schulman, A. H., & Sims, R. L. (1999). Learning in an online format versus an in-class format: An experimental study. Retrieved May 20, 2007, from http://www.thejournal.com/the/printarticle/?id=14147
Schulte, J. (1998). Virtual teaching in higher education. Retrieved July 20, 2005, from http://www.csun.edu/sociology/virexp.htm
Warren, L., & Holloman, H., Jr. (2005). Online instruction: Are the outcomes the same? Journal of Instructional Psychology, 32, 148–151.
Wheeler, B., & Jarboe, G. (2001). New poll shows faculty prefer Web-enhanced courses to either classroom-only or distance-only courses: Students' learning environment maximized with Web-enhanced classroom instruction; online-only rivals classroom-only instruction. Retrieved July 20, 2005, from http://www.webct.com/service/ViewContent?contentID=3522772
Wilson-Jones, L., & Caston, M. (2006). Attitudes of undergraduate education majors on Web-based and traditional instruction at Fayetteville State University. Journal of Instructional Psychology, 33, 144–146.
Wingard, R. (2004). Classroom teaching changes in Web-enhanced courses: A multi-institutional study. Educause Quarterly, 27(1), 26–35.
Zhao, Y., Lei, J., Lai, B., & Tan, H. (2005). What makes the difference? A practical analysis of research on the effectiveness of distance education. Teachers College Record, 107, 1836–1884.