Assessing Learning Outcomes in Quantitative Courses: Using Embedded Questions for Direct Assessment

BARBARA A. PRICE    CINDY H. RANDALL
GEORGIA SOUTHERN UNIVERSITY, STATESBORO, GEORGIA
ABSTRACT. Researchers can evaluate learning by using direct and indirect assessment. Although there are various ways to apply these approaches, two common techniques are pretests and posttests (direct assessment), in which students demonstrate mastery of topics or skills, and the use of knowledge surveys (indirect assessment). The present authors used these two techniques to demonstrate that student knowledge of course material increased significantly during the semester. Furthermore, the authors demonstrated that the indirect knowledge survey of perceived knowledge did not correlate with actual knowledge.

Keywords: assessment, learning outcomes, quantitative classes
Accreditation helps institutions show that they are attaining an acceptable level of quality within their degree programs (Lidtke & Yaverbaum, 2003; Pare, 1998; Valacich, 2001). Also, accreditation ensures national consistency of programs, provides peer review and recognition from outside sources, and brings programs onto the radar screen of potential employers (Rubino, 2001). To meet accreditation standards, faculty and administrators are responsible for the continuous improvement of degree programs and the measurement and documentation of student performance (Eastman, Aller, & Superville, 2001). Many colleges and universities rely heavily on program assessment to comply with accreditation and state demands (Eastman et al.; Schwendau, 1995) and to guide curriculum (Abunawass, Lloyd, & Rudolf, 2004; Blaha & Murphy, 2001).

Assessment to determine whether degree programs are providing appropriate education to graduates has become a key component of most accreditation self-study report requirements and a vehicle that is preferred for accountability purposes (Earl & Torrance, 2000). Several accreditation boards now require that colleges set learning goals and then assess how well these goals are met (Jones & Price, 2002). Learning goals that reflect the skills, attitudes, and knowledge that students are expected to acquire as a result of their programs of study are broad and not easily measured. Objective outcomes are clear statements outlining what is expected from students. They can be observed, measured, and used as indicators of goals (Martell & Calderon, 2005).
Under the Association to Advance Collegiate Schools of Business International's (AACSB's) new standards (Betters-Reed, Chacko, & Marlina, 2003) and the Southern Association of Colleges and Schools' (SACS's) new standards (Commission on Colleges, 2006), business programs will have to set goals to address what skills, attributes, and knowledge they want their students to master and must then be able to demonstrate that their graduates have met these goals. Establishing and implementing a system under which these programs can prove that their graduates have met the established goals is necessary under these standards. Any such system will have to rely on the creation and measurement of course objectives to serve as indicators that goals are being met.
Two basic approaches to assess learning are indirect and direct. Indirect approaches gather opinions of the quality and quantity of learning that takes place (Martell & Calderon, 2005). Techniques for gathering data by using indirect assessment include focus groups, exit interviews, and
surveys. One common type of survey is the knowledge survey (Nuhfer & Knipp, 2003). Knowledge surveys can cover the topics of an entire course—both skills and content knowledge—exhaustively. This coverage is accomplished through the use of a rating system in which students express their confidence in providing answers to problems or issues (Horan, 2004).
When completing a knowledge survey, students respond to each question with one of three choices: (a) "You feel confident that you can now answer the question sufficiently for graded test purposes"; (b) "You can now answer at least 50% of the question or you know precisely where you can quickly get the information and return (20 minutes or less) to provide a complete answer for graded purposes"; or (c) "You are not confident you could adequately answer the question for graded test purposes at this time" (Horan, 2004). This method of assessment allows students to consider complex problems and issues as well as course content knowledge (Nuhfer & Knipp, 2003).
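Responses of this kind are typically converted to a numeric confidence index so that they can be averaged and compared. A minimal sketch in Python follows; the 1–3 scale assigned to choices (c) through (a) is an assumption for illustration, because the article does not state the exact index values used.

```python
# Scoring a three-choice knowledge survey (sketch).
# The numeric scale below is an assumption, not the authors' published scale.
CONFIDENCE_SCALE = {
    "a": 3,  # confident of a complete, test-ready answer
    "b": 2,  # can answer at least 50%, or could within ~20 minutes
    "c": 1,  # not confident of an adequate answer at this time
}

def confidence_index(responses):
    """Average confidence across all survey questions for one student."""
    return sum(CONFIDENCE_SCALE[r] for r in responses) / len(responses)

# Example: one student's responses to a five-question survey.
print(confidence_index(["a", "b", "c", "b", "a"]))  # -> 2.2
```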
In contrast, direct assessment requires that students demonstrate mastery of topics or skills by using actual work completed by the students. This requirement can be accomplished by using papers, presentations, speeches, graded assessment items, or pretests and posttests. Pretests and posttests are probably the most widely used form of evaluating how students have progressed during the semester (Outcome Assessment, 2003). This method surveys students at the beginning and end of a course. With standard pretests and posttests, students can complete the same quiz at the beginning and end of the course, and a grade can be computed to illustrate how much students learned. Critics believe this approach is limiting because time alone dictates the amount of material on which students can be tested (Nuhfer & Knipp, 2003). Proponents feel that these tests are specifically designed to coincide with the curriculum of the course and can focus on the missions, goals, and objectives of the department or university (Outcome Assessment, 2003).
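As a simple illustration of the standard pretest/posttest comparison just described, the sketch below computes a per-student raw gain from the same quiz given twice; the student identifiers and scores are invented for illustration.

```python
# Raw pretest/posttest gain per student (sketch; scores are hypothetical).
def learning_gain(pre: float, post: float) -> float:
    """Percentage-point improvement from pretest to posttest."""
    return post - pre

students = {"S1": (35.0, 82.0), "S2": (50.0, 74.0), "S3": (28.0, 66.0)}
for sid, (pre, post) in students.items():
    print(f"{sid}: gain = {learning_gain(pre, post):.1f} points")
```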
Regardless of which of the direct methods is used, educators can measure the progress of students by using course-embedded assessment. Course-embedded assessment, a cutting-edge formalized assessment (Gerretson & Golson, 2005), requires that the products of students' work be evaluated by using those criteria and standards established in the course objectives. It tends to be informal but well organized (Treagust, Jacobowitz, Gallagher, & Parker, 2003). By embedding, the opportunities to assess progress made by students are integrated into regular instructional material and are indistinguishable from day-to-day classroom activities (Keenan-Takagi, 2000; Wilson & Sloane, 2000). The results are then shared with the faculty so that learning and curriculum can be improved. This technique is efficient and insightful (Martell & Calderon, 2005) and guarantees consistency within multiple sections of the same course by using the same outcomes and rubrics (Gerretson & Golson, 2005).
Hypotheses
The goal of the present study was to provide insight into the use of direct versus indirect techniques as means of assessing student learning, with the hope that these findings can be used as input to course improvement as well as to assessment and accreditation self-studies. To accomplish this goal, we asked students at a university who were enrolled in Management 6330 during the 2004–2005 academic year to participate in a knowledge survey project including a pretest and posttest validity check. Management 6330, or Quantitative Methods for Business, is an introductory course in statistics and management science techniques required for students entering the MBA or MAcc degree programs who have either not acquired the knowledge from a BA degree program or have paused for some time since taking decision analysis courses. Using these students' scores, we compared pretest and posttest scores and knowledge survey scores on a question-by-question basis. Additionally, pretest and posttest and before-and-after knowledge survey scores were compared. Last, the class averages on both instruments were compared for the data gathered at the beginning and then at the end of the semester.
We studied the following hypotheses:
1. At the beginning of a course, students' perceived knowledge and actual knowledge are mutually independent.
2. At the end of a course, students' perceived knowledge and actual knowledge are related.
3. Students' perceived knowledge is significantly greater at the end of a course than at the beginning of a course.
4. Students' actual knowledge is significantly greater at the end of a course than at the beginning of a course.
5. Average perceived knowledge for students is significantly greater at the end of a course than at the beginning of a course.
6. Average actual knowledge for students is significantly greater at the end of a course than at the beginning of a course.
METHOD
During the 2004–2005 academic year, Dr. David W. Robinson conducted a knowledge survey trial at a university in the southeastern United States and invited all faculty members to participate. Those who chose to do so created a list of questions that comprehensively expressed the content of their classes. Then, Robinson (2004) used these questions to construct a knowledge survey instrument. One class whose professor chose to participate in the trial was Management 6330, Quantitative Methods for Business. This class is taught every semester.
During the fall 2004 and spring 2005 semesters, students enrolled in Management 6330 were participants in the knowledge survey project. As a participant in this project, each student completed a Web-based survey during the first class. The survey asked each student to indicate confidence in being able to answer questions on material that would be covered over the course of the semester. At the end of the semester, each student completed the same survey, providing a means to assess the learning that occurred over the semester. These surveys were administered via the Web and did not count in the student's course average. The faculty member teaching the class did not have access to the survey results until after the semester ended.
One problem with surveys in which students are asked if they have adequate knowledge without having to prove that knowledge is that some students exhibit overconfidence (Nuhfer & Knipp, 2003). To overcome this problem, during the second night of class each student received the same pretest and actually solved the test problems. Another problem often encountered is that students fail to take the test seriously if no incentive is attached (THEC Performance Funding, 2003). In fall 2004, this activity did not count as part of the student's final grade; however, with an overall score of 70% or higher, the student could elect to exempt Management 6330. If the student remained in the course, this same test was administered at the end of the fall semester. The score on this exam accounted for 10% of the student's final class average.
In spring 2005, Management 6330 students again chose to participate in the assessment study. After the initial trial during the prior semester, the professor refined both the survey and the process. One change involved proof of competency for Management 6330. Instead of exempting the course with an overall passing grade (70 or above) on the pretest, students had to score a 70 or higher in each of the six competency areas (descriptive/graphical analysis, probability, inference, decision analysis, linear programming, and quality control processes). The second change involved the posttest. Students in the fall semester complained about the number of tests facing them at the end of the course. In the spring, instead of giving a separate posttest that counted as part of the final exam, the professor embedded a random selection of pretest questions from each of the six competency areas into the final exam. These questions, which accounted for roughly half of the original pretest questions, were compared with the pretest score for assessment.
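The spring-semester embedding step amounts to a stratified random draw from the pretest question pool: roughly half the questions from each competency area are sampled into the final exam. The sketch below illustrates the idea; the question identifiers, pool sizes, and the exact one-half fraction are illustrative assumptions, not the actual instrument.

```python
# Stratified random selection of pretest questions for embedding (sketch).
import random

# Hypothetical question pools for the six competency areas.
pretest_pool = {
    "descriptive/graphical analysis": ["D1", "D2", "D3", "D4"],
    "probability": ["P1", "P2", "P3", "P4"],
    "inference": ["I1", "I2", "I3", "I4"],
    "decision analysis": ["A1", "A2", "A3", "A4"],
    "linear programming": ["L1", "L2", "L3", "L4"],
    "quality control processes": ["Q1", "Q2", "Q3", "Q4"],
}

def embed_questions(pool, fraction=0.5, seed=None):
    """Randomly sample a fraction of each area's pretest questions."""
    rng = random.Random(seed)
    return {area: rng.sample(qs, max(1, round(len(qs) * fraction)))
            for area, qs in pool.items()}

print(embed_questions(pretest_pool, seed=42))
```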
RESULTS
Assessment of students in Management 6330 began on the first night of class. Although at the end of the fall semester class enrollment showed a total of 29 students, some enrolled late. Therefore, only 23 completed both the pretest and posttest knowledge survey instruments. Again in the spring, students enrolled late, and some did not complete the pretest knowledge survey instrument. Of the 25 students who finished the course, only 17 completed both the pretest and posttest knowledge survey instruments. Therefore, in the fall and spring semesters, 40 students completed both pretest and posttest knowledge survey instruments. A total of 54 students completed the pretest and posttest by solving problems.
Because we recorded student assessment of perceived knowledge by using ordinal data and per-question actual knowledge by using binary data (0 = incorrect, 1 = correct), we used nonparametric statistical procedures to test five of the six hypotheses. We addressed Hypothesis 1 by using rank correlations in which Spearman's rho was calculated to test significance. We tested the following hypotheses:
H0: At the beginning of the semester, the measures of students' perceived knowledge and actual knowledge are mutually independent.

H1: At the beginning of the semester, a positive or negative relationship between the measures of students' perceived knowledge and actual knowledge exists.
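For illustration, the per-student test of Hypothesis 1 can be computed as a rank correlation between one student's ordinal confidence ratings and the binary per-question outcomes. The sketch below uses scipy's spearmanr; the data values are invented.

```python
# Per-student Spearman rank correlation (sketch; data are hypothetical).
from scipy.stats import spearmanr

perceived = [3, 2, 1, 2, 3, 1, 2, 2, 1, 3]  # survey confidence per question
actual    = [1, 0, 0, 1, 1, 0, 0, 1, 0, 1]  # 1 = correct, 0 = incorrect

rho, p = spearmanr(perceived, actual)
# Fail to reject independence (H0) when p exceeds the chosen alpha.
print(f"rho = {rho:.3f}, p = {p:.3f}, independent at .05? {p > .05}")
```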
Twenty-one of the 23 students who completed the pretest assessments for perceived and actual knowledge at the beginning of the fall semester and 13 of the 17 who completed the pretest assessments for perceived and actual knowledge in the spring semester produced results showing no significant relationship between the two measures. Two students in the fall and 4 in the spring revealed a significant relationship between what they believed they knew and what they actually knew, 3 at the .05 level of significance and the others at the .10 level of significance (see Table 1).
The results indicated that at the beginning of the semester most students could not accurately assess their levels of existing knowledge. Of those assessed, 85% showed no significant relationship between their perceived knowledge and actual knowledge of the subject. In other words, at the beginning of the semester, the students were unable to determine the difference between perceived knowledge and actual knowledge. Therefore, H0 cannot be rejected. Hypothesis 1 is supported.
We also addressed Hypothesis 2 by using rank correlations in which Spearman’s rho was calculated to test significance. We tested the following hypotheses:
H0: At the end of the semester, the measures of students' perceived knowledge and of their actual knowledge are mutually independent.

H2: At the end of the semester, a positive or negative relationship between the measures of students' perceived knowledge and of their actual knowledge exists.
Seventeen of the 23 students who completed the posttest assessments for perceived knowledge and actual knowledge during the fall and 12 of the 17 students who completed the posttest assessments for perceived knowledge and actual knowledge in the spring produced test results showing no significant relationship between the two measures. Only 6 students in the fall and 5 in the spring revealed a significant relationship between what they believed they knew and what they actually did know, 5 at the .01 level of significance, 4 at the .05 level of significance, and 2 at the .10 level of significance (see Table 2).
At the end of both semesters, most students were not accurate in their assessment of acquired knowledge. Although a slight improvement occurred, by the end of the semester most students were still unable to determine the difference between perceived knowledge and actual knowledge. Just over 72% of those assessed after they had completed the course showed no significant relationship between perceived knowledge and actual knowledge of the subject. Therefore, H0 cannot be rejected and Hypothesis 2 is not supported.
Hypothesis 3 compared perceived knowledge at the beginning of the semester to perceived knowledge at the end of the semester. Because data from the knowledge survey were ordinal, with students responding to one of three choices, sign tests were used to test the differences between the pretest assessment and the posttest assessment. We tested the following hypotheses:
H0: At the end of the semester, students' perceived knowledge is not greater than at the beginning.

H3: At the end of the semester, students' perceived knowledge is significantly greater than at the beginning.

We compared assessment results for 40 (23 fall and 17 spring) students. In all cases, students' perceived knowledge at the end of the semester was significantly greater at the .01 level of significance than their perceived knowledge at the beginning of the semester (see Figure 1). Analyses failed to support the null hypothesis (H0). Therefore, Hypothesis 3 was supported.
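The per-student computation here is a matched-pairs sign test across the survey questions: posttest ratings are compared with pretest ratings, ties are discarded, and the count of gains is tested against a fair coin. A minimal sketch follows; the ratings are invented, and using scipy's binomtest to obtain the sign-test p value is one reasonable implementation rather than the authors' documented procedure.

```python
# Sign test on paired ordinal ratings (sketch; data are hypothetical).
from scipy.stats import binomtest  # requires SciPy >= 1.7

pre  = [1, 1, 2, 1, 3, 2, 1, 1, 2, 1]   # pretest confidence per question
post = [3, 2, 2, 3, 3, 3, 2, 3, 3, 2]   # posttest confidence per question

gains  = sum(a > b for a, b in zip(post, pre))
losses = sum(a < b for a, b in zip(post, pre))  # ties are ignored

# One-sided sign test: under H0, gains and losses are equally likely.
result = binomtest(gains, gains + losses, p=0.5, alternative="greater")
print(f"gains = {gains}, losses = {losses}, p = {result.pvalue:.4f}")
```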
Hypothesis 4 theorizes that students' actual knowledge at the end of the semester is significantly greater than at the beginning. Sign tests were used for this analysis. The following hypotheses were tested:
H0: The difference between actual knowledge at the end of the semester and actual knowledge at the beginning is not significant.
H4: At the end of the semester, students' actual knowledge is significantly greater than at the beginning.
Because this pretest assessment was administered on the second night of class and all members of the class were present, a total of 29 students in the fall and 25 in the spring took this pretest assessment. Of the 54 students assessed, 44 demonstrated that their actual knowledge improved significantly over the course of the semester (see Table 3).
More than three fourths of those assessed (81.48%) gained a significant amount of knowledge of the subject over the course of the semester (see Figure 2). On the basis of these test results, we rejected the null hypothesis (H0). Hypothesis 4 was supported.
TABLE 1. Relationship of Perceived and Actual Knowledge at the Beginning of the Semester

Semester       Spearman's rho    p
Spring 2005    .190              .021
Fall 2004      .341              .034
Spring 2005    .196              .037
Spring 2005    .234              .081
Spring 2005    .258              .091
Fall 2004      .268              .099
TABLE 2. Relationship of Perceived and Actual Knowledge at the End of the Semester

Fall 2004 students           Spring 2005 students
Spearman's rho    p          Spearman's rho    p
.478              .002       .404              .010
.465              .003       .378              .016
.456              .004       .342              .031
.456              .004       .329              .038
.378              .018       .283              .077
.305              .059
FIGURE 1. Perceived knowledge at the beginning and end of the fall semester. KSA = posttest for knowledge survey; KSB = pretest for knowledge survey; Q = question number. [Figure: confidence index (0–3.5) plotted by question, Q1–Q44, for pretest (KSB1) and posttest (KSA1).]
Hypothesis 5 examined the difference between the average scores of pretests and those of posttests regarding perceived knowledge. This comparison was made using the Wilcoxon signed ranks test (Conover, 1971). The following hypotheses were tested:
H0: On the average, perceived knowledge does not appear to be greater at the end of the semester than perceived knowledge at the beginning of the semester.

H5: On the average, perceived knowledge appears to be significantly greater at the end of the semester than perceived knowledge at the beginning of the semester.
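A Wilcoxon signed ranks test on each student's average survey score before and after the course is straightforward to reproduce. The sketch below uses scipy's wilcoxon; the averages are invented for illustration.

```python
# Wilcoxon signed ranks test on average perceived knowledge (sketch).
from scipy.stats import wilcoxon

avg_pre  = [1.2, 1.5, 1.1, 1.8, 1.4, 2.0, 1.3, 1.6]  # hypothetical averages
avg_post = [2.6, 2.9, 2.4, 2.8, 2.7, 2.9, 2.5, 2.8]

# One-sided test: posttest averages exceed pretest averages under H5.
stat, p = wilcoxon(avg_post, avg_pre, alternative="greater")
print(f"W = {stat}, p = {p:.4f}")
```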
For 33 of the 40 students who completed the pretest and posttest knowledge surveys, average scores on perceived knowledge after the course was completed were higher than those before the course began. Average assessment scores of 6 students in the fall class were the same in the pretest and posttest results. Only 1 student (fall semester) had a lower score at the end of the course (see Figure 3). The Wilcoxon signed ranks test (Conover, 1971) indicated that the difference in pretest and posttest average assessment scores regarding perceived knowledge was significant at the .01 level in the fall and at the .00 level in the spring.
More than 80% of the students demonstrated a significantly greater degree of perceived knowledge of class material at the end of the semester. This does not support the null hypothesis (H0). Hypothesis 5 was supported.
Hypothesis 6 questioned the difference in the average actual knowledge gained over the course of the semester. For this assessment, questions were weighted on the basis of their difficulty, and results were at the ratio level. A paired t test was used. The hypotheses tested were the following:
H0: On average, actual knowledge does not appear to be greater at the end of the semester than actual knowledge at the beginning of the semester.

H6: On average, actual knowledge appears to be significantly greater at the end of the semester than actual knowledge at the beginning of the semester.
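Because the difficulty-weighted scores are ratio-level, a paired t test applies. The sketch below assumes the weighting has already been applied to produce each student's pretest and posttest totals; the scores themselves are invented for illustration.

```python
# Paired t test on difficulty-weighted test scores (sketch; data hypothetical).
from scipy.stats import ttest_rel  # `alternative` requires SciPy >= 1.6

pretest_scores  = [42.0, 55.5, 38.0, 61.0, 47.5, 52.0, 44.0, 58.5]
posttest_scores = [78.0, 84.5, 71.0, 90.0, 76.5, 88.0, 80.0, 86.5]

# One-sided test: mean posttest score exceeds mean pretest score under H6.
t, p = ttest_rel(posttest_scores, pretest_scores, alternative="greater")
print(f"t = {t:.2f}, p = {p:.4f}")
```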
We tested students on course concepts at the beginning and the end of the semester. We compared the average test scores and found that the difference in the pretest and the posttest was significant at the .01 level in the fall and at the .00 level in the spring. On average, students demonstrated a significant gain in actual knowledge over the course of the semester (see Figure 4).
On the basis of the significant t-test results, we concluded that students did perform significantly better at the end of the semester. Therefore, the null hypothesis (H0) was rejected. Hypothesis 6 was supported.
DISCUSSION
Colleges and universities wishing to attain and maintain accreditation, demonstrate compliance with state and federal guidelines, and direct curriculum rely on the assessment of students. Assessment is one means of exhibiting that learning is taking place in the classroom. The assessments can be conducted in various ways; two common ways are through (a) the use of pretests and posttests, in which students demonstrate mastery of topics or skills, and (b) the use of knowledge surveys. In the present study, we used both assessment techniques to determine whether students were learning.
Assessment is a necessary tool with which schools can exhibit compliance with accreditation, state, and federal guidelines. It is not easy to implement, and it is time consuming.

TABLE 3. Comparison of Actual Knowledge at the End and Beginning of the Semester

p                Number of students whose actual       % of total number of
                 knowledge significantly increased     students evaluated
.01              27                                    50.00
.05               9                                    16.67
.10               8                                    14.81
Not significant  10                                    18.52
FIGURE 2. Actual knowledge at the beginning and end of the semester. Pre = pretest for actual knowledge; Post = posttest for actual knowledge; Q = question number. [Figure: correct/incorrect scores (0–1) plotted by question, Q1–Q44.]
FIGURE 3. Average perceived knowledge at the beginning and end of the semester. AvgKA = average score for posttest, perceived knowledge; AvgKB = average score for pretest, perceived knowledge. [Figure: average confidence index (0–3.5) plotted per student, 1–29.]
FIGURE 4. Average actual knowledge at the beginning and end of the semester. [Figure: paired histograms of pretest and posttest actual knowledge, number of students by test-score bracket (≤0 through >100).]
Once an assessment test has been created, it must be evaluated and fine-tuned each semester; however, the benefits more than offset the time and effort that assessment requires.
Posttest assessment can be used to revise course content so that areas in which students are weak can be emphasized. Similarly, pretest results can identify areas in which students have prior knowledge, and teachers can dedicate less class time to those topics. In short, both the teacher and the students can benefit from assessment. Faculty should embrace assessment as a means to enhance their courses and not view assessment as another hurdle in the road to compliance.
To successfully use these techniques for this study, we had to establish learning objectives for Management 6330, the course that we used for this research project. Questions or problems had to be created to focus on course topics and to enable students to demonstrate that these goals had been met. These activities were time consuming.
Through pretests and posttests, we assessed both perceived knowledge and actual knowledge of course material. These data were compared at the beginning and the end of the semester and were compared against each other. The levels of perceived knowledge and actual knowledge climbed significantly, both when testing data student by student and when examining the average amount learned. Students were not able to accurately perceive their knowledge level.
Is it unusual that the students were not able to accurately perceive their knowledge level? This is a difficult, if not impossible, question to answer. However, Rogers (2006) noted, "as evidence of student learning, indirect methods are not as strong as direct measures because assumptions must be made about what exactly the self-report means." The results of our study indicate that self-reporting does not mean much. Rogers went on to state that "it is important to remember that all assessment methods have their limitations and contain some bias." The inability of the students to identify their knowledge level implies that to accurately measure learning, direct measures should be employed.
NOTES
Barbara A. Price, PhD, is a professor of quantitative analysis in the College of Business Administration at Georgia Southern University. She has more than 50 publications in various professional journals and proceedings, including the Decision Sciences Journal of Innovative Education, Journal of Education for Business, Inroads—the SIGCSE Bulletin, and Journal of Information Technology Education.

Cindy H. Randall is an assistant professor of quantitative analysis in the College of Business Administration at Georgia Southern University. She has published in numerous proceedings as well as in the International Journal of Research in Marketing, Journal of Marketing Theory and Practice, Marketing Management Journal, Journal of Transportation Management, and Inroads—the SIGCSE Bulletin.
Correspondence concerning this article should be addressed to Cindy H. Randall, Department of Finance and Quantitative Analysis, Georgia Southern University, Box 8151, COBA, Statesboro, GA 30460, USA. E-mail: crandall@georgiasouthern.edu
REFERENCES
Abunawass, A., Lloyd, W., & Rudolf, E. (2004). COMPASS: A CS program assessment project. Proceedings, ITICSE, 36(3), 127–131.

Betters-Reed, B. L., Chacko, J. M., & Marlina, D. (2003). Assurance of learning: Small school strategies. Continuous improvement symposium, AACSB conferences and seminars. Retrieved November 3, 2006, from http://www.aacsb.edu/handouts/CIS03/cis03-prgm.asp

Blaha, K. D., & Murphy, L. C. (2001). Targeting assessment: How to hit the bull's eye. Journal of Computing in Small Colleges, 17(2), 106–115.

Commission on Colleges. (2006). Principles of accreditation: Foundation for quality enhancement by the Southern Association of Colleges and Schools (2002–2006 ed.). Retrieved November 3, 2006, from http://www.sacscoc.org/pdf/PrinciplesOfAccreditation.PDF

Conover, W. J. (1971). Practical nonparametric statistics. New York: Wiley.

Earl, L., & Torrance, N. (2000). Embedding accountability and improvement into large-scale assessment: What difference does it make? Peabody Journal of Education, 75(4), 114–141.

Eastman, J. K., Aller, R. C., & Superville, C. L. (2001). Developing an MBA assessment program: Guidance from the literature and one program's experience. Retrieved November 10, 2006, from http://www.westga.edu/~bquest/2001/assess.html

Gerretson, H., & Golson, E. (2005). Synopsis of the use of course-embedded assessment in a medium sized public university's general education program. Journal of General Education, 54(2), 139–149.

Horan, S. (2004). Using knowledge surveys to direct the class. Retrieved November 3, 2006, from http://spacegrant.nmsu.edu/NMSU/2004/horan.pdf

Jones, L. G., & Price, A. L. (2002). Changes in computer science accreditation. Communications of the ACM, 45(8), 99–103.

Keenan-Takagi, K. (2000). Embedding assessment in choral teaching. Music Educators Journal, 86(4), 42–49.

Lidtke, D. K., & Yaverbaum, G. J. (2003). Developing accreditation for information systems education. IEEE, 5(1), 41–45.

Martell, K., & Calderon, T. (2005). Assessment of student learning in business schools: Best practice each step of the way (Vol. 1, No. 1). Tallahassee, FL: Association for Institutional Research.

Nuhfer, E., & Knipp, D. (2003). The knowledge survey: A tool for all reasons. To Improve the Academy, 21, 59–78.

Outcome Assessment. (2003). Office of the Provost at The University of Wisconsin–Madison. Retrieved November 10, 2006, from http://www.provost.wisc.edu/assessment/manual/manual12.html

Pare, M. A. (Ed.). (1998). Certification and accreditation programs directory: A descriptive guide to national voluntary certification and accreditation programs for professionals and institutions (2nd ed.). Farmington Hills, MI: Gale Group.

Robinson, D. W. (2004). The Georgia Southern knowledge survey FAQ. Retrieved July 1, 2004, from http://ogeechee.litphil.georgiasouthern.edu/nuncio/faq.php

Rogers, G. (2006). Assessment 101: Direct and indirect assessments: What are they good for? Retrieved May 8, 2008, from http://www.abet.org/Linked%20Documents-UPDATE/Newsletters/06-08-CM.pdf

Rubino, F. J. (2001). Survey highlights importance of accreditation for engineers. ASHRAE Insight, 16(7), 27–31.

Schwendau, M. (1995). College quality assessment: The double-edged sword. Tech Directions, 54(9), 30–32.

THEC Performance Funding. (2003). Pilot evaluation: Assessment of general education learning outcomes [Standard I.B. 2002–03]. Retrieved July 30, 2004, from http://www.state.tn.us/thec/2004web/division_pages/ppr_pages/pdfs/Policy/Gen%20Ed%20RSCC%20Pilot.pdf

Treagust, D. F., Jacobowitz, R., Gallagher, J. J., & Parker, J. (2003). Embed assessment in your teaching. Science Scope, 26(6), 36–39.

Valacich, J. (2001). Accreditation in the information academic discipline. Retrieved November 5, 2006, from http://www.aisnet.org/Curriculum/AIS_AcreditFinal.doc

Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13(2), 181–208.