
Journal of Education for Business

ISSN: 0883-2323 (Print) 1940-3356 (Online) Journal homepage: http://www.tandfonline.com/loi/vjeb20

Assessing Learning Outcomes in Quantitative Courses: Using Embedded Questions for Direct Assessment

Barbara A. Price & Cindy H. Randall

To cite this article: Barbara A. Price & Cindy H. Randall (2008) Assessing Learning Outcomes in Quantitative Courses: Using Embedded Questions for Direct Assessment, Journal of Education for Business, 83:5, 288-294, DOI: 10.3200/JOEB.83.5.288-294

To link to this article: http://dx.doi.org/10.3200/JOEB.83.5.288-294

Published online: 07 Aug 2010.



Assessing Learning Outcomes in Quantitative Courses: Using Embedded Questions for Direct Assessment

BARBARA A. PRICE and CINDY H. RANDALL
GEORGIA SOUTHERN UNIVERSITY, STATESBORO, GEORGIA

ABSTRACT. Researchers can evaluate learning by using direct and indirect assessment. Although there are various ways to apply these approaches, two common techniques are pretests and posttests (direct assessment), in which students demonstrate mastery of topics or skills, and the use of knowledge surveys (indirect assessment). The present authors used these two techniques to demonstrate that student knowledge of course material increased significantly during the semester. Furthermore, the authors demonstrated that the indirect knowledge survey of perceived knowledge did not correlate with actual knowledge.

Keywords: assessment, learning outcomes, quantitative classes

Copyright © 2008 Heldref Publications

Accreditation helps institutions show that they are attaining an acceptable level of quality within their degree programs (Lidtke & Yaverbaum, 2003; Pare, 1998; Valacich, 2001). Also, accreditation ensures national consistency of programs, provides peer review and recognition from outside sources, and brings programs onto the radar screen of potential employers (Rubino, 2001). To meet accreditation standards, faculty and administrators are responsible for the continuous improvement of degree programs and the measurement and documentation of student performance (Eastman, Aller, & Superville, 2001). Many colleges and universities rely heavily on program assessment to comply with accreditation and state demands (Eastman et al., 2001; Schwendau, 1995) and to guide curriculum (Abunawass, Lloyd, & Rudolf, 2004; Blaha & Murphy, 2001).

Assessment to determine whether degree programs are providing appropriate education to graduates has become a key component of most accreditation self-study report requirements and a preferred vehicle for accountability purposes (Earl & Torrance, 2000). Several accreditation boards now require that colleges set learning goals and then assess how well these goals are met (Jones & Price, 2002). Learning goals that reflect the skills, attitudes, and knowledge that students are expected to acquire as a result of their programs of study are broad and not easily measured. Objective outcomes are clear statements outlining what is expected from students. They can be observed, measured, and used as indicators of goals (Martell & Calderon, 2005).

Under the Association to Advance Collegiate Schools of Business International's (AACSB's) new standards (Betters-Reed, Chacko, & Marlina, 2003) and the Southern Association of Colleges and Schools' (SACS's) new standards (Commission on Colleges, 2006), business programs will have to set goals to address what skills, attributes, and knowledge they want their students to master and must then be able to demonstrate that their graduates have met these goals. Establishing and implementing a system under which these programs can prove that their graduates have met the established goals is necessary under these standards. Any such system will have to rely on the creation and measurement of course objectives to serve as indicators that goals are being met.

Two basic approaches to assess learning are indirect and direct. Indirect approaches gather opinions of the quality and quantity of learning that takes place (Martell & Calderon, 2005). Techniques for gathering data by using indirect assessment include focus groups, exit interviews, and surveys. One common type of survey is the knowledge survey (Nuhfer & Knipp, 2003). Knowledge surveys can cover the topics of an entire course (both skills and content knowledge) exhaustively. This coverage is accomplished through the use of a rating system in which students express their confidence in providing answers to problems or issues (Horan, 2004).

Using a knowledge survey, the student responded to one of three choices: (a) "You feel confident that you can now answer the question sufficiently for graded test purposes"; (b) "You can now answer at least 50% of the question or you know precisely where you can quickly get the information and return (20 minutes or less) to provide a complete answer for graded purposes"; or (c) "You are not confident you could adequately answer the question for graded test purposes at this time" (Horan, 2004). This method of assessment allows students to consider complex problems and issues as well as course content knowledge (Nuhfer & Knipp, 2003).

In contrast, direct assessment requires that students demonstrate mastery of topics or skills by using actual work completed by the students. This requirement can be accomplished by using papers, presentations, speeches, graded assessment items, or pretests and posttests. Pretests and posttests are probably the most widely used form of evaluating how students have progressed during the semester (Outcome Assessment, 2003). This method surveys students at the beginning and end of a course. With standard pretests and posttests, students can complete the same quiz at the beginning and end of the course, and a grade can be computed to illustrate how much students learned. Critics believe this approach is limiting because time alone dictates the amount of material on which students can be tested (Nuhfer & Knipp, 2003). Proponents feel that these tests are specifically designed to coincide with the curriculum of the course and can focus on the missions, goals, and objectives of the department or university (Outcome Assessment, 2003).

Regardless of which of the direct methods is used, educators can measure the progress of students by using course-embedded assessment. Course-embedded assessment, a cutting-edge formalized assessment (Gerretson & Golson, 2005), requires that the products of students' work be evaluated by using those criteria and standards established in the course objectives. It tends to be informal but well organized (Treagust, Jacobowitz, Gallagher, & Parker, 2003). By embedding, the opportunities to assess progress made by students are integrated into regular instructional material and are indistinguishable from day-to-day classroom activities (Keenan-Takagi, 2000; Wilson & Sloane, 2000). The results are then shared with the faculty so that learning and curriculum can be improved. This technique is efficient and insightful (Martell & Calderon, 2005) and guarantees consistency within multiple sections of the same course by using the same outcomes and rubrics (Gerretson & Golson, 2005).

Hypotheses

The goal of the present study was to provide insight on the use of direct versus indirect techniques as means of assessing student learning, with the hope that these findings can be used as input to course improvement as well as assessment and accreditation self-studies. To accomplish this goal, we asked students at a university who were enrolled in Management 6330 during the 2004–2005 academic year to participate in a knowledge survey project including a pretest and posttest validity check. Management 6330, or Quantitative Methods for Business, is an introductory course in statistics and management science techniques required for students entering the MBA or MAcc degree programs who have either not acquired the knowledge from a BA degree program or have paused for some time since taking decision analysis courses. Using these students' scores, we compared pretest and posttest scores and knowledge survey scores on a question-by-question basis. Additionally, pretest and posttest and before-and-after knowledge survey scores were compared. Last, the class averages on both instruments were compared for the data gathered at the beginning and then at the end of the semester.

We studied the following hypotheses:

1. At the beginning of a course, students' perceived knowledge and actual knowledge are mutually independent.

2. At the end of a course, students' perceived knowledge and actual knowledge are related.

3. Students' perceived knowledge is significantly greater at the end of a course than at the beginning of a course.

4. Students' actual knowledge is significantly greater at the end of a course than at the beginning of a course.

5. Average perceived knowledge for students is significantly greater at the end of a course than at the beginning of a course.

6. Average actual knowledge for students is significantly greater at the end of a course than at the beginning of a course.

METHOD

During the 2004–2005 academic year, Dr. David W. Robinson conducted a knowledge survey trial at a university in the southeastern United States and invited all faculty members to participate. Those who chose to do so created a list of questions that comprehensively expressed the content of their classes. Then, Robinson (2004) used these questions to construct a knowledge survey instrument. One class whose professor chose to participate in the trial was Management 6330, Quantitative Methods for Business. This class is taught every semester.

During the fall 2004 and spring 2005 semesters, students enrolled in Management 6330 were participants in the knowledge survey project. As a participant in this project, each student completed a Web-based survey during the first class. The survey asked each student to indicate confidence in being able to answer questions on material that would be covered over the course of the semester. At the end of the semester, each student completed the same survey, providing a means to assess the learning that occurred over the semester. These surveys were administered via the Web and did not count in the student's course average. The faculty member teaching the class did not have access to the survey results until after the semester ended.

One problem with surveys in which students are asked if they have adequate knowledge without having to prove knowledge is that some students exhibit overconfidence (Nuhfer & Knipp, 2003). To overcome this problem, during the second night of class each student received the same pretest and actually solved the test problems. Another problem often encountered is that students fail to take the test seriously if no incentive is attached (THEC Performance Funding, 2003). In fall 2004, this activity did not count as part of the student's final grade; however, with an overall score of 70% or higher, the student could elect to exempt Management 6330. If the student remained in the course, this same test was administered at the end of the fall semester. The score on this exam accounted for 10% of the student's final class average.

In spring 2005, Management 6330 students again chose to participate in the assessment study. After the initial trial during the prior semester, the professor refined both the survey and the process. One change involved proof of competency for Management 6330. Instead of exempting the course with an overall passing grade (70 or above) on the pretest, the students had to score a 70 or higher in each of the six competency areas (descriptive/graphical analysis, probability, inference, decision analysis, linear programming, and quality control processes). The second change involved the posttest. Students in the fall semester complained about the number of tests facing them at the end of the course. In the spring, instead of giving a separate posttest that counted as part of the final exam, the professor embedded a random selection of pretest questions from each of the six competency areas into the final exam. These questions, which accounted for roughly half of the original pretest questions, were compared with the pretest score for assessment.

RESULTS

Assessment of students in Management 6330 began on the first night of class. Although at the end of the fall semester class enrollment showed a total of 29 students, some enrolled late. Therefore, only 23 completed both the pretest and posttest knowledge survey instrument. Again in the spring, students enrolled late, and some did not complete the pretest knowledge survey instrument. Of the 25 students who finished the course, only 17 completed both the pretest and posttest knowledge survey instruments. Therefore, in the fall and spring semesters, 40 students completed both pretest and posttest knowledge survey instruments. A total of 54 students completed the pretest and posttest by solving problems.

Because we recorded student assessment of perceived knowledge by using ordinal data and per-question actual knowledge by using binary data (0 = incorrect, 1 = correct), nonparametric methods for statistical procedures were used to test five of the six hypotheses. Hypothesis 1 was addressed by using rank correlations in which Spearman's rho was calculated to test significance. The authors tested the following hypotheses:

H0: At the beginning of the semester, the measures of students' perceived knowledge and actual knowledge are mutually independent.

H1: At the beginning of the semester, a positive or negative relationship between the measures of students' perceived knowledge and actual knowledge exists.

Twenty-one of the 23 students who completed the pretest assessments for perceived and actual knowledge at the beginning of the fall semester and 13 of the 17 who completed the pretest assessments for perceived and actual knowledge in the spring semester produced results showing no significant relationship between the two measures. Two students in the fall and 4 in the spring revealed a significant relationship between what they believed they knew and what they actually knew, 3 at the .05 level of significance and the others at the .10 level of significance (see Table 1).

The results indicated that at the beginning of the semester most students could not accurately assess their levels of existing knowledge. Of those assessed, 85% showed no significant relationship between their perceived knowledge and actual knowledge of the subject. In other words, at the beginning of the semester, the students were unable to determine the difference between perceived knowledge and actual knowledge. Therefore, H0 cannot be rejected. Hypothesis 1 is supported.
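The per-student rank correlation described above can be sketched as follows. The data are hypothetical, since the study's raw responses are not published, and SciPy is assumed to be available; each student's ordinal confidence rating (3-point scale) is correlated with the binary correctness of their pretest answers.

```python
# Sketch of the per-student Spearman rank-correlation test.
# Hypothetical data: one student's responses across 12 survey questions.
from scipy.stats import spearmanr

# Perceived knowledge on the 3-point confidence scale (a = 3, b = 2, c = 1)
perceived = [3, 2, 1, 2, 3, 1, 2, 1, 3, 2, 1, 1]
# Actual knowledge scored per question: 0 = incorrect, 1 = correct
actual = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1]

rho, p = spearmanr(perceived, actual)
print(f"Spearman's rho = {rho:.3f}, p = {p:.3f}")
# A p-value above .10 would mean, as for most students in Table 1,
# no significant relationship between perceived and actual knowledge.
```

Running one such test per student, as the authors did, yields the collection of rho and p values reported in Tables 1 and 2.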

We also addressed Hypothesis 2 by using rank correlations in which Spearman's rho was calculated to test significance. We tested the following hypotheses:

H0: At the end of the semester, the measures of students' perceived knowledge and of their actual knowledge are mutually independent.

H2: At the end of the semester, a positive or negative relationship between the measures of students' perceived knowledge and of their actual knowledge exists.

Seventeen of the 23 students who completed the posttest assessments for perceived knowledge and actual knowledge during the fall and 12 of the 17 students who completed the posttest assessments for perceived knowledge and actual knowledge in the spring produced test results showing no significant relationship between the two measures. Only 6 students in the fall and 5 in the spring revealed a significant relationship between what they believed they knew and what they actually did know, 5 at the .01 level of significance, 4 at the .05 level of significance, and 2 at the .10 level of significance (see Table 2).

At the end of both semesters, most students were not accurate in their assessment of acquired knowledge. Although a slight improvement occurred, by the end of the semester most students were still unable to determine the difference between perceived knowledge and actual knowledge. Just over 72% of those assessed after they had completed the course showed no significant relationship between perceived knowledge and actual knowledge of the subject. Therefore, H0 cannot be rejected and Hypothesis 2 is not supported.

Hypothesis 3 compared perceived knowledge at the beginning of the semester to perceived knowledge at the end of the semester. Because data from the knowledge survey were ordinal, with students responding to one of three choices, sign tests were used to test the differences between the pretest assessment and the posttest assessment. We tested the following hypotheses:

H0: At the end of the semester, students' perceived knowledge is not greater than at the beginning.

H3: At the end of the semester, students' perceived knowledge is significantly greater than at the beginning.

We compared assessment results for 40 (23 fall and 17 spring) students. In all cases, students' perceived knowledge at the end of the semester was significantly greater, at the .01 level of significance, than their perceived knowledge at the beginning of the semester (see Figure 1). Analyses failed to support the null hypothesis (H0). Therefore, Hypothesis 3 was supported.
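A sign test of this kind reduces to a binomial test on the signs of the paired differences. The sketch below uses hypothetical ratings for one student (the paper does not publish raw data) and SciPy's binomial test as the sign test, dropping ties as is standard.

```python
# Minimal sign-test sketch for one student's paired ordinal ratings.
# Hypothetical data on the 3-point confidence scale (1 = low, 3 = high).
from scipy.stats import binomtest

pre  = [1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1]   # confidence before the course
post = [3, 3, 3, 2, 3, 3, 2, 3, 3, 3, 2, 3]   # confidence after the course

# Keep only the nonzero differences (ties are dropped in a sign test)
diffs = [b - a for a, b in zip(pre, post) if b != a]
n_pos = sum(d > 0 for d in diffs)

# One-sided test: did confidence increase on more questions than chance?
result = binomtest(n_pos, n=len(diffs), p=0.5, alternative="greater")
print(f"{n_pos}/{len(diffs)} increases, p = {result.pvalue:.4f}")
```

With every difference positive, as in this toy data, the one-sided p-value falls well below .01, matching the pattern the authors report for all 40 students.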

Hypothesis 4 theorizes that students' actual knowledge at the end of the semester is significantly greater than at the beginning. Sign tests were used for this analysis. The following hypotheses were tested:

H0: The difference between actual knowledge at the end of the semester and actual knowledge at the beginning is not significant.

H4: At the end of the semester, students' actual knowledge is significantly greater than at the beginning.

Because this pretest assessment was administered on the second night of class and all members of the class were present, a total of 29 students in the fall and 25 in the spring took this pretest assessment. Of the 54 students assessed, 44 demonstrated that their actual knowledge improved significantly over the course of the semester (see Table 3).

More than three fourths of those assessed (81.48%) gained a significant amount of knowledge of the subject over the course of the semester (see Figure 2). On the basis of these test results, we rejected the null hypothesis (H0). Hypothesis 4 was supported.

TABLE 1. Relationship of Perceived and Actual Knowledge at the Beginning of the Semester

Semester       Spearman's rho    p
Spring 2005    .190              .021
Fall 2004      .341              .034
Spring 2005    .196              .037
Spring 2005    .234              .081
Spring 2005    .258              .091
Fall 2004      .268              .099

TABLE 2. Relationship of Perceived and Actual Knowledge at the End of the Semester

Fall 2004 Students          Spring 2005 Students
Spearman's rho    p         Spearman's rho    p
.478              .002      .404              .010
.465              .003      .378              .016
.456              .004      .342              .031
.456              .004      .329              .038
.378              .018      .283              .077
.305              .059

FIGURE 1. Perceived knowledge at the beginning and end of the fall semester. KSA = posttest for knowledge survey; KSB = pretest for knowledge survey; Q = question number. [Chart of the confidence index (0 to 3.5) for KSB 1 and KSA 1 across questions Q1 through Q44.]


Hypothesis 5 examined the difference between the average scores of pretests and those of posttests regarding perceived knowledge. This comparison was made using the Wilcoxon signed ranks test (Conover, 1971). The following hypotheses were tested:

H0: On the average, perceived knowledge does not appear to be greater at the end of the semester than perceived knowledge at the beginning of the semester.

H5: On the average, perceived knowledge appears to be significantly greater at the end of the semester than perceived knowledge at the beginning of the semester.

For 33 of the 40 students who completed the pretest and posttest knowledge surveys, average scores on perceived knowledge after the course was completed were higher than those before the course began. Average assessment scores of 6 students in the fall class were the same in the pretest and posttest results. Only 1 student (fall semester) had a lower score at the end of the course (see Figure 3). The Wilcoxon signed ranks test (Conover, 1971) indicated that the difference in pretest and posttest average assessment scores regarding perceived knowledge was significant at the .01 level of significance in the fall and at the .00 level of significance in the spring.

More than 80% of the students demonstrated a significantly greater degree of perceived knowledge of class material at the end of the semester. This does not support the null hypothesis (H0). Hypothesis 5 was supported.
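The Wilcoxon signed ranks comparison of per-student averages can be sketched as below. The scores are hypothetical (invented averages on the study's 1-to-3 confidence scale), and SciPy's implementation is assumed in place of Conover's hand calculation.

```python
# Sketch of the Wilcoxon signed-rank test on per-student average
# perceived-knowledge scores. Hypothetical data for 10 students.
from scipy.stats import wilcoxon

avg_pre  = [1.0, 1.2, 1.1, 1.4, 1.3, 1.6, 1.5, 1.8, 1.7, 2.0]  # before course
avg_post = [2.5, 2.6, 2.4, 2.6, 2.4, 2.6, 2.4, 2.6, 2.4, 2.6]  # after course

# alternative="less": tests whether pre - post is shifted below zero,
# i.e., whether average perceived knowledge increased.
stat, p = wilcoxon(avg_pre, avg_post, alternative="less")
print(f"W = {stat}, one-sided p = {p:.4f}")
```

Because every hypothetical student's average rose, the one-sided p-value is very small, mirroring the significant fall and spring results reported above.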

Hypothesis 6 questioned the difference in the average actual knowledge gained over the course of the semester. For this assessment, questions were weighted on the basis of their difficulty, and results were at the ratio level. A paired t test was used. The hypotheses tested were the following:

H0: On average, actual knowledge does not appear to be greater at the end of the semester than actual knowledge at the beginning of the semester.

H6: On average, actual knowledge appears to be significantly greater at the end of the semester than actual knowledge at the beginning of the semester.

We tested students on course concepts at the beginning and the end of the semester. We compared the average test scores and found that the difference in the pretest and the posttest was significant at the .01 level of significance in the fall and at the .00 level of significance in the spring. On average, students demonstrated a significant gain in actual knowledge over the course of the semester (see Figure 4).

On the basis of the significant t test results, we concluded that students did perform significantly better at the end of the semester. Therefore, the null hypothesis (H0) was rejected. Hypothesis 6 was supported.
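Since the difficulty-weighted scores are ratio-level data, a paired t test applies. A minimal sketch, again with invented scores on a 0-to-100 scale and SciPy assumed:

```python
# Sketch of the one-sided paired t test on weighted pretest/posttest
# scores. Hypothetical data for 8 students.
from scipy.stats import ttest_rel

pre_scores  = [35.0, 42.5, 28.0, 55.0, 40.0, 33.5, 47.0, 38.0]
post_scores = [78.0, 85.5, 66.0, 90.0, 81.0, 72.5, 88.0, 79.0]

# alternative="greater": tests whether post scores exceed pre scores on average
t, p = ttest_rel(post_scores, pre_scores, alternative="greater")
print(f"t = {t:.2f}, one-sided p = {p:.5f}")
```

A large positive t statistic with a tiny p-value corresponds to the significant gain in actual knowledge reported for both semesters.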

DISCUSSION

Colleges and universities wishing to attain and maintain accreditation, demonstrate compliance with state and federal guidelines, and direct curriculum rely on the assessment of students. Assessment is one means of exhibiting that learning is taking place in the classroom. The assessments can be conducted in various ways; two common ways are through (a) the use of pretests and posttests in which students demonstrate mastery of topics or skills and (b) the use of knowledge surveys. In the present study, we used both assessment techniques to determine whether students were learning.

Assessment is a necessary tool with which schools can exhibit compliance with accreditation, state, and federal guidelines. It is not easy to implement, and it is time consuming. Once an assessment test has been created, it must be evaluated and fine-tuned each semester; however, the benefits more than offset the time and effort that assessment requires.

TABLE 3. Comparison of Actual Knowledge at the End and Beginning of the Semester

p                 Number of students whose actual       % of total number of
                  knowledge significantly increased     students evaluated
.01               27                                    50.00
.05               9                                     16.67
.10               8                                     14.81
Not significant   10                                    18.52

FIGURE 2. Actual knowledge at the beginning and end of the semester. Pre = pretest for actual knowledge; Post = posttest for actual knowledge; Q = question number. [Chart of the proportion correct (0 to 1.5 scale) for Pre and Post across questions Q1 through Q44.]

FIGURE 3. Average perceived knowledge at the beginning and end of the semester. Avg KA = average score for posttest, perceived knowledge; Avg KB = average score for pretest, perceived knowledge; Q = question number. [Chart of the average confidence index (0 to 3.5) for Avg KB and Avg KA.]

FIGURE 4. Average actual knowledge at the beginning and end of the semester. [Paired histograms of the number of students in each test-score bin (≤ 0 through > 100) for pretest and posttest actual knowledge.]

Posttest assessment can be used to revise course content so that areas in which students are weak can be emphasized. Similarly, pretest results can identify areas in which students have prior knowledge, and teachers can dedicate less class time to those topics. In short, both the teacher and the students can benefit from assessment. Faculty should embrace assessment as a means to enhance their course and not view assessment as another hurdle in the road to compliance.

To successfully use these techniques for this study, we had to establish learning objectives for Management 6330, the course that we used for this research project. Questions or problems had to be created to focus on course topics and to enable students to demonstrate that these goals had been met. These activities were time consuming.

Through pretests and posttests, we assessed both perceived knowledge and actual knowledge of course material. These data were compared at the beginning and the end of the semester and were compared against each other. The levels of perceived knowledge and actual knowledge climbed significantly, both when testing data student by student and when examining the average amount learned. Students were not able to accurately perceive their knowledge level.

Is it unusual that the students were not able to accurately perceive their knowledge level? This is a difficult, if not impossible, question to answer. However, Rogers (2006) noted, "as evidence of student learning, indirect methods are not as strong as direct measures because assumptions must be made about what exactly the self-report means." The results of our study indicate that self-reporting does not mean much. Rogers goes on to state that "it is important to remember that all assessment methods have their limitations and contain some bias." The inability of the students to identify their knowledge level implies that to accurately measure learning, direct measures should be employed.


NOTES

Barbara A. Price, PhD, is a professor of quantitative analysis in the College of Business Administration at Georgia Southern University. She has more than 50 publications in various professional journals and proceedings including the Decision Sciences Journal of Innovative Education, Journal of Education for Business, Inroads: The SIGCSE Bulletin, and Journal of Information Technology Education.

Cindy H. Randall is an assistant professor of quantitative analysis in the College of Business Administration at Georgia Southern University. She has published in numerous proceedings as well as in the International Journal of Research in Marketing, Journal of Marketing Theory and Practice, Marketing Management Journal, Journal of Transportation Management, and Inroads: The SIGCSE Bulletin.

Correspondence concerning this article should be addressed to Cindy H. Randall, Department of Finance and Quantitative Analysis, Georgia Southern University, Box 8151, COBA, Statesboro, GA 30460, USA. E-mail: crandall@georgiasouthern.edu

REFERENCES

Abunawass, A., Lloyd, W., & Rudolf, E. (2004). COMPASS: A CS program assessment project. Proceedings, ITICSE, 36(3), 127–131.

Betters-Reed, B. L., Chacko, J. M., & Marlina, D. (2003). Assurance of learning: Small school strategies. Continuous improvement symposium, AACSB conferences and seminars. Retrieved November 3, 2006, from http://www.aacsb.edu/handouts/CIS03/cis03-prgm.asp

Blaha, K. D., & Murphy, L. C. (2001). Targeting assessment: How to hit the bull's eye. Journal of Computing in Small Colleges, 17(2), 106–115.

Commission on Colleges. (2006). Principles of accreditation: Foundation for quality enhancement by the Southern Association of Colleges and Schools (2002–2006 edition). Retrieved November 3, 2006, from http://www.sacscoc.org/pdf/PrinciplesOfAccreditation.PDF

Conover, W. J. (1971). Practical nonparametric statistics. New York: Wiley.

Earl, L., & Torrance, N. (2000). Embedding accountability and improvement into large-scale assessment: What difference does it make? Peabody Journal of Education, 75(4), 114–141.

Eastman, J. K., Aller, R. C., & Superville, C. L. (2001). Developing an MBA assessment program: Guidance from the literature and one program's experience. Retrieved November 10, 2006, from http://www.westga.edu/~bquest/2001/assess.html

Gerretson, H., & Golson, E. (2005). Synopsis of the use of course-embedded assessment in a medium sized public university's general education program. Journal of General Education, 54(2), 139–149.

Horan, S. (2004). Using knowledge surveys to direct the class. Retrieved November 3, 2006, from http://spacegrant.nmsu.edu/NMSU/2004/horan.pdf

Jones, L. G., & Price, A. L. (2002). Changes in computer science accreditation. Communications of the ACM, 45(8), 99–103.

Keenan-Takagi, K. (2000). Embedding assessment in choral teaching. Music Educators Journal, 86(4), 42–49.

Lidtke, D. K., & Yaverbaum, G. J. (2003). Developing accreditation for information systems education. IEEE, 5(1), 41–45.

Martell, K., & Calderon, T. (2005). Assessment of student learning in business schools: Best practice each step of the way. Vol. 1, No. 1. Tallahassee, FL: Association for Institutional Research.

Nuhfer, E., & Knipp, D. (2003). The knowledge survey: A tool for all reasons. To Improve the Academy, 21, 59–78.

Outcome Assessment. (2003). Office of the Provost at The University of Wisconsin–Madison. Retrieved November 10, 2006, from http://www.provost.wisc.edu/assessment/manual/manual12.html

Pare, M. A. (Ed.). (1998). Certification and accreditation programs directory: A descriptive guide to national voluntary certification and accreditation programs for professionals and institutions (2nd ed.). Farmington Hills, MA: Gale Group.

Robinson, D. W. (2004). The Georgia Southern knowledge survey FAQ. Retrieved July 1, 2004, from http://ogeechee.litphil.georgiasouthern.edu/nuncio/faq.php

Rogers, G. (2006). Assessment 101: Direct and indirect assessments: What are they good for? Retrieved May 8, 2008, from http://www.abet.org/Linked%20Documents-UPDATE/Newsletters/06-08-CM.pdf

Rubino, F. J. (2001). Survey highlights importance of accreditation for engineers. ASHRAE Insight, 16(7), 27–31.

Schwendau, M. (1995). College quality assessment: The double-edged sword. Tech Directions, 54(9), 30–32.

THEC Performance Funding. (2003). Pilot evaluation: Assessment of general education learning outcomes [Standard I.B. 2002-03]. Retrieved July 30, 2004, from http://www.state.tn.us/thec/2004web/division_pages/ppr_pages/pdfs/Policy/Gen%20Ed%20RSCC%20Pilot.pdf

Treagust, D. F., Jacobowitz, R., Gallagher, J. J., & Parker, J. (2003). Embed assessment in your teaching. Science Scope, 26(6), 36–39.

Valacich, J. (2001). Accreditation in the information academic discipline. Retrieved November 5, 2006, from http://www.aisnet.org/Curriculum/AIS_AcreditFinal.doc

Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13(2), 181–208.
