
Using Course Management Systems to Enhance the Value of Student Evaluations of Teaching

RICHARD L. OLIVER & ELISE POOKIE SAUTTER
New Mexico State University, Las Cruces, New Mexico

Journal of Education for Business, 80(4), 231–234. DOI: 10.3200/JOEB.80.4.231-234. ISSN 0883-2323 (print), 1940-3356 (online). Published online: 07 Aug 2010.

ABSTRACT. In this article, the authors propose a method of course management system (CMS) administration of student evaluations of teaching (SETs). The method provides a mechanism for offering a greater guarantee of anonymity to student respondents. The authors report on a case study in which this guarantee was likely a significant factor contributing to the increase in response rates for online submissions. In addition, the results suggest that the method provides significant benefits for improving both the summative and formative value of SETs.


Quality assessment tools are an important element in recognizing, rewarding, and encouraging continuous improvement and innovation in higher education. Many researchers have investigated the advantages and disadvantages of student evaluations of teaching (SETs) as a method of assessing teaching (Cashin, 1995; Centra, 1993; Feldman, 1989; Marsh, 1984; McKeachie, 1987). Much of the debate surrounding the validity of SETs focuses on potential sources of bias that are often beyond the control of the instructor. These sources of bias include such factors as (a) personal attributes of the student, such as student gender and expected course grade; (b) situational characteristics of the learning environment, such as the elective status of the course and size of the class; and (c) genetic traits of the instructor, such as gender and attractiveness. Mixed research results regarding these and other variables continue to stimulate debate about the ultimate validity of SETs as effective tools for teaching assessment.

Much of the debate regarding the value of SETs likely stems from the intended use of feedback from the instruments. A considerable amount of research explores the comparative worth of SETs for summative versus formative purposes (Cashin & Downey, 1992; Hobson & Talbot, 2001). Summative uses of SETs attempt to provide summary judgments about teaching performance that can be used to make decisions regarding promotion, tenure, and annual performance reviews. Alternatively, formative feedback emphasizes the collection of information useful for the development and improvement of teaching. Though researchers argue that SETs can provide a valuable source of formative feedback, many institutions design and implement SETs in a way that significantly weakens the formative value of the feedback. In this article, we explore the use of new learning technologies for improving the formative value of SETs and the procedural efficiency and integrity of the process for summative purposes.

In recent years, the advent of online learning technologies has increased interest in the administration of online teaching evaluations. Empirical researchers have explored the potential pros and cons of the online format for administration of student teaching evaluations. In general, the research findings indicate that students prefer online administration of SETs but that potential problems with guarantees of anonymity and response rates have limited faculty and/or institutional acceptance of these approaches (Dommeyer, Baum, & Hanna, 2002; Layne, DeCristoforo, & McGinty, 1999). Other researchers have reported on differences between student responses to Web-based and paper versions of the same questionnaire (Layne et al., 1999; Olsen, Wygant, & Brown, 1999; Tomsic, Hendel, & Matross, 2000). Course management systems, such as WebCT, Blackboard, and eCollege, offer real advances in improving the efficiency and overall value derived from the administration of student teaching evaluations online. In this article, we provide a case study detailing the use of the WebCT course management system for departmental administration of the student teaching evaluation process. We present empirical evidence to demonstrate the comparative value of this approach in enhancing the formative value of SET feedback and offer suggestions to further enhance the SET collection process.



Relative Value of Online Administration of Teaching Evaluations

A tremendous body of literature has been published on the advantages and disadvantages of using SETs for examining teaching effectiveness (Cashin, 1995; Centra, 1993; Feldman, 1989; Marsh, 1984). In general, these researchers concluded that student evaluations represent one important element of a more comprehensive and multimethod approach to the evaluation and improvement of teaching. Given the assumption that SETs remain a stable element of teaching assessment, researchers continue to examine how the design and administration of SETs can be adapted to enhance the tool's value to faculty members. Most recently, research regarding the comparative value of online versus in-class administration of SETs has revealed some interesting results.

In general, recent research findings suggest that online ratings offer significant advantages in terms of efficiency (i.e., less waste of resources such as paper, class time, processing time, and costs), and students express higher levels of satisfaction when the evaluation process is conducted online. In addition, preliminary investigations have indicated that more students are willing to give comments to open-ended questions on survey instruments when they are administered online (Layne et al., 1999). The possibility of increased qualitative response deserves particular attention because faculty members perceive greater value in students' written responses to open-ended questions than in categorized responses to closed-ended questions (Ory & Braskamp, 1981; Tiberius, Sackin, & Cappe, 1987).

Online administration of SETs is not without problems. In particular, the findings of most studies have indicated that response rates differ significantly according to the method of administration. Baum, Chapman, Dommeyer, and Hanna (2001) found that response rates ranged from 32.8% for online responses to 76.8% for in-class ones, and Layne et al. (1999) found that they ranged from 47.8% for online responses to 60.6% for in-class ones. In both cases, the researchers suggested that greater institutional endorsement of the online collection process and more convincing guarantees of student anonymity could be important remedies in correcting response-rate discrepancies.

Using Course Management Systems to Improve Online SET Administration

Adoption rates for course management systems (CMS) have increased dramatically on higher education campuses. According to the Campus Computing Project, more than one fifth of all college courses are now taught using course management systems (Green, 2001). WebCT, an early service provider in the field, has reported on its commercial Web site (2003) that "[t]housands of institutions in over 80 countries are licensed to use WebCT" (p. 1). Given the popularity of WebCT and other similar CMSs, instructors can realize significant synergies by learning how they can use such systems to improve online administration of SETs.

Instructors can easily use the survey and quiz tool included in most CMSs to construct and administer student surveys within their own courses. Indeed, instructors have reported that this is a useful mechanism for administering midsemester surveys for formative feedback (Austin & Austin, 2002). Unfortunately, survey administration within a course cannot absolutely guarantee the anonymity of student respondents. As Austin and Austin (2002) noted, "[t]he instructor can see who took the survey, but not what the person said unless there is only one person who has answered the survey." Even though the possibility of identification is small, the importance of anonymity guarantees for online administration makes it imperative to use the system in a way that guarantees student anonymity.

Departmental or college-wide administration of SETs can be constructed within the CMS so as to overcome many of the limitations previously noted about online SET administration. The approach involves creation of a CMS "course" that is used solely for the collection of SETs. This approach, which we describe in this article, improves the process by allowing for a more common and centralized format for administration. Though one can customize a survey to fit the needs of individual departments or instructors, the mechanics and logistics of survey administration can be standardized to increase the ease and convenience of administration for students. The standardization works to reinforce perceptions of institutional commitment to the SET process and can improve response rates owing to the reinforcement across multiple classes or departments.

A Case Study Using WebCT for Centralized SET Administration

We designed a case study to empirically investigate two research questions: (a) Are student response rates (i.e., the percentage of students completing the SET instruments) negatively affected by moving SET administration online through a course management system? and (b) Will students provide significantly more qualitative feedback (i.e., more comments in response to open-ended questions) when SETs are administered by means of a Web-based course management system? Our case study, conducted at a midsized (17,000 students on the main campus) southwestern state university, provided the basis for an empirical analysis of SET administration using the WebCT course management system. Staff members in three disciplines in the College of Business Administration and Economics (accounting, business computing systems, and marketing) agreed to administer the teaching evaluations using the WebCT system. As a matter of policy, the university requires collection of SETs at the conclusion of each fall and spring semester. College policy also requires that each SET survey contain at least 10 questions relevant to instructional effectiveness, including two mandatory questions regarding the overall quality of the course and the overall quality of the instructor. Specific selection of the remaining items is dependent on departmental policy or the needs of the individual faculty members. The course and instructor quality measures are used as summative measures of performance in annual performance reviews, whereas the remaining items are chosen for formative assessment and instructional development purposes. In our case study, the instructors in the accounting and business computing systems disciplines used a common survey instrument with 10 items. In the marketing department, individual faculty members selected as few as eight and as many as 20 additional items to include on individual class SETs. They selected these items from a list of over 75 questions or rating items commonly used to evaluate teaching and approved by the College Teaching Excellence Committee. The faculty members use the SET results for instructional improvement, and the administration uses the results as one of a set of assessment metrics for annual merit, promotion, and tenure decisions.

Departmental administrative assistants, hereafter referred to as site designers, created WebCT course sites for each discipline area (accounting, business computer systems, and marketing) and created separate survey instruments for each class offered in the discipline. The site designers named each WebCT course site in such a way as to ensure that students recognized the purpose and related content of each site (e.g., a course titled Department of Marketing Class Evaluations contained surveys for all marketing classes taught that term). All students taking one or more classes in a given discipline were granted access to the relevant site; the site administrator used the "Selective Release" function in the design of each class SET survey to ensure that only the students enrolled in a particular class had access to the SET instrument for that class.
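To make the access-control arrangement concrete, the following is a minimal Python sketch of how a departmental evaluation site maps enrollments to survey visibility. The names and data layout are hypothetical illustrations of the arrangement described above, not WebCT's actual API; in WebCT itself this is configured through the GUI-based "Selective Release" settings.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationSite:
    """A hypothetical departmental 'course' used solely to collect SETs."""
    discipline: str
    # Maps a class section ID to the set of enrolled student IDs.
    rosters: dict[str, set[str]] = field(default_factory=dict)
    # Sections whose surveys have been opened by the site designer.
    released: set[str] = field(default_factory=set)

    def enroll(self, section: str, student_ids: set[str]) -> None:
        self.rosters[section] = set(student_ids)

    def release(self, section: str) -> None:
        """Open one section's survey (done in the final week of classes)."""
        self.released.add(section)

    def visible_surveys(self, student_id: str) -> list[str]:
        """Selective release: a student sees only surveys for sections
        they are enrolled in, and only after the survey is released."""
        return [s for s, roster in self.rosters.items()
                if student_id in roster and s in self.released]

# Usage: one site per discipline, one survey per class section.
site = EvaluationSite("marketing")
site.enroll("MKTG303-01", {"s001", "s002"})
site.enroll("MKTG455-01", {"s002", "s003"})
site.release("MKTG303-01")
assert site.visible_surveys("s002") == ["MKTG303-01"]
```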

Roughly 1 month before the end of the term, instructors distributed instructions for the new evaluation procedure in both hard copy and e-mail formats. Students were asked to verify that they had access to the appropriate discipline evaluation Web sites and that they saw surveys listed for each class in which they were enrolled. The site designer did not release the surveys for completion by the students until the final week of classes. The site designer assured the students that the instructor had no access to the evaluation Web sites, that results of the survey would be provided to the instructors only after final grades were turned in, and that all responses were anonymously received and reported in aggregate to the instructors. Students were free to complete the SET instruments on their own time, taking as much time as they needed and completing them at any location that provided access to the Internet.

The survey function in WebCT guarantees that no names are tied to survey reporting; at most, the designer (i.e., the departmental assistant) can ascertain whether a student has completed an evaluation but cannot tie results to specific persons or students. In this case study, the site designer managed the opening and closing of the release period for the SETs and distributed summaries of the survey results to instructors after grades were recorded with the registrar. The WebCT survey tool automatically calculated descriptive statistics for all closed-ended questions; the "View" option in the "Detail" for the survey provided a compressed listing of the students' written comments. The feedback was then made immediately available to the professors for consideration in the design of the courses for the following semester.
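The anonymity guarantee rests on a simple separation: completion flags are stored apart from response records, so aggregate reports can be produced without any link back to identities. The Python sketch below illustrates that separation and the kind of summary described above; the data layout is a hypothetical stand-in for whatever the CMS stores internally.

```python
from statistics import mean, stdev

# Completion tracking and responses live in separate stores, with no key
# connecting a response record back to a student ID.
completed_by = {"s001", "s002", "s003"}          # who finished (visible to designer)
responses = [                                     # anonymous records (no IDs)
    {"q1": 5, "q2": 4, "comment": "Liked the weekly case discussions."},
    {"q1": 4, "q2": 4, "comment": ""},
    {"q1": 5, "q2": 3, "comment": "More time on the group project, please."},
]

def summarize(records: list[dict]) -> dict:
    """Descriptive statistics per closed-ended item, plus compiled comments."""
    report: dict = {}
    for item in ("q1", "q2"):
        scores = [r[item] for r in records]
        report[item] = {"mean": round(mean(scores), 2),
                        "sd": round(stdev(scores), 2),
                        "n": len(scores)}
    report["comments"] = [r["comment"] for r in records if r["comment"]]
    return report

print(summarize(responses))  # released to the instructor only after grades post
```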

Empirical Results From Case Study

To more systematically address the research questions, we compared our results with SET response patterns from the immediately preceding semester; SETs during that term were administered through the traditional in-class method. We used departmental records from that term to conduct a comparative analysis of the effects of administration mode on response rates. We performed a test of proportions to examine whether there was any significant difference between the response rates of 49 in-class SET administrations and 64 online SET administrations. (The difference in the number of class sections was due to greater use of large sections in the fall term.) There were no significant differences in the response rates based on disciplines (F statistic = 1.270, p = .285). Although the response rate for the in-class administration of SETs was slightly higher (75.9%) than that for online administration of SETs (70.1%), the difference was not statistically significant (F test statistic = 2.659, p = .106).
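As an illustration of this kind of comparison, the sketch below runs a standard two-proportion z-test in Python with scipy. The student headcounts are hypothetical placeholders (the article reports only the rates and an F statistic from its own analysis), so the output will not reproduce the published result; it only shows the mechanics of testing whether two response rates differ.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                     # pooled response rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))                      # two-sided p-value

# Hypothetical headcounts chosen to match the reported rates (75.9% vs. 70.1%).
z, p = two_proportion_ztest(x1=759, n1=1000, x2=701, n2=1000)
print(f"z = {z:.3f}, p = {p:.3f}")
```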

We addressed our second research question by comparing student response patterns to open-ended questions in the in-class and online SET administration formats. Faculty members in the marketing department agreed to allow the departmental assistant to record quantitative information about student responses to open-ended questions in the two formats (i.e., over two consecutive terms). The open-ended questions used during each term were the same: One solicited students' comments on what they particularly liked, and the other solicited their comments on things that they particularly disliked about a course or instructor. The departmental assistant used departmental copies of the SETs from the previous term and the responses from the online administration to record the necessary data. For each semester's classes, the assistant reported the percentage of students in each class who provided feedback to either of the open-ended questions and the average number of words given, per class, in response to an open-ended question. On average, 72% of the students provided comments in response to open-ended questions when the SET was administered in class, whereas 87% responded when the SET was administered online; this difference was marginally significant (F test statistic = 3.930, p = .058). Interestingly, the most significant difference was in the amount of feedback given in response to the open-ended questions. The in-class administration yielded results in which students gave an average of 7.83 words of feedback per question, whereas students in the online administration gave nearly four times as much feedback, with an average of 28.97 words provided per question (F test statistic = 13.944, p = .001).
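A comparison like this is straightforward to reproduce once the comments are in machine-readable form. The sketch below, again with hypothetical data rather than the study's records, computes average words per response for each administration mode and tests the difference with Welch's t-test (one standard choice; the article reports F statistics from its own analysis).

```python
from scipy.stats import ttest_ind

def words_per_response(comments: list[str]) -> list[int]:
    """Word count of each nonempty open-ended response."""
    return [len(c.split()) for c in comments if c.strip()]

# Hypothetical comment sets for the two administration modes.
in_class = ["Good lectures", "Liked the cases", "More examples please",
            "Too fast", "Great instructor overall"]
online = ["The weekly case discussions were the most useful part of the course",
          "I would have liked more worked examples before each exam",
          "Group project deadlines felt rushed near the end of the term"]

wc_in, wc_on = words_per_response(in_class), words_per_response(online)
t, p = ttest_ind(wc_on, wc_in, equal_var=False)  # Welch's t-test
print(f"in-class mean = {sum(wc_in)/len(wc_in):.2f} words, "
      f"online mean = {sum(wc_on)/len(wc_on):.2f} words, t = {t:.2f}, p = {p:.3f}")
```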

Discussion

Our results suggest that the proposed method of CMS administration provides significant benefits for improving the summative value of SETs. First, the system provides a mechanism for offering a greater guarantee of anonymity to the student respondents. In our case study, this guarantee was likely a significant factor contributing to the increase in response rates for online submissions. The standardization of the format provided through a centralized collection process also enhances institutional commitment to the process, which further alleviates previous problems in SET collection procedures.

This method of administration also provides significant benefits that increase the formative value of SETs. The procedure described in this article allows the designer to customize course surveys with relative ease according to the individual needs of the instructor. This flexibility is important for improving formative feedback. Specifically, Centra (1993) argued that the formative value of SETs is predicated on the extent to which the evaluation process provides new information to the instructor, which he or she perceives as feasible and consequential in making meaningful improvements to the learning environment. Similarly, Fuhrman and Grasha (1983) suggested that evaluations must be clearly focused and deal with the individual instructor's personal goals for the course. This requirement clearly necessitates that some flexibility be built into the design process to accommodate diversity in instructional goals and learning environments.

The increased willingness of students to provide voluntary comments in an online administration is likely attributable to the reduced time constraints associated with completion of the SETs. Svinicki (2001) noted that students are more likely to give constructive qualitative feedback when they are given adequate time to reflect on the questions and the course environment, a condition that is not characteristic of most in-class SET administrations. The timing factor is also relevant with regard to the expediency of returning feedback to the instructor. In this case study, the designer summarized the feedback and returned it to the instructors immediately after the semester was completed, a condition that allows instructors to incorporate feedback into the design of courses in the subsequent semester. Although this case study focused on end-of-semester evaluations, the process can clearly be improved through the inclusion of midsemester evaluations as well.

Last but not least, the ease of downloading the data into statistical analysis frameworks provides the opportunity for a more informed analysis of student responses. As mentioned earlier, many sources of bias in SET administration are known to exist. Closer scrutiny of the bias effects is possible when administrators or faculty members have the ability to easily transfer SET data into other data formats using the "Download" function provided in most CMS survey tools. Similarly, suggestions for improving the formative value of open-ended feedback can be more easily implemented when the data are readily accessible for transfer into content analysis software programs or other qualitative data analysis tools (Lewis, 2001).
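For instance, once the survey tool's download produces a flat file, a few lines of pandas suffice to screen for the bias variables mentioned earlier. The sketch below assumes a hypothetical CSV layout; the file name and column names are illustrative, not a documented WebCT export format.

```python
import pandas as pd

# Hypothetical export: one row per anonymous response, with columns
# section, class_size, elective (Y/N), q_course, q_instructor, comment.
df = pd.read_csv("set_export.csv")

# Descriptive statistics for the two mandatory quality items, by section.
print(df.groupby("section")[["q_course", "q_instructor"]]
        .agg(["mean", "std", "count"]))

# A quick screen for one situational bias variable: elective vs. required status.
print(df.groupby("elective")["q_course"].mean())

# Hand the open-ended comments off to content-analysis tooling.
df.loc[df["comment"].notna(), "comment"].to_csv("comments.txt", index=False)
```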

Conclusions

In this article, we examined a method of using a course management system to administer student evaluations of teaching. The method provides a greater guarantee of anonymity to the student respondents. We reported on a case study in which this guarantee was likely a significant factor contributing to the increase in response rates for online submissions. In addition, the results of our case study suggest that the method provides significant benefits for improving both the summative and formative value of SETs.

REFERENCES

Austin, D., & Austin, J. (2002, April). Using Blackboard to survey students at midterm. Paper presented at the Seventh Annual Mid-South Instructional Technology Conference. Retrieved November 24, 2003, from http://www.mtsu.edu/~itconf/proceed02/91.html

Baum, P., Chapman, K., Dommeyer, C., & Hanna, R. (2001, June 14–17). On-line versus in-class student evaluations of faculty. Paper presented at the Hawaii Conference on Business, Honolulu, HI.

Cashin, W. E. (1995). Student ratings of teaching: The research revisited. Manhattan, KS: Center for Faculty Evaluation and Development, Kansas State University.

Cashin, W. E., & Downey, R. G. (1992). Using global student rating items for summative evaluation. Journal of Educational Psychology, 84, 563–572.

Centra, J. A. (1993). Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. San Francisco: Jossey-Bass.

Dommeyer, C. J., Baum, P., & Hanna, R. W. (2002). College students' attitudes toward methods of collecting teaching evaluations: In-class versus on-line. Journal of Education for Business, 78, 11–15.

Feldman, K. A. (1989). Instructional effectiveness of college teachers as judged by teachers themselves, current and former students, colleagues, administrators, and external (neutral) observers. Research in Higher Education, 30, 137–174.

Fuhrman, B. S., & Grasha, A. F. (1983). A practical handbook for college teachers. New York: Little, Brown.

Green, K. (2001). The 2001 national survey of information technology in U.S. higher education. Retrieved April 3, 2004, from http://www.campuscomputing.net

Hobson, S. M., & Talbot, D. M. (2001). Understanding student evaluations. College Teaching, 49, 26–31.

Layne, B. H., DeCristoforo, J. R., & McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education, 40(2), 221–232.

Lewis, K. G. (2001). Making sense of student written comments. New Directions for Teaching and Learning, 87, 25–32.

Marsh, H. W. (1984). Students' evaluations of university teaching: Dimensionality, reliability, validity, potential biases, and utility. Journal of Educational Psychology, 76, 707–754.

McKeachie, W. J. (1987). Instructional evaluation: Current issues and possible improvements. Journal of Higher Education, 58, 344–350.

Olsen, D. R., Wygant, S. A., & Brown, B. L. (1999, October). Entering the next millennium with Web-based assessment: Considerations of efficiency and reliability. Paper presented at the Conference of the Rocky Mountain Association of Institutional Research, Las Vegas, NV.

Ory, J. C., & Braskamp, L. A. (1981). Faculty perceptions of the quality and usefulness of three types of evaluative information. Research in Higher Education, 15, 271–282.

Svinicki, M. V. (2001). Encouraging your students to give feedback. New Directions for Teaching and Learning, 87, 17–24.

Tiberius, R. G., Sackin, H. D., & Cappe, L. (1987). A comparison of two methods for evaluating teaching. Studies in Higher Education, 12, 287–297.

Tomsic, M. L., Hendel, D. D., & Matross, R. P. (2000). A World Wide Web response to student satisfaction surveys: Comparisons using paper and Internet formats. Paper presented at the Annual Meeting of the Association for Institutional Research, Cincinnati, OH.

WebCT, Inc. (2003). Learning without limits: Flexible e-learning solutions for institutions across the educational spectrum. Retrieved April 1, 2004, from http://www.webct.com/service/ViewContent?contentID=17980017
