Objective Measures of Critical Thinking

Academic Profile (1998)

Higher Education Assessment, Educational Testing Service (ETS), Princeton, NJ 08541

Target: Students at the end of their second year in college, though probably usable at other levels.

Format: A multiple-choice test assessing college-level “reading, writing, critical thinking, and mathematics within the contexts of the humanities, social sciences, and natural sciences.” Short form: 36 items in 40 mins; long form: 144 items in 2 hrs 30 mins.

Assessment of Reasoning and Communication (1986)

College Outcome Measures Program, ACT, PO Box 168, Iowa City, IA 52243

Target: Students finishing college, but probably usable with other levels as well.

Format: Open-ended, requiring student to produce three short essays and three short speeches. Yields total subtest score plus part scores in social reasoning, scientific reasoning, and artistic reasoning.


The California Critical Thinking Skills Test: College Level (1990), by Peter Facione

The California Academic Press, 217 LaCruz Ave, Millbrae, CA 94030

Target: Aimed at college students, but probably usable with advanced and gifted high school students.

Format: Multiple-choice, incorporating interpretation, argument analysis and appraisal, deduction, mind bender puzzles, and induction (including rudimentary statistical inference).

Web site: http://www.insightassessment.com/test-cctst.html

The California Critical Thinking Dispositions Inventory (1992), by P. A. Facione and N. C. Facione

The California Academic Press, 217 LaCruz Ave., Millbrae, CA 94030

Target: College age, adults, professionals

Format: A multiple-choice attempt to assess critical thinking dispositions. Probably useful for self-appraisal and for gathering anonymous information for research.

Web site: http://www.insightassessment.com/test-cctdi.html

Cornell Critical Thinking Test, Forms X & Z (1985), by Robert H. Ennis and Jason Millman

Critical Thinking Press and Software, PO Box 448, Pacific Grove, CA 93950

Target: Form X: Grades 4–14; Form Z: College students and adults, but usable with advanced or gifted high school students.

Format: Form X: multiple-choice, sections on induction, credibility, observation, deduction, and assumption identification. Form Z: multiple-choice, sections on induction, credibility, prediction and experimental planning, fallacies (especially equivocation), deduction, definition, and assumption identification.

Web site: http://www.criticalthinking.com/getProductDetails.do?code=c&id=05512

Cambridge Thinking Skills Assessment (1996)

University of Cambridge Local Examinations Syndicate, Syndicate Building, 1 Hills Road, Cambridge CB1 2EU, UK

Target: Postsecondary students

Format: Two parts: a 30-min, 15-item multiple-choice test of argument assessment; and a 1-hr essay test calling for critical evaluation of an argument and for further argumentation.

Web site: http://tsa.ucles.org.uk/index.html

Critical Thinking Interview (1998), by Gail Hughes and Associates

141 Warwick St. S.E., Minneapolis, MN 55414 (e-mail: [email protected])

Target: College students and adults

Format: About 30 mins for a one-to-one interview combining displayed knowledge and reasoning on a topic of the interviewee’s choice. Emphasis is on clarity, context, focus, credibility, sources, familiarity with the topic, assumption identification, and appropriate use of such reasoning strategies as generalization, reasoning to the best explanation, deduction, values reasoning, and reasoning by analogy.

Critical Thinking Test (1989)

ACT CAAP Operations (85), PO Box 1688, Iowa City, IA 52243

Target: Students at the end of their second year in college, though probably usable at other levels.

Format: Multiple-choice items based on text readings: identifying conclusions, inconsistency, and loose implications; judging direction of support, strength of reasons, and representativeness of data; making predictions; noticing other alternatives; and hypothesizing about what a person thinks.

Ennis–Weir Critical Thinking Essay Test (1985), by Robert H. Ennis and Eric Weir

Critical Thinking Press and Software, PO Box 448, Pacific Grove CA 93950

Target: General use

Format: Incorporates getting the point, seeing the reasons and assumptions, stating one’s point, offering good reasons, seeing other possibilities (including other possible explanations), and responding to and avoiding equivocation, irrelevance, circularity, reversal of an if–then (or other conditional) relationship, overgeneralization, credibility problems, and the use of emotive language to persuade.

Web site: http://faculty.ed.uiuc.edu/rhennis/tewctet/Ennis-Weir_Merged.pdf

ICAT Critical Thinking Essay Test (1996)

The International Center for the Assessment of Thinking, PO Box 220, Dillon Beach, CA 94929

Target: General use

Format: Provides eight criteria (to be shown to students in advance and also to be used for grading by trained graders). Students respond to an editorial (selected by the test administrator) by writing an essay summarizing it, identifying its focus, and commenting on its strengths and weaknesses.

Web site: http://www.criticalthinking.org/about/internationalCenter.shtml

Measure of Academic Proficiency and Progress (MAPP)

Educational Testing Service

Target: College students; especially helpful for general education assessment

Format: Allows institutions to measure proficiency in reading, writing, critical thinking, and mathematics without separate tests and multiple administrations. Reading and critical thinking are measured in the context of the humanities, social sciences, and natural sciences.

Web site: http://www.ets.org/portal/site/ets/menuitem.1488512ecfd5b8849a77b13bc3921509/?vgnextoid=ff3aaf5e44df4010VgnVCM10000022f95190RCRD&vgnextchannel=f98546f1674f4010VgnVCM10000022f95190RCRD


Reflective Judgment Approach

University of Minnesota

Target: General use

Format: Analysis of faulty logic

The Test of Everyday Reasoning (1998), by Peter Facione

California Academic Press, 217 La Cruz Ave., Millbrae, CA 94030

Target: General use

Format: Derived from The California Critical Thinking Skills Test (listed above), with choices of justifications added. Multiple-choice.

Watson–Glaser Critical Thinking Appraisal (1980), by Goodwin Watson and E. M. Glaser

The Psychological Corporation, 555 Academic Court, San Antonio, TX 78204

Target: General use

Format: Multiple-choice, sections on induction, assumption identification, deduction, judging whether a conclusion follows beyond a reasonable doubt, and argument evaluation.

Web site: http://harcourtassessment.com/haiweb/cultures/en-us/productdetail.htm?pid=015-8191-013

Adapted from An Annotated List of Critical Thinking Tests, prepared by Robert H. Ennis, University of Illinois: http://www.criticalthinking.net/CTTestList1199.html


Chapter 7

Programmatic Assessment of Critical Thinking

Kevin J. Apple, Sherry L. Serdikoff, Monica J. Reis-Bergan, and Kenneth E. Barron

Assessing critical thinking is a difficult task because the construct is not easy to define.

In our programmatic assessment of critical thinking, we strive to assess different components of this construct. Our approach is similar to the Indian parable of the Blind Men and the Elephant (Saxe, 1878). According to this parable, a group of blind men examined an elephant. Each man touched a different part of the elephant’s body and thus had a different impression of the elephant. For example, the man who touched the long, sharp tusk was convinced the elephant was like a spear. The individual who touched the side of the elephant was convinced that the elephant was like a wall. The individuals who touched the trunk, leg, ear, or tail insisted that the elephant was similar to a snake, tree, fan, or rope, respectively. According to the parable, these blind men argued about the true nature of the elephant. Each man insisted that he was right, without realizing that the other descriptions of the animal were accurate for different sections of the elephant. Had the men cooperated with each other and pieced together an image of the elephant based on one another’s experiences, they would have created a more accurate image of the elephant.

One lesson from this parable is that multiple measures of a construct are better than a single measure (Campbell & Fiske, 1959). Although critical thinking is difficult to define, we strive to measure it accurately by assessing different components of it. Specifically, we attempt to get an accurate measure by assessing students’ abilities at different times with different measures. In this chapter, we will examine how we assess critical thinking at various points during our students’ education.

The Assessment Culture at James Madison University

James Madison University (JMU) has a unique assessment culture (Stoloff, Apple, Barron, Reis-Bergan, & Sundre, 2004). As part of University policy, all programs (including nonacademic) assess their effectiveness on a yearly basis. In addition to collecting assessment data each year, faculty members use the assessment data to inform departmental decisions.

To facilitate systematic assessment, we assess our students at three stages during their academic careers: beginning, middle, and end. The first assessment occurs before students begin classes as freshmen. Students complete their midcareer assessment during their sophomore/junior year on Assessment Day, a day in mid-February each year when classes are canceled so students can complete their assessments. These first two assessment batteries focus on students’ mastery of general education learning objectives. Finally, students complete departmental assessments during their senior year on Assessment Day. The senior-year assessment focuses on students’ mastery of the learning objectives for their individual majors.

Assessing Critical Thinking in General Education

Like most U.S. universities, JMU has a core curriculum that all undergraduate students complete regardless of majors, minors, or preprofessional programs. Faculty have organized the general education curriculum into clusters of courses spanning five educational themes fundamental to becoming a well-educated student:

Cluster One: Skills for the 21st Century (3 courses)

Cluster Two: Arts and Humanities (3 courses)

Cluster Three: Natural World (3–4 courses)

Cluster Four: Social and Cultural Processes (2 courses)

Cluster Five: Individuals in the Human Community (2 courses)

Critical thinking is assessed in both Clusters One and Three. As part of the assessment culture at JMU, we are able to benefit from the data our general education colleagues collect. For Cluster One (Skills for the 21st Century), all students must take one of five courses designed with the explicit purpose of addressing critical thinking. The assessment plan for this set of courses has been evolving. Over the years, faculty members have used various standardized tests, such as the Cornell Critical Thinking Test – Level Z (The Critical Thinking Company, n.d.), with moderate satisfaction. Since 2005, faculty have been using the Comprehensive Test of Critical Thinking (CTCT; James Madison University, Center for Assessment and Research Studies, 2006), developed by Philosophy faculty at JMU who specialize in critical thinking. They designed the test to probe students’ understanding of claims, credibility, conclusions, evidence, and argument. The CTCT consists of 55 multiple-choice items that have been linked to Cluster One learning objectives. Cronbach’s alpha for this test was α = .66 (Fall 2005) and α = .70 (Spring 2007). Students completing this test before starting classes in Fall 2005 (M = 27.6, SD = 5.72) scored significantly lower than students completing their midcareer assessment in Spring 2006 (M = 29.8, SD = 6.12), t(888) = 5.51, p < .001. This increase in critical thinking scores may be attributed to the coursework students completed after beginning at JMU. The Center for Assessment and Research Studies has shared the data with the faculty who teach the critical thinking classes, along with more detailed analyses suggesting where students may need more help or where the curriculum could be adjusted to better address core concepts in critical thinking.
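For readers who want to reproduce this kind of analysis, the following Python sketch shows a standard Cronbach’s alpha computation and recovers the reported cohort comparison from the published summary statistics. It is illustrative only: the 445/445 cohort split is an assumption (the chapter reports only the total df of 888), and the function name is ours, not part of the CTCT materials.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_students, n_items) array of 0/1 item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # e.g., the CTCT's 55 items
    item_vars = items.var(axis=0, ddof=1)       # each item's variance across students
    total_var = items.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Recovering the cohort comparison from the reported summary statistics.
# Only t(888) is reported, so a 445/445 split across cohorts is assumed here.
t, p = stats.ttest_ind_from_stats(mean1=29.8, std1=6.12, nobs1=445,  # midcareer
                                  mean2=27.6, std2=5.72, nobs2=445)  # freshmen
print(f"t(888) = {t:.2f}, p = {p:.2g}")         # close to the reported 5.51
```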

In addition to Cluster One data, we also benefit from Cluster Three (Natural World) data collected by our colleagues. It is our position that critical thinking and scientific reasoning are at least related, that improvements in scientific reasoning constitute to some extent improvements in critical thinking, and that measures of our students’ scientific reasoning can inform us about their critical thinking. In particular with respect to training psychology majors at JMU, we believe that our students’ critical thinking is enhanced not only by general education coursework designed to address knowledge, skills, and abilities (KSAs) related to critical thinking specifically (i.e., Cluster One, Skills for the 21st Century), but also by coursework designed to address KSAs related to scientific reasoning (i.e., Cluster Three, the Natural World). From this argument, it follows that we can also use measures of scientific reasoning as measures of our students’ critical thinking.

The Cluster Three (Natural World) requirements include a math course and science courses to establish quantitative and scientific literacy. The general education program is intended to provide all students with foundational KSAs on which they can build more specialized KSAs from their majors, minors, and preprofessional programs.

The Natural World (NW) assessment instrument consists of 50 objective questions. Reliability has steadily improved with each revision of the instrument. Our assessment specialists selected the best items from earlier administrations to form the fifth version, NW–5. The NW–5 showed the best reliability to date, with α = .67 for freshmen and α = .75 for sophomores (Horst, Lottridge, Hoole, Sundre, & Murphy, 2005).

To examine scientific thinking in our psychology majors, we looked at how they performed on the NW–5 during two test administrations: Fall 2001 and Spring 2003. Forty-one psychology majors completed the NW–5 during Fall 2001, and 70 psychology majors completed it during Spring 2003. Twenty-two students appeared in both groups, so we were able to examine independent as well as dependent group differences over time.

Figure 7.1 shows the summary data for performance on the NW–5 test. Because each correct answer on the 50-item test was awarded 2 points, students could obtain a score from 0 to 100. The bars on the left represent the mean (±SE) test score for independent groups of psychology majors who took the test either before beginning classes or during their midcareer assessment (summarizing independent group differences), and the bars on the right represent scores for students who took the test both as incoming freshmen and as midcareer students (summarizing dependent group differences). In both cases, the midcareer psychology students performed better than the incoming psychology students. This difference in performance was confirmed by an independent-samples t test for the independent groups, t(65) = −2.49, p = .015, d = .68, but for the smaller subset of students whose scores we could link over time, a dependent-samples t test failed to confirm a statistically significant difference for the repeated group, t(21) = −.99, p = .33, d = .25.

Figure 7.1. Mean (±SE) NW–5 scores for incoming freshmen and midcareer students.

The significantly higher scores of midcareer students are consistent with an increase in scientific reasoning among our psychology majors over their first two years at JMU. However, there are several caveats. First, this difference was significant only for the independent groups; data from the repeated group, which represent actual growth over time for a group of students, failed to reach statistical significance (although this could be a function of the small sample size, n = 22). Second, there are a number of reasons that midcareer students may do better that are not specific to scientific reasoning skills (e.g., student maturation, number of courses completed, the loss from the university of those students with the lowest aptitude for science and mathematics).

We did not confirm a statistically significant improvement within the small group of individuals who repeated the test. However, the positive direction of change in the dependent group, together with the significantly higher scores of midcareer students in the independent group, supports the hypothesis that our psychology majors’ scientific reasoning does improve over their first two years at JMU, with an effect ranging from small (d = .25) to moderate (d = .68). Furthermore, to the extent that scientific reasoning is related to critical thinking, these data support the assertion that our students’ critical thinking skills improve over that time.
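To make the two analyses concrete, here is a minimal sketch of an independent-samples t test with a pooled-SD Cohen’s d, alongside a dependent-samples t test on linked scores. All score vectors are simulated stand-ins, and the 19/48 split of the independent groups is an assumption inferred from the reported df of 65 (19 freshmen-only + 48 midcareer-only − 2 = 65); this is not the analysis code used at JMU.

```python
import numpy as np
from scipy import stats

def cohens_d_independent(a, b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) +
                  (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Hypothetical NW-5 scores (0-100); group sizes are assumed as described above.
rng = np.random.default_rng(42)
freshmen_only = rng.normal(58, 10, 19)       # students tested only as freshmen
midcareer_only = rng.normal(64, 10, 48)      # students tested only at midcareer
t_ind, p_ind = stats.ttest_ind(midcareer_only, freshmen_only)

# Paired analysis for the 22 students tested at both time points.
time1 = rng.normal(58, 10, 22)
time2 = time1 + rng.normal(2.5, 8, 22)       # modest within-student gain
t_rep, p_rep = stats.ttest_rel(time2, time1)
d_rep = np.mean(time2 - time1) / np.std(time2 - time1, ddof=1)

print(f"independent: t(65) = {t_ind:.2f}, p = {p_ind:.3f}, "
      f"d = {cohens_d_independent(midcareer_only, freshmen_only):.2f}")
print(f"repeated:    t(21) = {t_rep:.2f}, p = {p_rep:.3f}, d = {d_rep:.2f}")
```

The paired test is the more direct measure of growth, but with n = 22 it has far less power than the independent comparison, which is consistent with the pattern of results reported above.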

As with the CTCT data, faculty who teach courses in Cluster Three of the general education program receive data from the NW assessment, and they have used this information to make changes in coursework to better address students’ needs in this area. Additionally, faculty have continued to improve the instrument used to assess scientific reasoning; they currently administer version 8 of the NW test.

Overall, the current data are encouraging, and we view these assessments from our general education program as informative. Critical thinking is not the province of psychology alone, and to the extent that other sectors of our university curriculum address these issues, assessments of those experiences can provide us with meaningful data. By looking beyond our specific psychology curriculum, we get a more complete picture of our psychology students’ KSAs. Furthermore, it may be possible to use these more general tests to examine specific components of the psychology curriculum. For example, the NW test may be an appropriate tool for assessing our students’ skills before and after completing our statistics and research methods course sequence. Thus, although designed to assess more general skills, these instruments may have utility that is specific to the psychology major.


Assessing Critical Thinking in the Psychology Major

In addition to assessing the critical thinking of our psychology students engaged in general education coursework, we also test critical thinking at the end of our students’ undergraduate careers. These senior assessments focus on the KSAs of the psychology major. The instruments we use for these assessments address critical thinking as well as other important learning goals and outcomes for the psychology major (see Halonen et al., 2002).

Assessment of Critical Thinking Using Behavioral Checklists

The Academic Skills-Experience Inventory (ASI; Kruger & Zechmeister, 2001) measures 10 skill areas relevant to the goals of a psychology major and liberal arts education. Each skill area has 9 questions, so the entire scale consists of 90 questions. Each question describes a specific behavior, and the student must select either “applies to me” or “does not apply to me.” Although this 90-item scale is long, students are able to fill it out quickly because they are making only a dichotomous choice for each item.

One of the 10 skill areas is critical thinking/problem solving. The critical thinking component has 3 sections: evaluating research studies, evaluating costs/benefits, and taking human biases into account when making decisions. Each of these sections has 3 items. For example, one of the items relevant to evaluating research studies is “I have written a critique of a published research study.” The possible range on this scale is 0–9, with higher numbers reflecting that the student has engaged in more of these activities. Kruger and Zechmeister (2001) reported that seniors scored significantly higher on the critical thinking items than first-year students as a result of their educational experiences.
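Because the ASI’s critical thinking score is simply a count of endorsed dichotomous items, scoring is straightforward. The sketch below illustrates the idea; the item keys and section names are hypothetical placeholders, not the actual ASI item identifiers.

```python
from typing import Dict, List

# Hypothetical item keys: True = "applies to me", False = "does not apply to me".
# The critical-thinking subscale has 9 items across three 3-item sections.
SECTIONS: Dict[str, List[str]] = {
    "evaluating_research": ["ct1", "ct2", "ct3"],
    "costs_benefits":      ["ct4", "ct5", "ct6"],
    "human_biases":        ["ct7", "ct8", "ct9"],
}

def critical_thinking_score(responses: Dict[str, bool]) -> int:
    """Count of endorsed items across the three sections (range 0-9)."""
    return sum(responses[item]
               for items in SECTIONS.values()
               for item in items)

student = {"ct1": True, "ct2": False, "ct3": True, "ct4": True, "ct5": True,
           "ct6": False, "ct7": False, "ct8": True, "ct9": False}
print(critical_thinking_score(student))   # prints 5
```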

We have found this instrument helpful for measuring critical thinking for several reasons. First, we are able to compare the critical thinking scores of our seniors to those of students from other schools. Based on Kruger and Zechmeister’s (2001) article, we know that psychology seniors at JMU (M = 4.91, SD = 1.67) report critical thinking experiences similar to those of students at Loyola University of Chicago (M = 4.59, SD = 2.06), t(232) = .41, p > .05.

Second, we are able to measure whether changes to our major have an impact on critical thinking scores. We recently modified our psychology curriculum, and because we have been using the ASI for several years, we will be able to determine whether these changes to our program affect students’ critical thinking scores. This is just the first year of the new major, so it is too early to tell whether the new curriculum will increase students’ critical thinking experiences. However, we do have an assessment strategy in place to measure any changes that may occur.

Assessment of Critical Thinking Using Student Reflections

In 2002, the Task Force on Undergraduate Psychology Major Competencies appointed by the Board of Educational Affairs of the American Psychological Association published
