
Evaluating Design Thinking

such as the Learning Activity Management System (LAMS) (Cameron, 2006), they can potentially use these to support reflection on their design decisions and processes.

7.3.3 Developing Design Dispositions

Whether designers are born or made remains a debatable issue. Nevertheless, our studies of teachers show that their perceived confidence in design dispositions had a significant positive influence on their perceived confidence for ICT lesson design practices (Koh et al., in press). While design thinking is associated with the practices and strategies employed to design, designers' dispositions are affective considerations that could influence how they approach design. For example, teachers' beliefs could constitute greater barriers to the integration of ICT than barriers related to the lack of resources or equipment (Ertmer, 2005). In teacher training, strategies such as instructor modeling and hands-on practice are important for building teachers' acceptance of and confidence in using ICT tools (Brush et al., 2003; Koh & Divaharan, 2011). Koh, Chai, and Lee (2013) found that multiple design cycles are useful for strengthening teachers' confidence to design. For designers such as teachers and students who may have apprehensions towards design, these strategies may be useful in helping them build confidence for engaging in design work. Over time, this may have a positive influence in changing their dispositions towards design.

In summary, several methods appear to be important for developing design thinking: clear articulation and sequencing of design problems to promote recognition of design patterns, multiple opportunities to design, reflection activities that help designers make sense of their design decisions, and instructional strategies that support the development of confidence and dispositions for design work.

7.4 Evaluating Design Thinking

7.4.1 Perceptions of Design

Designers' perceptions of their design practices can provide insights about the extent to which they are able to think like designers. In a study of 32 undergraduate design students, the measures of design (MOD) protocol was used to characterize their design conceptions (Carmel-Gilfilen & Portillo, 2010). This protocol comprises nine essay questions that elicit comments about respondents' perceptions of areas such as design processes, project perceptions, and design evaluation. Trained raters scored the respondents' answers to determine the extent to which they were what Perry (1968) described as dualistic thinkers, who were more comfortable with defined options for design, or multiplistic thinkers, who preferred exploring multiple options and challenging constraints. The study found that multiplistic thinkers tended to have better studio performance.

Such general classifications of design expertise have not been used in educational studies, where surveys are more often employed. Koehler and Mishra (2005) studied the design behaviors of four faculty and 14 graduate students through progress surveys as they designed online learning courses. The 33 Likert-scale items of the survey examined the respondents' perceptions of their course experiences with respect to the learning environment, time and effort spent on the design project, learning and enjoyment, group functioning, beliefs about online education, and the extent to which the respondents considered technological, pedagogical, and content issues both individually and as a group. It was found that the groups progressed from initial discomfort with the open-endedness and messiness of the design project to finding it a fulfilling learning process. Consequently, the respondents also reported improved perceptions of being able to consider technological, pedagogical, and content issues integratively as they designed. Such evaluations can provide in-depth understanding of the shifts in learners' design experiences and are informative for helping instructors support learners through the stresses and strains they experience during design. Nevertheless, such measurements are course specific, and there is limited scope for generalization and comparison across different course contexts.

Koh et al. (in press) developed an 18-item survey to assess teachers' perceptions of their design dispositions, lesson design practices for supporting constructivist-oriented ICT integration, and knowledge of ICT integration or TPACK. The survey was validated with confirmatory factor analysis with 201 Singapore teachers.

Structural equation modeling showed that teachers' perceptions of ICT lesson design practices had a stronger positive influence on their TPACK than their design dispositions. These findings suggest that teachers' confidence in their TPACK is better enhanced by raising their confidence for engaging in the ICT lesson design practices that support constructivist learning rather than by influencing their design dispositions. The findings also imply that ICT education for teachers needs to focus on methods that develop teachers' confidence to engage in the desired lesson design practices. This validated survey lends itself to easy replication across course contexts where respondents' ratings can be measured and compared. Yet, it still suffers from the limitation common to self-reported data: perception may not be representative of practice. Design perceptions alone cannot fully characterize design expertise.

7.4.2 Design Process

The importance of studying design processes has been recognized, but the challenge lies in their representation and interpretation. To characterize the design processes undertaken by groups of graduate students who were designing lessons for e-learning, Koehler, Mishra, and Yahya (2007) used content analysis to map their design discussions. By tabulating the content, the length, and the turns of conversation within these discussions, these authors found that the design processes were influenced by the presence of different group members as well as the interpersonal dynamics within the team. Koh, Chai, and Tay (2014) also used content analysis to code the design talk collected from three teacher groups throughout a school year with respect to the issues discussed. They used chi-square analysis to examine the statistical differences among the coded categories and found that when logistical issues dominated teachers' talk, it curtailed the group's propensity to engage in lesson design. Furthermore, the facilitators and the profile of group members influenced the extent to which a group was able to engage in productive design.
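To illustrate the kind of chi-square comparison described above, the following sketch computes a Pearson chi-square statistic over a contingency table of coded design-talk categories. This is not the authors' actual analysis; the group names, category labels, and counts are hypothetical, and the statistic is computed in plain Python for transparency.

```python
# Illustrative sketch (hypothetical data, not the Koh, Chai, & Tay analysis):
# a Pearson chi-square statistic over counts of coded design-talk categories.

def chi_square(table):
    """Pearson chi-square statistic for a contingency table,
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = teacher groups, columns = talk categories
# (pedagogical, technological, logistical).
observed = [
    [40, 25, 35],   # Group A
    [50, 20, 10],   # Group B
    [30, 15, 55],   # Group C
]
stat = chi_square(observed)
df = (len(observed) - 1) * (len(observed[0]) - 1)  # degrees of freedom
print(f"chi-square = {stat:.2f} on {df} degrees of freedom")
```

A large statistic relative to its degrees of freedom would indicate that the distribution of talk categories differs across groups, which is the kind of evidence used to argue that, for example, logistical talk dominated in some groups.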

The two studies described so far show attempts to understand the process of design thinking in educational contexts. Through content analysis, some insights have been gained regarding the kinds of design issues raised and the processes of group-based lesson design. Nevertheless, the characterization and identification of effective design processes remain challenging. In design fields, the technique of linkography has been used in protocol analysis to examine how designers generate ideas (Goldschmidt & Weil, 1998). This is done by breaking down design conversations into design moves, which are defined as steps that advance the design situation (Goldschmidt, 1995). Linkographs are then created by determining whether design moves have backlinks (connections to previous moves) or forelinks (connections to subsequent moves). Graphical representations of the design process can then be made, and link indexes, comprising ratios between the number of links and the number of moves, can be computed to describe design activity. As compared to the statistical approach adopted by Koh et al. (2014), linkography has the advantage that the number of designers and the quantity of talk contributed by each designer do not affect the frequency of moves coded (Kan & Gero, 2008).

This is because linkographs are concerned with discerning important moves and the relationships among moves. While linkographs hold some promise for the representation and evaluation of design processes, the development and interpretation of link indexes with respect to different kinds of design activities are still in progress (Kan & Gero, 2005). Nevertheless, it is a method that can provide some insights into how the productivity of design can be characterized.
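The link-index computation described above can be sketched minimally as follows. The moves and links here are hypothetical, and the representation (numbered moves plus directed links from earlier to later moves) is one simple way to encode a linkograph, not a standard implementation.

```python
# Minimal linkography sketch with hypothetical data. Design moves are
# numbered in sequence; a link (i, j) with i < j records that move j
# connects back to move i (a backlink for j, a forelink for i).

def link_index(n_moves, links):
    """Link index: ratio of the number of links to the number of moves."""
    return len(links) / n_moves

def backlinks(move, links):
    """Earlier moves that the given move connects back to."""
    return [i for (i, j) in links if j == move]

def forelinks(move, links):
    """Later moves that connect back to the given move."""
    return [j for (i, j) in links if i == move]

# Hypothetical coded transcript: 6 design moves, 5 links.
links = [(1, 2), (1, 3), (2, 4), (3, 5), (4, 6)]
n_moves = 6

print("link index:", link_index(n_moves, links))
print("backlinks of move 4:", backlinks(4, links))
print("forelinks of move 1:", forelinks(1, links))
```

A move with many forelinks would be a candidate "critical move" that seeded later ideas, which is the kind of information a graphical linkograph makes visible at a glance.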

7.4.3 Design Outcomes

In fields such as product design, design outcomes are typically apparent through information on customer satisfaction or product sales. In the field of education, where design tends to be a means of learning professional or content knowledge, the design outcomes to be measured are tied to the educational aims of lesson activities.

For example, for students, this could typically comprise their understanding of content knowledge and perhaps the display of important social and collaborative skills if the design project is group based. Therefore, evaluations of design outcomes in such cases are contextually driven. A larger aim of learning by design is content mastery. At times, students' exam performance can also become an important indicator of design outcomes.

In terms of teachers, design outcomes can be assessed through ratings of their lesson plans or the implementation of the designed lessons. Rubrics provide some form of design goals and can also be used as a means to help teachers steer their design decisions towards the desired kinds of design practice. An example can be found in Angeli and Valanides' (2009) guidelines for assessing the design of ICT lessons. These authors assert that the integration of ICT tools has more impact when used with content areas that are challenging for students. Their guidelines therefore emphasize the need to select ICT tools by identifying the kinds of content transformations they can potentially bring to the lesson. Another example can be found in Collins and Halverson's (2010) rubric that was described in Chap. 5. This rubric describes the kinds of activities associated with didactic- and constructivist-oriented lessons. It allows one to assess the extent to which teachers are able to conceptualize constructivist-oriented lessons. A third example of lesson design rubrics is Koh's (2013) articulation of the guidelines for assessing the five dimensions of meaningful learning (active, constructive, intentional, cooperative, and authentic) with ICT developed by Howland, Jonassen, and Marra (2012). These dimensions emphasize that meaningful learning occurs through learners actively directing their own learning by manipulating data and ICT tools to construct meaning, either individually or in a group. The rubric provides guidelines for how each dimension can be implemented at different levels. It supports the design of ICT lessons by suggesting that teachers can choose to implement one dimension (e.g., intentional learning) at a higher level than another (e.g., collaborative learning). One advantage of using rubrics to assess lesson design is that the same rubrics can also be used to guide the assessment of lesson implementation.
In this way, the chain of design activities from design to implementation can be assessed consistently.

In summary, design thinking has been evaluated from various facets, including designers' perceptions, the processes they go through, and the outcomes of design. In educational contexts, there appear to be no commonly accepted methods for evaluating these three aspects.
