2.1 Introduction
This study highlights human–machine collaboration and supports examiners' decision-making in framing effective questions. It attempts to reduce examiners' uncertainty and to assist quick decisions about including creativity features in their questions by providing feedback on whether a question is creative.
Highlights
• A human-centred design approach to identify parameters of questions that have the potential to instigate creative responses from students.
• Proposing a computational design model for assessing creative questions.
• Implementing the model using various tools and algorithmic techniques.
• Validating the model by measuring the inter-rater agreement.
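The fourth highlight, validating the model by inter-rater agreement, can be made concrete with Cohen's kappa, a standard chance-corrected agreement statistic for two raters. The sketch below is illustrative only: the rater labels are invented, and the thesis may employ a different coefficient (e.g. Fleiss' kappa for more than two raters).

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under chance, from each rater's marginal label rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical labels: 1 = creative question, 0 = non-creative question.
expert = [1, 1, 0, 1, 0, 0, 1, 0]
model  = [1, 1, 0, 0, 0, 0, 1, 1]
kappa = cohens_kappa(expert, model)  # → 0.5 (moderate agreement)
```

Values above roughly 0.6 are conventionally read as substantial agreement; the two disagreements in this toy example pull kappa down to 0.5.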
Such evaluation is associated with mini-c, or personal (psychological), creativity, where creativity is judged relative to other responses (Csikszentmihalyi & Wolfe, 2014).
A wide range of creativity tests is reported in the literature. Many lack reliability and validity in recognizing creativity, while others are not meant to be used in an examination context. In examinations, however, experts may choose a combination of these techniques to capture creativity from different perspectives. The literature highlights tests such as the Consensual Assessment Technique (CAT), which evaluates products, art, theories, or artifacts based on experts' opinions (Pritzker & Runco, 2011).
The Remote Associates Test (RAT) examines creativity through convergent, associative thinking: the degree to which seemingly unrelated ideas can be combined into a coherent whole. One of the most significant tests reported in the literature is the Torrance Tests of Creative Thinking (TTCT), which resembles the tests conducted for selecting students in design schools. It scrutinizes creativity from verbal and figural perspectives. The verbal part assesses creativity by analyzing the words with which candidates frame narrations, whereas the figural part is scored on the usage of visual elements, the completeness of the art, and the degree of modifiability of the visual elements (Kaufman et al., 2012).
Some tests examine the creativity of children, such as Wallach and Kogan's method, which was designed to test the creativity and intelligence of fifth-grade students (Silvia, 2008). Similarly, Getzels and Jackson's study tested gifted, creative sixth-grade students (Getzels & Jackson, 1962). These tests are associated with the intelligence and divergent thinking of children. However, the creativity of adults differs from that of children: an artifact created by a child may seem creative, while the same artifact developed by an adult might not. Moreover, each of these tests depends on the examiner's choice and personal judgment.
A highly consistent selection process is essential for qualifying students in the large-scale entrance examinations conducted in India and other countries. This study emphasizes the Indian context because of the large population of students participating in entrance examinations. Most tests associated with design education have an objective and a subjective question structure.
The objective part consists of numerical-answer, multiple-choice, and multiple-select questions, whereas the subjective part contains questions on sketching, form sensitivity, visual sensitivity, and problem identification that attempt to capture the creativity of students (Bombay, 2021b, 2021a). Solutions to objective questions are straightforward and based on a strict set of options, whereas creative responses are marked by the usefulness of ideas and their novelty relative to other responses (Sarkar & Chakrabarti, 2011). Subjective evaluation rests on individual judgment and is more complex than objective evaluation.
The literature highlights multiple types of creativity testing for different contexts, such as creativity tests for the young and for adults, testing of products, and different question patterns of design tests in the classroom. A lacuna is evident: little attention has been paid to ways of capturing creativity in design entrance examinations. The literature also offers no assurance that creativity is optimally captured with respect to all its factors and the requirements of design schools. Further, hardly any studies have proposed a standardized format for testing creativity in design-school entrance examinations. This leads to the following questions: What factors in questions can instigate creative responses among students? And do design tests in India ensure the systematic identification of creative questions among other, non-creative questions?
Creative questions are a significant component of design education that attempts to trigger creativity in students. Questioning is a medium by which pedagogues confirm students' learning and recall (Baloche & Platt, 1993). It triggers creative and critical thinking. There are multiple types of questions through which teachers try to capture learning and knowledge from students (Aziza, 2018). Creative questioning is an art and a science that invokes creativity in students (Zolfaghari et al., 2011). Not all questions are creative, and one way of assessing the quality of a question is the divergence of the creative responses it elicits from students. It is debatable, however, whether the quality of a student determines the degree of creativity in a response. Further, in a mass examination, the quality of questions must be assessed before they are delivered to students.
This research is directed towards identifying creative features of questions that instigate creative responses. It does not identify features of creative questions through peer-review techniques, observations, or individual subjective assessment. The aim is to elicit the features of creative questions from experts in the field using mixed-method research techniques, and then to digitize the identification of creative questions to achieve a consistent identification process. The study informs examiners whether a question they have framed is creative, which in turn assists pedagogues in deciding whether to reformulate it, as illustrated in Figure 2.1.
Figure 2.1: Overview of the model
Features of creative questioning (f1, f2, …, fn) were acquired from pedagogues in design education experienced in formulating creative questions. A network was built using Deep Learning (DL) techniques, capable of identifying creativity based on those features. Questions framed by examiners serve as input to the network, which categorizes each as creative or non-creative. A question that turns out to be non-creative signals to examiners that reformulation is essential to make it creative. However, this research is restricted to deciding whether reformulation of a question is required; human engagement remains essential to reframe any question. This study does not advise examiners on how a reformulation can be done to make a question creative.
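The kind of network described above can be sketched minimally as a feed-forward binary classifier over numeric question features. Everything in the sketch is hypothetical: the four features, the synthetic labelling rule (standing in for expert judgments), and the architecture are invented for illustration and do not reproduce the actual model, features f1…fn, or data of this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors (f1..f4) for 200 questions; the synthetic
# rule "creative iff f1 + f2 > 1.0" stands in for real expert labels.
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float).reshape(-1, 1)

# One hidden layer (tanh) feeding a sigmoid output: P(question is creative).
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(3000):                      # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)               # hidden activations
    p = sigmoid(h @ W2 + b2)               # predicted creative-probability
    grad_out = (p - y) / len(X)            # cross-entropy gradient wrt logit
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

# Threshold the output to label each question creative (1) or not (0).
pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
accuracy = float((pred == y).mean())       # training accuracy on toy data
```

In practice such a model would be trained on expert-labelled questions with a held-out validation set, and a "non-creative" prediction would prompt the examiner to consider reformulation, as described above.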
The investigation in this chapter addresses the research gaps highlighted in the state-of-the-art literature review presented in subsection 1.5.1 and, subsequently, the corresponding research questions and objectives reported in sections 1.9 and 1.12. They are restated below for reference.
RQ1: What are the features of a question that trigger creative responses?
RQ2: How to identify a creative question from a set of non-creative questions?
Objective 1: To identify questions that have the potential to instigate creative responses among students.
Objective 2: To identify variables of questions that have the potential to instigate creative responses among students.
Objective 3: To design a digitized system to identify creative questions that have the potential to instigate creative responses among students.