2.3 Results and discussion
2.3.1 Mixed-method research results
Features that were common to the Google QUEST data and the outcome of the mixed-method analysis were included in this study. A few of the additional features drawn from the Google QUEST data were ‘question_definition’, ‘question_entity’, ‘question_spelling’, ‘question_expect_short_answer’, ‘question_interest_self’, and ‘question_choice_type’. Interviewees shared a common opinion of these features as found in Google QUEST. Further, a pre-trained BERT model was implemented in which the model’s features were mapped to the features extracted through qualitative analysis. This model identified the following features: ‘question_verify_intent’, ‘question_communicational’, ‘question_expect_short_answer’, ‘question_seek_fact’, ‘question_novel_answer’, ‘question_interest_others’, ‘question_interest_self’, ‘question_multi_interpretation’, ‘question_verify’, ‘question_seek_opinion’, ‘question_choice_type’, ‘question_compare_type’, ‘question_consequence_action’, ‘question_definition’, ‘question_entity’, ‘question_instructions’, ‘question_procedure’, ‘question_seek_reason’, ‘question_spelling’, ‘question_well_written’, ‘question_subjectivity’, and ‘question_polarity’, and used them to classify a question as creative. The model thereby addresses the research question of identifying a creative question from a pool of other questions. The interpretation of the features is shown in Table 2.3.
Table 2.3: Interpretation of features
Features Interpretation
‘question_verify_intent’ Whether the rationale of a question needs to be verified
‘question_communicational’ Whether a question is conversational
‘question_expect_short_answer’ Whether a question expects a short answer
‘question_seek_fact’ Whether a question expects a factual answer
‘question_novel_answer’ Whether a question expects novelty in the solution
‘question_interest_others’ Whether a question is of interest to others
‘question_interest_self’ Whether a question is of interest to oneself
‘question_multi_interpretation’ Whether a question allows multiple interpretations across respondents
‘question_verify’ Whether a question can really be reported as a creative question
‘question_seek_opinion’ Whether a question expects an opinion from respondents
‘question_choice_type’ Whether a question is objective or subjective
‘question_compare_type’ Whether a question expects a comparison of alternative solutions
‘question_consequence_action’ Whether a question expects the consequences of particular action(s)
‘question_definition’ Whether a question expects recall of information
‘question_entity’ Whether a question is associated with a particular object or product
‘question_instructions’ Whether a question expects instructions
‘question_procedure’ Whether a question expects a procedural solution
‘question_seek_reason’ Whether a question expects a well-explained solution
‘question_spelling’ Whether a question seeks a spell-check
‘question_well_written’ Whether a question is well narrated
‘question_subjectivity’ Whether a question is based on one’s opinion, experience, and preference
‘question_polarity’ Whether a question is negative, positive, or neutral in terms of interpretation
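To make the mapping concrete, the sketch below shows one plausible way to score the attributes of Table 2.3 with a pre-trained BERT encoder and a multi-label classification head, using the Hugging Face transformers library. This is a minimal sketch under stated assumptions, not the study’s exact pipeline: the base checkpoint, the function name predict_feature_scores, and the untrained head (which would need fine-tuning on labelled data such as Google QUEST before its scores become meaningful) are illustrative.

```python
# Hedged sketch: scoring the Table 2.3 question attributes with a
# pre-trained BERT encoder and a multi-label head (Hugging Face
# `transformers`). The head is untrained here and would require
# fine-tuning before the scores are meaningful.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

FEATURES = [
    "question_verify_intent", "question_communicational",
    "question_expect_short_answer", "question_seek_fact",
    "question_novel_answer", "question_interest_others",
    "question_interest_self", "question_multi_interpretation",
    "question_verify", "question_seek_opinion",
    "question_choice_type", "question_compare_type",
    "question_consequence_action", "question_definition",
    "question_entity", "question_instructions",
    "question_procedure", "question_seek_reason",
    "question_spelling", "question_well_written",
    "question_subjectivity", "question_polarity",
]

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(FEATURES),
    problem_type="multi_label_classification",  # sigmoid per attribute
)
model.eval()

def predict_feature_scores(question: str) -> dict:
    """Return a score in [0, 1] for each attribute in Table 2.3."""
    inputs = tokenizer(question, truncation=True, max_length=128,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits.squeeze(0)
    return dict(zip(FEATURES, torch.sigmoid(logits).tolist()))
```

A downstream decision rule, for example thresholding an aggregate of these attribute scores, could then flag a question as creative; the chapter does not specify the exact rule used.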
The frequency analysis depicted in Figure 2.7 illustrates how often features were repeated by multiple experts. The x-axis represents cluster labels, and the y-axis shows the number of times the open codes were repeated against each cluster. The colour codes represent the subjects from whom the qualitative data was acquired. Firstly, variant terms of open-ended questions were repeated by ten subjects. The frequency with which each subject mentioned variants of open-ended questions is as follows: subject1: 13 times, subject2: 3 times, subject3: 7 times, subject4: 9 times, subject5: 8 times, subject6: 2 times, subject7: 2 times, subject8: 5 times, subject9: 3 times, and subject10: 2 times. A similar pattern holds for the other features of creative questions illustrated in the graph. Since the frequency is generated from qualitative data, the inclusion of a feature in the model depends not only on its frequency but also on an affirmative response to add that particular feature. For example, ‘degree of creativity’ received a very low frequency, and most of the experts did not know, or were unsure of, exactly how to define it in the context of a question. This feature was therefore excluded for two reasons: its low frequency and the ambiguous responses received from experts during the interviews.
Similarly, ‘communicative question’, which represents any form of communication in a question, showed a low frequency. However, it was observed that question papers associated with nationalized examinations were conversational (Baron, 2018; Hargie, 2006; Holmes et al., 2017). Therefore, this feature was considered as input to the model. The frequency of ‘qualities of examiner’ was relatively moderate; however, since this study focused specifically on attributes of questions, the characteristics of examiners were not studied. Doing so would require multiple psychological tests to be administered and then related to creative questions. All other features had relatively moderate to high frequencies and were contextually eligible for inclusion in the study.
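The inclusion rule just described can be made concrete with the short sketch below, which tallies open-code repetitions per cluster across subjects and admits a feature only when both its total frequency and the experts’ affirmative responses support it. The data layout, the threshold value, and the function name select_features are illustrative assumptions, not the study’s exact criteria.

```python
# Minimal sketch of the feature-inclusion rule: frequency alone is not
# enough, experts must also have affirmed adding the feature.
from collections import Counter

def select_features(open_codes, affirmed, min_frequency=10):
    """open_codes: list of (subject, cluster_label) pairs from coding.
    affirmed: dict mapping cluster_label -> bool (expert agreement).
    Returns the cluster labels eligible for inclusion in the model."""
    totals = Counter(label for _, label in open_codes)
    return [label for label, freq in totals.items()
            if freq >= min_frequency and affirmed.get(label, False)]

# Example: 'degree of creativity' is dropped both for low frequency and
# for the ambiguous expert responses reported above.
codes = ([("subject1", "open-ended questions")] * 13
         + [("subject2", "open-ended questions")] * 3
         + [("subject1", "degree of creativity")] * 2)
print(select_features(codes, {"open-ended questions": True,
                              "degree of creativity": False}))
```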
Figure 2.7: Frequency analysis of features evolved from coding (x-axis: cluster labels, namely open-ended questions, subjective questions, application-oriented questions, intent understandable of questions, communicative questions, factual questions, procedure-seeking questions, seeking uncommon answers, other interested in question, verify creative questions, opinion-seeking questions, comparison of alternative of…, seeking consequences of actions, seeking well-explained solutions, question interpretation, narration of questions, qualities of examiner, and degree of creativity; y-axis: frequency of open codes, 0 to 70; bars colour-coded by Subject1 to Subject10)
The major focus of this study was to identify creative questions from a pool of non-creative questions. A creative question attempts to trigger creativity in students. The degree of creativity invoked in a response depends on the level of skill and the creative thought process imbibed in students. Therefore, this study does not focus on students' responses in order to identify creative questions. Instead, it highlights the factors, acquired from experts, that they use to formulate creative questions to trigger students' creativity in mass examinations. Further, the model designed for this study would assist examination paper setters in verifying whether the questions they formulate are creative or not. Although examination paper setters in Design education possess experience in framing creative questions, formulating them on a large scale might in some cases lead to mistakes. Self-bias is another major factor that might arise in formulating creative questions. To overcome these issues, this study enables human-machine engagement to optimize the decision-making of examination paper setters. This leads to an iterative process for formulating creative questions, as illustrated in Figure 2.8. A question framed by a human serves as input to the model, which predicts whether it is creative. If a question is predicted as creative, it requires no further processing. A question predicted as not creative receives an immediate decision to reformulate it, thereby reducing human dilemma. This process iterates until a question is predicted as creative.
Figure 2.8: Human-machine engagement to formulate creative questions
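As a hedged illustration of the loop in Figure 2.8, the sketch below assumes two callbacks: classify_creative standing in for the model’s prediction, and reformulate standing in for the paper setter’s revision step. Both names, and the safety bound on iterations, are illustrative assumptions rather than the study’s actual implementation.

```python
# Sketch of the human-machine loop in Figure 2.8. `classify_creative`
# and `reformulate` are assumed interfaces for the model and the
# paper setter respectively.
from typing import Callable

def formulate_creative_question(
    draft: str,
    classify_creative: Callable[[str], bool],
    reformulate: Callable[[str], str],
    max_rounds: int = 10,
) -> str:
    """Iterate until the model predicts the question is creative."""
    question = draft
    for _ in range(max_rounds):
        if classify_creative(question):
            return question               # creative: no further processing
        question = reformulate(question)  # paper setter revises the draft
    return question  # safety bound reached: return the latest draft
```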
There was another aspect of studying creative questions where one might be inquisitive about the degree of creativity. The literature reports a study of the degree of creativity of products (Sarkar & Chakrabarti, 2011). While acquiring qualitative data through semi-structured interviews, a question associated with the degree of creativity of examination questions was asked. However, there was no definitive response from the experts in this field. Typical responses, quoted verbatim, were: “Ahhhhh………I am not sure about the level of creativity.”, “We do not assess the degree of creativity in a question. Sometimes a question can be relatively critical but again criticality is not creativity.”, “Sometimes I provide them with more constraints, but I am not sure about degree of creativity.”, “I don’t know the simplest of the things that we put it up and the problems of the questions and what for like a we do at the beginning of degree can be expressed like you feel it has very little creativity or you feel like if your personal goal in life is very high creative. So how can we afford that? I think it is very difficult to assess that. How much creative it is very subjective.”, and “Aaaa….. I think its difficult to judge the level of creativity because I still feel it is difficult to put creativity into category; that’s my personal opinion.” This outcome might have two interpretations: first, paper setters in Design education do not formulate questions from the perspective of the degree of creativity; or second, it is a grey area that requires intensive research to investigate.