Chapter 1. Introduction
1.5 State of the art review
1.5.3 Novelty in images or sketches
ii. The majority of the studies have demonstrated measuring novelty based on its parameters, but hardly any study illustrated the procedure of identifying and acquiring the parameters that can be used in the digitized evaluation of novelty in descriptive creative aptitude in Design education.
iii. Parameters of novelty are domain-specific, and hardly any studies focus on features of novelty associated with evaluating the descriptive creative aptitude of students in Design education.
iv. The majority of the articles have shown evaluation of novelty in textual documents and the associated models or architectures, but there were hardly any models or architectures related to the digitized evaluation of novelty in students' responses illustrating descriptive creative aptitude in Design education.
base for learning. Studies using these concepts represent images as embeddings learned by neural networks. Deep Learning (DL) processes images through layers of interconnected nodes whose weights are learned during training. A contrastive loss in this type of network adjusts the embeddings so that, in the embedding space, examples with similar class labels are pulled together while dissimilar ones are pushed apart (Chopra et al., 2005; Guillaumin et al., 2009; Hadsell et al., 2006). However, there might be circumstances that require “out-of-distribution” data during training of the network for an unexplored context. Investigating features of novelty assessment for a new context is essential to train a network adequately for accurate prediction.
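The pairwise contrastive loss cited above (Hadsell et al., 2006) can be sketched as follows; the function name and margin value are illustrative, not taken from any of the cited papers:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_label, margin=1.0):
    """Pairwise contrastive loss (after Hadsell et al., 2006): penalizes
    distance between same-class embeddings, and penalizes different-class
    embeddings only when they are closer than `margin`."""
    d = np.linalg.norm(np.asarray(emb_a, dtype=float) - np.asarray(emb_b, dtype=float))
    if same_label:
        return 0.5 * d ** 2                 # similar pair: pull together
    return 0.5 * max(0.0, margin - d) ** 2  # dissimilar pair: push apart

# A dissimilar pair already farther apart than the margin contributes no loss.
```

Minimizing this loss over many pairs is what arranges the embedding space so that distances can later stand in for (dis)similarity of class labels.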
Masana et al. (2018) recommended a modified loss function by introducing a rule that assigns zero to an image when it is “out-of-distribution” data, and one otherwise. They utilized various datasets, such as Tsinghua, the Modified National Institute of Standards and Technology (MNIST) dataset, Street View House Numbers (SVHN), and the Canadian Institute For Advanced Research (CIFAR-10) dataset, for capturing “out-of-distribution” data and further training the models. Their modified function treats at least a single image as “out-of-distribution” and then computes its distance to other, similar classes (Masana et al., 2018). However, the datasets were too disparate in contrast with solutions related to Design assessments. The investigation intended to optimize the loss function to accommodate “out-of-distribution” data, but the results were hardly utilized for scoring the novelty of images.
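The zero/one rule described above can be illustrated with a simplified indicator-weighted loss. This is a hypothetical sketch, not Masana et al.'s actual formulation: it only shows how an indicator of 0 for "out-of-distribution" samples removes them from the supervised loss term.

```python
import numpy as np

def ood_weighted_loss(per_sample_loss, is_in_distribution):
    """Illustrative indicator-weighted loss: each sample receives weight 1
    if it is in-distribution and 0 if it is "out-of-distribution", so OOD
    images contribute nothing to the averaged supervised loss."""
    w = np.where(np.asarray(is_in_distribution), 1.0, 0.0)
    losses = np.asarray(per_sample_loss, dtype=float)
    # Guard against division by zero when every sample is OOD.
    return float((w * losses).sum() / max(w.sum(), 1.0))
```

In the actual method, the OOD samples are not simply discarded but drive a separate distance-based term; the sketch covers only the indicator rule.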
Salehi et al. (2020) reported a novelty detection method for medical images. Numerous datasets were utilized, such as medical data of haemorrhage and Magnetic Resonance Imaging (MRI), MNIST, Fashion-MNIST, CIFAR-10, and the Columbia Object Image Library (COIL-100), in a U-Net model (the network's shape resembles the letter U) with a Deep Convolutional Generative Adversarial Network implemented in subsequent networks (Salehi et al., 2020). Many a time, images are marked with labels or text descriptions embedded in them, and there is a dearth of specifications for handling such variations of image datasets, for example images with embedded labels or with annotations. The investigation highlighted optimizing efficiency and ablation studies; however, the results were domain-specific and associated with clinical studies. This further underlines the difference between the features and results of novelty evaluation in Design education and those of other fields.
The majority of the studies in the literature reported novelty detection of medical images, where rare clinical conditions were identified and investigated. Clinical images are prone to irregularities, which might arise from the severity of an illness, environmental factors, and so forth, making anomaly and novelty detection essential (Reinhold et al., 2020). Therefore, it is highly challenging to determine novelty in medical imagery associated with disease or abnormality prediction, where unsupervised and transfer learning techniques are essential. It is difficult to acquire labelled datasets for new and infrequent diseases. Moreover, prediction of novel images in the clinical domain requires higher accuracy (Szymkowski et al., 2020), as it deals mainly with real-time data generated from patients. However, clinical datasets and the prediction of novelty in disease differ from the evaluation of novelty in image-based creative responses: the features of novelty in clinical data and in students' responses are different, and the responses mostly require a scoring mechanism.
Many studies focused on novelty detection in planetary images (Bonnici et al., 2010; Kerner et al., 2019, 2020; Sintini & Kunze, 2020; Stefanuk et al., 2020). Multiple computational algorithms were compared, such as autoencoders, generative adversarial networks, Reed-Xiaoli detectors, and principal component analysis, in an attempt to identify the optimal method. However, there exists a difference in image patterns between planetary objects and creative responses in Design, which restricts using any of those datasets or learning outcomes in future investigation. This genre of studies is hardly associated with the novelty evaluation process in academics; therefore, the features considered in those studies are distinct from those of Design education.
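One of the algorithms compared in these studies, the autoencoder, scores novelty by reconstruction error: a model trained only on familiar images reconstructs novel images poorly. The thresholding step below is a generic sketch (the threshold value and function name are illustrative), not the pipeline of any single cited study:

```python
import numpy as np

def reconstruction_novelty(inputs, reconstructions, threshold):
    """Autoencoder-style novelty flag: an image whose reconstruction error
    exceeds `threshold` is treated as novel. `inputs` and `reconstructions`
    are arrays of flattened images of equal shape."""
    x = np.asarray(inputs, dtype=float)
    r = np.asarray(reconstructions, dtype=float)
    errors = np.mean((x - r) ** 2, axis=1)  # per-image mean squared error
    return errors, errors > threshold

# The second image is reconstructed badly, so only it is flagged as novel.
errors, novel = reconstruction_novelty([[0.0, 0.0], [1.0, 1.0]],
                                       [[0.0, 0.1], [0.0, 0.0]],
                                       threshold=0.5)
```

Choosing the threshold is itself a design decision, typically set from the error distribution on held-out familiar data.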
Wachs et al. (2018) reported a statistical learning technique for assessing novelty in graphic designs shared over a network. Novelty was evaluated by contrasting features of current images with those of images created earlier. As the images were shared over a network, user demographics and multiple network features were considered, such as the number of followers, the number of users unfollowed, the number of duplicate outgoing connections, the distance between nodes, and interpersonal ties. Multiple image features were also considered, such as compositional features comprising aesthetics and colour, and inception features involving the contents of an image (Wachs et al., 2018). However, network contents are quite dissimilar from solutions in an examination, as the novelty of an answer in a mass examination is not judged by the number of visits, hits, or followers. Generally, competitive examinations focus on problem solving (Amini et al., 2019); therefore, the relevance between a problem and its response is a fundamental aspect that pedagogues often look for.
A few articles from the literature focussed on proposing a novelty detection method for a heterogeneous dataset consisting of a combination of text and image. Amarbayasgalan et al. (2018) highlighted a mechanism for measuring novelty using the Common Objects in Context (COCO) dataset presented in a temporal window, covering any time span of the network. A Mask Region-based Convolutional Neural Network (Mask R-CNN) was used to combine the text and image being predicted by machine learning models, and text and images were converted into numerical vector representations using autoencoders (Amarbayasgalan et al., 2018). Further, novelty was evaluated using unsupervised algorithms (Amorim et al., 2019). The methodology was proposed for social media, which might not require any scoring function for novelty assessment.
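Once text and image have been reduced to numerical vectors, one simple unsupervised novelty score is the distance of a fused vector from the centroid of previously seen vectors. This is a hypothetical sketch of that final step only (the function name and fusion-by-concatenation choice are assumptions, not the cited pipeline):

```python
import numpy as np

def fused_novelty_score(image_vec, text_vec, corpus_vecs):
    """Fuse an image embedding and a text embedding by concatenation,
    then score novelty as the Euclidean distance from the centroid of
    previously seen fused vectors: the larger the distance, the more
    the sample deviates from the known corpus."""
    fused = np.concatenate([np.asarray(image_vec, dtype=float),
                            np.asarray(text_vec, dtype=float)])
    centroid = np.asarray(corpus_vecs, dtype=float).mean(axis=0)
    return float(np.linalg.norm(fused - centroid))
```

A centroid distance is the crudest unsupervised option; clustering-based or density-based scores follow the same pattern with a finer notion of "previously seen".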
Literature studies suggested novelty as a conceptual shift in design: a sketch embodying one concept can be related to another concept. This is a significant procedure in Design education, as it involves evaluating a design from multiple perspectives and identifying uniqueness in a response. Karimi et al. (2019) proposed two parameters to evaluate novelty: (1) visual similarity and (2) conceptual similarity. Visual similarity was measured using the QuickDraw dataset, where vector representations of visual sketches were computed; the feature vectors were then clustered based on their visual similarity. The Google News corpus was used to evaluate conceptual similarity: word embeddings were created, and subsequently a cosine similarity function was used to measure the distance between concepts (Karimi et al., 2019).
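The cosine similarity function used for conceptual similarity is standard and can be written directly; only the function name here is illustrative:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two concept (word) embeddings.
    Values near 1 indicate closely related concepts; lower values
    suggest a larger conceptual shift between them."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

In the conceptual-shift reading, a response whose sketch concept has low cosine similarity to the prompt's concept embedding is the one exhibiting a shift, and hence candidate novelty.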
Assessment of creative responses in Design in a general context is different from examination on a large scale. Pedagogues initially intend to search for a relevant response (Abacha & Demner-Fushman, 2019; Esposito et al., 2020) and then compare it with other responses to evaluate novelty. Relevance is highly prioritized in examinations because a response might possess aesthetic attributes, but it must still meet the requirements of the question.
Literature highlights novelty detection by applying various classification algorithms (Désir et al., 2013; Schölkopf et al., 2001); however, it is challenging to acquire a novel dataset, as novelty is associated with the newness of a concept. Bodesheim et al. (2015) proposed a concept shift for every novel response. Similar to the human evaluation strategy, irrelevant responses were discarded before scoring novelty. Then, a local learning model for each sample was framed using the K-nearest-neighbour algorithm; a response was considered novel if it was distant from its nearest neighbours (Bodesheim et al., 2015). However, the datasets considered in the article were purely image-based solutions, and there were no case studies related to a combination of sketch and textual descriptions. Further, the study described evaluation of novelty for a generalized context; for a specific context, such as evaluation in a mass examination in Design-based educational assessment, context-specific parameters for evaluation need to be defined.
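The nearest-neighbour idea above admits a compact sketch: score each response by its mean distance to its k nearest neighbours in feature space, so that an isolated response earns a high novelty score. This is a simplified illustration in the spirit of, not a reproduction of, Bodesheim et al.'s local learning models:

```python
import numpy as np

def knn_novelty(sample, pool, k=3):
    """Mean distance from `sample` to its k nearest neighbours among the
    feature vectors in `pool`. A large value means the response lies far
    from known responses and is therefore a novelty candidate."""
    pool = np.asarray(pool, dtype=float)
    dists = np.linalg.norm(pool - np.asarray(sample, dtype=float), axis=1)
    return float(np.sort(dists)[:k].mean())
```

Discarding irrelevant responses first, as the study does, matters here: an off-topic answer would otherwise sit far from all neighbours and score as spuriously "novel".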
Linder et al. (2013) highlighted procedures to assess novelty from Google's image search results. Initially, a sample of 3579 images was acquired, and search queries for the sample were created algorithmically. The queries were used to perform image searches, and the search results were captured. A function was derived that measures novelty by inverting the number of search results generated by an image (Linder et al., 2013). This mechanism was coherent but hard to correlate with the dimensions of an examination. Though web search techniques might apply in a few cases, such as design registration or design exhibitions, evaluation associated with a mass examination requires comparing and contrasting ideas within a cohort of responses. Further, assessment in a Design-based entrance examination involves creative responses that are an amalgamation of visual and textual content. The study did not highlight possible categories of image-based patterns of creative responses. Moreover, the derived function was restricted to image-based responses on the web and was not adapted to solutions found in other media.
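The inverse-search-count idea can be sketched as follows; the exact inversion used by Linder et al. is not reproduced here, so the `1 / (1 + hits)` form and the function name are assumptions chosen only to keep the score bounded in (0, 1]:

```python
def inverse_hit_novelty(num_search_results):
    """Novelty score in the spirit of Linder et al. (2013): the fewer
    near-matches a web image search returns, the more novel the image.
    Zero hits yields the maximum score of 1.0."""
    if num_search_results < 0:
        raise ValueError("search-result count cannot be negative")
    return 1.0 / (1.0 + num_search_results)
```

The contrast with examination settings is visible in the signature itself: the score depends on global web hits, whereas a cohort-based evaluation would take the other responses in the cohort as input.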
The insights of this section of the literature review are as follows:
i. Novelty is measured quantitatively across various fields, but hardly any study highlighted digitized evaluation of novelty in responses illustrating image-based creative aptitude.
ii. The majority of the studies demonstrated measuring novelty based on various parameters; however, the procedure of identifying and extracting these parameters was not categorically specified and detailed.
iii. Parameters for evaluating novelty are domain-specific, but hardly any studies focused on features of novelty specifically associated with the image-based creative aptitude of students in Design or other creative specialization domains.