

9.4 Systems to Support Co-creative Collaboration in Mixed-Reality Environments

9.4.2 Mixed-Reality Research at the University of Essex

Traditional or blended coursework utilizing the MiRTLE platform produced learning outcomes for remote students similar to those of students who attended the course in the physical classroom. From the anecdotal evidence collected, we believe the effectiveness of MiRTLE lies in the level of presence the learner has within the virtual classroom environment, the ease with which participants adopt the technology and the relative ease with which learners using this platform can interact with other people in the lecture hall.

A simple unifying observation can be made for the tests conducted at Saint Paul to date. When implementing any system of technologically supported learning, care should be taken to consider the overall course delivery mechanisms, systematically recognizing as key system components the nature of the platform, the structure of the pedagogy and the capacity of both students and instructors to adjust their respective approaches to learning and instruction in the new environment.

into the MiRTLE learning space. The work also aims to enhance the collaborative learning processes through the use of a variety of different user interfaces, such as tablets and immersive glasses, to enable a greater level of collaboration and communication between a range of users that includes expert and novice students.

A similar approach is described by Dede et al. (Chap. 8), where a range of research questions for comparisons between a VR and "Classic" version of an educational simulation are presented, together with some generalizable design heuristics for blending MUVE-based curricula with head-mounted display devices.

A new platform has been developed (dubbed MiRTLE+), which enables a group of four learners with different levels of expertise (two novices and two experts) to collaborate on the same task at the same time. For the purposes of this initial study, the learning task was chosen to be a commonly used card game called UNO.

The UNO card game was chosen as a learning task because of its relative simplicity and because card games are commonly learnt by practicing with experts rather than by reading books or manuals.

In this scenario, the players work in pairs with the intention that the novice player can learn the game and associated playing strategies by observing their partner's game play. The novice will also learn the game rules by communicating with their team member (the expert). Based on this learning activity, we have divided the group into four possible scenarios. This division is based on two factors: the students' location (real or virtual world) and their level of expertise (expert (E) or novice (N)). This is used to determine whether their location (in the real or virtual space) has an effect on the learning outcomes and their overall performance.

Figure 9.7 illustrates these different evaluation scenarios.

Fig. 9.7 MiRTLE+ evaluation scenarios
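The scenario space behind this evaluation is the cross product of the two factors, location and expertise. A minimal sketch (the labels are illustrative, not the labels used in the study):

```python
from itertools import product

# Hypothetical labels for the two factors described in the text:
# location (real classroom vs. virtual world) and expertise level.
locations = ["real", "virtual"]
expertise = ["expert", "novice"]

# Enumerate the four evaluation scenarios for a single participant.
scenarios = [f"{loc}-{exp}" for loc, exp in product(locations, expertise)]
print(scenarios)
```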

The iClassroom smart environment supports this interaction by using a range of augmented and mixed reality collaboration tools. This forms the basis for quantitative and qualitative evaluation studies that measure the learning effectiveness of the new mixed-reality environment. With regard to the evaluation techniques, the evaluation of the students' sense of presence includes a subjective questionnaire that has been customized and extended based on the Presence Questionnaire (PQ) developed by Witmer and Singer (1998).
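Scoring a PQ-style instrument typically means aggregating Likert-scale items, with negatively keyed items reverse-scored first. A minimal sketch of that idea; the item names, the reverse-keyed item and the 7-point scale are illustrative assumptions, not the actual customized instrument:

```python
def pq_score(responses, reverse_items=(), scale_max=7):
    """Score a Presence-Questionnaire-style instrument.

    responses: dict mapping item id -> Likert rating (1..scale_max).
    reverse_items: ids whose wording is negatively keyed and must be
    reverse-scored before aggregation.
    Returns the mean item score (higher = stronger sense of presence).
    """
    adjusted = [
        (scale_max + 1 - r) if item in reverse_items else r
        for item, r in responses.items()
    ]
    return sum(adjusted) / len(adjusted)

# Hypothetical responses from one participant on a 7-point scale.
answers = {"involvement_1": 6, "involvement_2": 5,
           "naturalness_1": 4, "distraction_1": 2}  # last item negatively keyed
score = pq_score(answers, reverse_items={"distraction_1"})
```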

In the first phase (see Fig. 9.8), where students used tablet computers (for an augmented reality view) and PC screens (for a mixed reality view), the initial results showed that the immersive platform was more effective in promoting the novices' knowledge of the basic and strategic rules of UNO than a traditional web-based approach was for the novices who played and learnt with it. The analysis also indicated that there were statistically significant differences in the levels of presence between the groups, which included a control group, a web group and the experimental group.

In the second phase of the evaluation, the augmented view within the tablet was replaced with an augmented reality headset (MetaOne), and the virtual world view with an immersive Oculus Rift headset used by the remote learners.

Figure 9.9 presents the results from the evaluation of the students' sense of presence between the main groups. This clearly shows that the face-to-face control group had the highest score, followed by the phase 2 MiRTLE+ system, then the phase 1 MiRTLE+ system and finally the web-based group. These results clearly indicate that the more immersive technologies had a positive impact on the students' sense of presence when participating in the learning activities.

In both phases of the evaluation, the overall findings are that learning effectiveness and the students' levels of engagement in the learning activity were significantly higher when using the augmented reality and immersive devices than in the web-based group. This is illustrated in Fig. 9.10, which shows a comparison of the learning effectiveness of the phase 1 and 2 MiRTLE+ platforms with a web-based group and a control group (which met face to face). However, as expected, overall the students who worked together face-to-face performed best. This probably indicates that the augmented and immersive technologies still have some way to go in order to fulfill their potential, but there are clearly some advantages when compared to less immersive web-based interfaces.

Fig. 9.8 MiRTLE+ augmented and virtual worlds

9.4.2.2 Mixed-Reality Co-creative Maker Spaces

We believe the next generation of e-learning can bring with it a fundamental shift towards informal, interactive and business-driven learning, away from the more traditional formal e-learning approaches. This will be accompanied by a host of opportunities and challenges for both the learners themselves and the institutions delivering the courses.

Fig. 9.9 Measuring presence between groups

Fig. 9.10 Comparing learning effectiveness between groups

Many successful examples exist of project-based collaborations built by multi-disciplinary student teams that transcend this formal/informal divide. For example, children in coding clubs (e.g. coderdojo.com) mentor each other using programs like Scratch and have subsequently self-organized to build teams to compete in robotics competitions.

Undergraduates in engineering courses collaborate with business students to create new product prototypes; often these projects emerge from undergraduate work and evolve towards competitive programs, and may then be reinvented as postgraduate projects.

Maker spaces are a type of non-formal learning environment that is most active in technology-orientated work and other more informal creative and innovative tasks. These workplaces have the potential to bring together people with common interests in technology, computer science and digital art to socialize and work. Typically, all of the participants will share a common motivation for creativity, commitment to innovation, openness to collaboration and the promotion of learning by doing (rather than being required to use the space as part of a formal learning task). Here, new and flexible concepts of virtual laboratories and virtual experiments offer an appropriate platform for education and communication. Examples of current Maker space initiatives in Europe include Madrid (e.g. medialab-prado.es), Barcelona (e.g. vailets-hacklab.org, codeclubcat.org), Toulouse (e.g. tetalab.org), Lisbon (e.g. fablablisboa.pt) and Porto (e.g. opolab.com).

In the last decade a number of fully software-based virtual laboratories in different fields have been developed. In most cases they are specific to an educational context and do not offer possibilities for generalization to a platform applicable to a wider class of disciplines (Potkonjak et al., 2016). This has led to the emerging idea of using virtual environments that enable exploration of new challenges and conceptualisation of solutions through collaboration. This Maker space concept includes a range of creative, design and end-user engagement toolkits, as well as models enabling the use of a range of scientific tools, which could be composed and implemented in an appropriate virtual laboratory. For example, at the University of Essex we have developed the 'BReal Lab' (Peña-Rios, Callaghan, Gardner, & Alhaddad, 2015; see Fig. 9.11), which provides an innovative approach to enabling geographically distributed learners to collaborate around physical engineering laboratory activities, such as building robots. This mixed-reality space allows students to share both virtual and physical electronic components in order to jointly create new robotic applications (as distinct from how MiRTLE used audio and video as a mixed-reality medium). It facilitates the combination of software and hardware components, created by different students in different parts of the world, into a single working system. The BReal Lab has been successfully tested with a range of groups of dispersed learners, and the preliminary results (Peña-Rios et al., 2015) showed a positive learner experience, opening up new opportunities for online education.

9.4.2.3 Virtual Observation Lens

There are many issues involved in creating meaningful collaborative learning tasks, including task allocation, communication, evaluation and assessment. Working with peers is not just a matter of gathering people together; successful collaboration depends on who the members are, what the tasks are and how the members are assessed, individually and as a group. Learners usually acquire new knowledge while practicing learning activities. Consequently, rather than looking at the final product as evidence of learning, instructors should assess the whole process.

However, collecting data to trace the students' behaviors in 3D virtual environments is typically more challenging than in face-to-face traditional learning sessions (Chiang & Schallert, 2012).

Several issues can arise in assessing a group of learners in these environments.

First, observing the users' behavior dynamically and collecting evidence of learning can be challenging when using a virtual world; the main problem is the abundance of data generated by the VW. Second, numerous skills, such as communication and negotiation skills, can be gained by students when working on collaborative activities; however, it is often difficult to automatically detect this type of evidence in these spaces. Third, labeling and recognizing the evidence of individual users in real time is particularly difficult where several learners are contributing at the same time, which makes automatically tracking performance difficult (although this can also be true in real-world settings). Some of these issues were highlighted by Gardner and Elliott (2014), in which we found that "learning within technology creates a pedagogical shift that requires teachers to think about measuring outcomes in non-traditional ways".

Fig. 9.11 BREAL mixed-reality maker space

Shute et al. in Chap. 5 examine the role of diagnostic assessment in immersive learning environments by introducing the concept of stealth assessment, and provide an example of stealth assessment within a game environment. Our research (Felemban, Gardner, & Callaghan, 2016) is also exploring stealth assessment mechanisms, but within a multi-user environment. We are aiming to create a novel computational framework that enhances the observation and recording of collaborative learning activities in order to evaluate the individual and group learning outcomes. In particular, the aim is to create a virtual observation model that can map between observing and evaluating students in physical settings and observing and assessing them in virtual worlds. The focus is more on providing methods to identify and classify learning evidence and assess group working than on mapping all these elements to specific learning outcomes. To do so, we use an agent-based approach to support the recording and labeling of learning evidence from virtual activities and thereby simulate the observations normally made by a teacher.

At this time, the BReal Lab (see above) has been extended to provide a number of tools to address these issues. The platform utilizes techniques from multi-agent systems to track users’ actions and predict the learners’ acquired skills and knowledge. It has two different kinds of agents: software agents and natural agents.

The software agents track learners and collect the users' clicks and actions, while the natural agents allow users to perform peer evaluations of each other to judge the quality of their performance. When the students are taking part in learning activities, they will be asked to act as agents and evaluate their peers via a sliding rating scale. A key issue will be to avoid any unnecessary disruption or cognitive overhead whilst the students are doing this. These agents are employed to record both implicit and explicit data that will be analyzed to determine the learning evidence and individual students' performance. The agents will communicate and work together in real time to collect this evidence, which will then be analyzed by a computational model based on the use of fuzzy logic. The intention is that this model will fully describe and map the characteristics of different learning objectives and will provide a mechanism to map between in-world events and individual learning achievements (based on the underlying data) through inferences derived from the fuzzy rule set.
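The division of labor between the two agent kinds can be sketched as follows. This is a minimal illustration, not the platform's actual implementation; the class and method names, and the 0..10 rating range, are assumptions:

```python
from collections import defaultdict

class SoftwareAgent:
    """Implicit evidence: counts in-world actions (clicks, object use) per learner."""
    def __init__(self):
        self.actions = defaultdict(int)

    def record(self, learner, action):
        self.actions[(learner, action)] += 1

class NaturalAgent:
    """Explicit evidence: peer ratings entered on a sliding scale (assumed 0..10)."""
    def __init__(self):
        self.ratings = defaultdict(list)

    def rate(self, rater, target, value):
        self.ratings[target].append(value)

    def mean_rating(self, target):
        scores = self.ratings[target]
        return sum(scores) / len(scores) if scores else None

# Hypothetical session: one implicit action and two peer ratings.
sw, peer = SoftwareAgent(), NaturalAgent()
sw.record("novice_1", "place_card")
peer.rate("expert_1", "novice_1", 7)
peer.rate("expert_2", "novice_1", 9)
```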

The central part of this research is the observation lens model, which determines how to analyze the data that is captured by the agents. In normal practice, in order to observe students in the classroom, educators should consider numerous criteria to gain an insight into the students' learning processes and outcomes. As a starting point, the work of Borich (2016) has been taken to create a virtual observation model (see Fig. 9.12), which can be applied to the collaborative virtual world in order to evaluate the learning activities taking place. The virtual observation model defines different levels of granularity for observing students and recording evidence of collaborative learning, from high to low-level observations.

Starting at the lowest level, the first level of the hierarchy is the Events Detection lens, which simulates an instructor who is watching a collaborative activity from a general viewpoint without looking deeply into the content of what is happening. At this level, the observer monitors the activity by understanding that a sequence of events is occurring and capturing these events without judging the meaning of the actions. It is envisaged that examples of the events that can be observed and collected from the virtual world activities will include avatar event logs, chat logs, object event logs and student self-rating logs.
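A uniform record covering the four log sources named above might look like the sketch below; the field names and event-kind labels are illustrative assumptions:

```python
from dataclasses import dataclass, field
import time

# The four log sources described in the text; labels are illustrative.
EVENT_KINDS = {"avatar", "chat", "object", "self_rating"}

@dataclass
class Event:
    kind: str                  # one of EVENT_KINDS
    learner: str               # avatar / user identifier
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

    def __post_init__(self):
        if self.kind not in EVENT_KINDS:
            raise ValueError(f"unknown event kind: {self.kind}")

# A hypothetical fragment of an event stream captured by this lens.
log = [Event("chat", "novice_1", {"text": "Can I play a red 4?"}),
       Event("object", "expert_1", {"object": "card", "action": "place"})]
```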

The second level is the Learning Interactions lens, which takes a deeper view of the social and environmental interactions. The social interactions are between peers, and the environmental interactions are between students and the virtual world. Evaluating the quality and quantity of collaborations and interactions indicates whether the learners are having valuable interactions and whether they are active learners in their groups. It also determines the amount of sharing and interaction among students. It is possible to infer the quantity and the quality of learners' interactions by creating fuzzy rules based on the teachers' viewpoint. Examples of these rules are:

• The number of a learner's contributions in the chat log during a period compared with other learners.

• The number of a learner’s contributions in using the virtual objects during a period compared with other learners.

• The number of a learner’s contributions in completing tasks during a period compared with other learners.

• The rating scores for a student from other members in a given period.
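One way such a rule can be made fuzzy is to express a learner's share of contributions, relative to the group, as degrees of membership in "low", "medium" and "high". This is a sketch under assumed triangular membership functions; the breakpoints are illustrative, not the values used in the actual model:

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a to peak b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def contribution_level(count, peer_counts):
    """Fuzzify a learner's share of contributions relative to the group."""
    total = count + sum(peer_counts)
    share = count / total if total else 0.0
    return {
        "low":    triangular(share, -0.01, 0.0, 0.35),
        "medium": triangular(share, 0.15, 0.40, 0.65),
        "high":   triangular(share, 0.45, 1.0, 1.01),
    }

# Hypothetical chat-log counts: this learner sent 12 of the group's 24 messages.
levels = contribution_level(12, [4, 3, 5])
```

The same fuzzification applies unchanged to object-use and task-completion counts, so one function covers the first three rules in the list.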

The third level is the Students' Success lens. It represents the actions taken by teachers when they are observing students' success: counting the number of correct answers, the number of right answers that are reinforced or acknowledged, and the number of delayed corrections.

Fig. 9.12 Observation lens

The fourth level is the Performance Outcomes lens. This level simulates the observer tracking students in depth to identify the skills and knowledge that they have acquired from the learning activities. Because our research focuses mainly on educating learners on technical subjects using collaborative learning activities, students can acquire both collaborative and technical skills based on the activity they are doing. Thus, this lens should help the virtual observer assess the knowledge and skills of the individual learners.

The first two layers of the observation lenses, together with the mechanisms to assess how best to translate teacher-generated observation rules into descriptive logical rules, are currently being developed. The plan is to determine which observing rules should be created and when the rules should be activated. This will require the creation of a fuzzy model, which will be validated through a series of expert evaluations. In terms of assessing this approach for collecting learning evidence, the intention is to determine how much data should be collected to yield sufficient evidence of learning, and how much should come from natural agents versus software agents.

In this work a determination will need to be made as to how different types of data can be automatically grouped to demonstrate meaningful evidence of learning.