DOI: 10.4018/978-1-7998-1461-0.ch005
ABSTRACT
Of all the technologies emerging today, augmented reality (AR) stands to be one of, if not the, most transformational in the way we teach our students across the spectrum of age groups and subject matter. The authors propose “best practices” that allow the educator to use AR as a tool that will not only teach the processes of a skill but will also encourage students to use AR as a motivational tool that allows them to discover, explore, and perform work beyond what was previously possible with this revolutionary device. Finally, the authors explore the artificial intelligence (AI) processors behind the technologies that are driving down the cost while driving up the quality of AR, and how this new field of computer science is transforming all facets of society and may end up changing pedagogy more profoundly than anything before it.
INTRODUCTION
Mixed Reality (MR), a cousin of Virtual Reality (VR), is starting to gain a foothold in today’s technological ecosystem. In Penland, Laviers, Bassham and Nnochiri (2018), the use of Virtual Reality for distance learning was demonstrated on a small scale; however, VR, while more immersive, does not integrate with the user’s environment and therefore makes it difficult to teach students with a tangible example of the subject matter. Mixed Reality (MR) is used either as an independent concept or to classify the spectrum of reality technologies, as referenced in the reality-virtuality continuum (1994; 2007). As an independent concept, MR combines the best of both virtual reality and augmented reality. When used to classify the larger scope of reality technologies, it refers to the coverage of all possible variations and compositions of real and virtual objects.
Perceptions and New Realities for the 21st Century Learner
Jennifer (Jenny) L. Penland Shepherd University, USA
Kennard Laviers Sul Ross State University, USA
This type of connectivity has now reached the point where the technology has matured in both quality and cost to a practical level. While this is fantastic, allowing someone to engage in a task totally unfamiliar to them, such as rebuilding a carburetor, as a pedagogical medium we propose a note of caution and suggest prior instruction, with AR as a continued-practice strategy. If AR or Mixed Reality can take students step-by-step through a process, we can make an argument that they will not find it necessary to remember or learn the process because they don’t have to; the AR will do it for them (Callaghan, Gardner & Davies, 2008). In this chapter, we will explore various ways for AR to be used as a pedagogical tool and propose methods to avoid letting the student side-step the learning process.
Over time, it is likely that only a few adaptive learning software packages will prevail. Hopefully, software vendors not controlled by very large universities or companies will choose to share how their algorithms work. We have learned enough about how people learn to know that not everyone learns the same way. Beyond the seven learning styles (visual, aural, verbal, physical, logical, social, and solitary) with which many educators are familiar, modern technologies are enabling researchers to determine there may be more. In fact, one recent book by David Schwartz, Jessica Tsang, and Kristen Blair (2016), “The ABCs of How We Learn”, identifies 26 unique learning styles. As datasets of learners’ activities increase and algorithms improve their abilities to discern different styles, this higher number will likely increase.
Sophisticated software increases the potential to tease out the most effective way to help each person learn. The weakness of today’s educational system is that we often teach to the average, excluding learners on the upper and lower edges with a Bell Curve focus (Herrnstein & Murray, 1994). A learner who conforms survives, while non-conformers do not. As colleges, universities, and corporations develop and refine stronger adaptive learning algorithms, we hope they avoid the bias toward conformity.
As we embrace adaptive learning software, we have to make sure that we choose learning algorithms that work to the learners’ strengths instead of forcing them to adapt to a norm. In the end, we lose if we are all coached to think alike. One of the surest signs that a technology trigger is starting its roller-coaster ride through Gartner’s “Hype Cycle” of innovation is when the name we all call that trigger becomes part of the public lexicon (2014).
The Technology Explained
Augmented Reality (AR) and Mixed Reality (MR) are often used synonymously; however, some separate the two terms to mean slightly different things. We choose to use the two as the same. While it is counter-intuitive to envision, AR is actually much more sophisticated and difficult to implement than VR. With VR, the hardware and software do not need to keep track of the real world the user is in, whereas in AR the system not only needs to track the real world, it also needs to understand what it observes in the real world and translate that to the software so the simulation can be matched with the world. Until recently, this process was simply not fast enough: there was a large delay before the simulation updated to reflect the movements of the user, and the simulation would often get out of sync with the world, causing an uncomfortable, jarring experience for the user. Part of the solution to this problem is giving the computer an immediate means of understanding items and features in the real world that can be used to track movements and places of interest pertaining to the application in use (Penland and Laviers, 2018). To accomplish this, developers turned to artificial intelligence and to hardware implementations of complex algorithms that would otherwise take too much time.
Artificial intelligence (AI) has been around since the early days of computers, but until recently it was used only as a mildly helpful tool to perform simple tasks such as voice and image recognition. Often probability theory would work better for these tasks than the more unusual neural networks, which are truer to form as an AI technique. Neural networks simulate how the brain processes information but have the problem of being more like a black box. In other words, they learn how to recognize patterns, but we can’t really examine what took place inside the network to understand how it learned what it did, and therefore researchers and developers have been hesitant to use them in commercial applications. Neural networks have been largely overlooked since their introduction in 1943, when Walter Pitts and Warren McCulloch modeled the human brain in a very simple sense with a few neurons. The problem with those early neural networks was very simple: computers were simply too slow.
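The Pitts-McCulloch idea can be illustrated with a minimal sketch of a single threshold neuron: it fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold. The weights and thresholds below are illustrative choices, not values from the original paper.

```python
def mp_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, one unit computes simple logic gates.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

A single unit of this kind is extremely limited; the power (and the black-box quality) described above only emerges when many such units are wired together in layers.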
A neural network can be thought of as an input array (think in terms of the pixels of an image), an output array (think in terms of the name of a face in a picture), and some hidden (middle) layers in between the input layer and the output layer. The more middle layers between the input and output, the better the AI does its job and the more abstract the concepts the neural network can learn. The problem is that every added layer substantially increases the complexity of the processing. In the computer world this is a very bad thing, and for many years it was a huge obstacle to overcome; therefore, neural networks were given very little attention. This was true until another technology grew from the gaming industry: video cards and Graphics Processing Units (GPUs).
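The layered structure described above can be sketched in a few lines of Python. This toy network is untrained (its weights are random), so the point is only the shape of the computation: an input array feeding a hidden layer feeding an output array. The sizes and activation function are illustrative choices.

```python
import math
import random

random.seed(0)  # make the illustrative run repeatable

def layer(inputs, weights):
    """One fully connected layer with a sigmoid activation."""
    return [1 / (1 + math.exp(-sum(x * w for x, w in zip(inputs, row))))
            for row in weights]

inputs = [0.5, 0.2, 0.9]                                       # "input array"
w_hidden = [[random.uniform(-1, 1) for _ in inputs] for _ in range(4)]
hidden = layer(inputs, w_hidden)                               # hidden (middle) layer
w_out = [[random.uniform(-1, 1) for _ in hidden] for _ in range(2)]
output = layer(hidden, w_out)                                  # "output array"

print(len(hidden), len(output))  # 4 2
```

Every value in `hidden` must be recomputed before `output` can be, and each layer multiplies inputs against a full weight matrix; stacking many such layers is exactly the workload that made deep networks impractical on sequential processors.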
GPUs were designed from the ground up to perform fairly simple, low-resolution computational tasks with a high degree of parallelism. That is, instead of one or a few very powerful, high-resolution processor cores, a GPU has hundreds or even thousands of cores that can all work simultaneously. This made deep neural networks practical, and soon enough this technology allowed researchers to beat the world’s best Go player for the first time, a feat some scientists thought was impossible. Soon after, companies across the world started a race to implement AI in their core business practices (Abramovich & Horowitz, 2018), and country after country started programs to further research in AI, starting something of an AI space race among nations (Gershgorn, 2018).
These advances have led companies such as Microsoft, Apple, Intel, and others to start developing AI processors that can run these parallel computations with very little power and, more importantly, in real time. This technology has finally opened up the world of Augmented and Mixed Reality to consumers, businesses, and educational institutions.
Apple provides the best example of this use of dedicated AR and AI processors. In order to bring AR to their devices, as mentioned above, they developed a whole new chip that they call the neural engine. This chip is similar to the GPU mentioned above, but it is even more specifically designed to process the neurons of a neural network in parallel. It allows very complex computation on the device with very low power usage, making it relatively practical to run Augmented Reality tasks on the power-constrained, computationally limited phones and tablets. The chip is used for the facial recognition component as well as other features on the phones. However, it is important to note that this is only part of the story. In addition to the hardware, Apple also introduced ARKit, a framework software developers can use to easily build AR-based applications. Arguably, it is this spirit of producing more and more tools, frameworks, and even programming languages that is driving the explosion of AR, VR, and MR, in addition to driving the explosion of Artificial Intelligence in general (Gershgorn, 2017).
An actual example of a virtual world used for education is NASA’s simulator. This very complex simulator provides space simulation scenarios and is used for training next-generation astronauts (BBC, 2008). Other simulators (many based on virtual worlds from computer games) are already in common use for training people in high-risk or stressful occupations (e.g., surgeons, soldiers). By designing their own computer games, young people can acquire Computer Science skills.
AR Devices Available
Microsoft announced the HoloLens in 2016, and it was truly amazing in the features it provided (Swartz, 2016). The HoloLens 2 has since been announced, doubling the field of view (Pickersgill, 2019). The HoloLens 2 has a price point higher than many consumers or casual gamers may be interested in, which suggests that Microsoft sees this entry as a viable tool for commercial or educational use.
Why Does Traditional Learning Suffer vs. VR, AR, or Mixed Reality?
Many higher education institutions suffer from rigid instruction systems as well as a deficiency of personal interaction between students and instructors. The assessment methods applied by those institutions are often outdated and cannot measure the learning goals adequately; this gives the student very little opportunity to utilize their knowledge to solve real-life problems (Penland & Laviers, 2018). Students from privileged educational backgrounds, as well as students from disadvantaged educational backgrounds, usually enter higher educational institutions with differences in the skills and knowledge required for studying different disciplines (Penland & Laviers, 2018).
If a learning system lacks schedule flexibility, instructors’ availability, and rich interaction, it cannot be regarded as a VR, AR, or mixed-reality system. VR/AR learning can be regarded as mixed when the learners have frequent access to their instructors, both online and physically (Penland & Laviers, 2018). Mixed Reality (MR) is any method of learning that uses technology to bridge the gap between students and instructors. The quality, frequency, and quantity of communication between instructors and students alone are not enough in VR, AR, or Mixed Realities; what matters is refining the learning experience of the student (Penland & Laviers, 2018).
The most common problem faced by higher education institutions in adopting the Mixed Reality approach is the inadequate computer skills of the instructors. Some of the major challenges hindering the application of VR, AR, or MR technology in higher education include students’ restricted access to technological resources and a lack of innovative methods from instructors. Integrating online materials via virtual environments with the traditional classroom has a positive effect on students’ performance, enhances a flexible learning atmosphere, and ensures student autonomy (Sheehan, 2017).
Machine Learning
Machine learning is an artificial intelligence (AI) discipline geared toward the technological development of human knowledge (Hurwitz & Kirsch, 2019). AI allows computers to handle new situations via analysis, self-training, observation, and experience, and is used in anti-virus and anti-spam software to improve detection of malicious software, spyware, adware, etc., on your devices (Hurwitz & Kirsch, 2019). AI is also changing the way vehicle systems are engineered and built, and is being used extensively in self-driving cars.
Why is machine learning important? Resurging interest in machine learning is due to the same factors that have made data mining and Bayesian analysis more popular than ever: growing volumes and varieties of available data, computational processing that is cheaper and more powerful, and affordable data storage. All of these things make it possible to quickly and automatically produce models that can analyze bigger, more complex data and deliver faster (such as from micro to macro or the universal placement theory), more accurate results, even on a very large scale. And by building precise models, an organization has a better chance of identifying profitable opportunities or avoiding unknown risks (Hurwitz & Kirsch, 2019).
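What “automatically producing a model from data” means can be illustrated with a minimal sketch: fitting a line y = a·x + b to synthetic points by gradient descent. The data, learning rate, and iteration count are all illustrative choices; real systems use far richer models and datasets.

```python
# Synthetic training data drawn from the known rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

a, b = 0.0, 0.0   # model parameters, initially uninformed
lr = 0.01         # learning rate (illustrative choice)

for _ in range(2000):
    # Gradient of the mean squared error with respect to a and b.
    ga = sum(2 * (a * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (a * x + b - y) for x, y in data) / len(data)
    a -= lr * ga
    b -= lr * gb

print(round(a, 2), round(b, 2))  # approximately 2 and 1
```

The program is never told the rule y = 2x + 1; it recovers it purely from examples, which is the essence of the model-building described above.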
SIFT Features
Matching features across different images is a common problem in computer vision. When all images are similar in nature (same scale, orientation, etc.), simple corner detectors can work. But when you have images at different scales and rotations, such as a mountain ridge or an aspen grove, one needs to use the Scale Invariant Feature Transform (Sinha, 2010). Why care about SIFT?
SIFT is not just scale invariant. You can change the following and still get good results: scale, rotation, illumination, and viewpoint.
SIFT is quite an involved algorithm; below is an outline of what happens in SIFT.
1. Constructing a scale space: This is the initial preparation. You create internal representations of the original image to ensure scale invariance. This is done by generating a “scale space”.
2. LoG approximation: The Laplacian of Gaussian is great for finding interesting points (or key points) in an image, but it is expensive to compute, so it is approximated with a Difference of Gaussian.
3. Finding key points: With the super-fast approximation, we now try to find key points. These are the maxima and minima in the Difference of Gaussian images calculated in step 2.
4. Getting rid of bad key points: Edges and low-contrast regions make bad key points. Eliminating them makes the algorithm efficient and robust. A technique similar to the Harris Corner Detector is used here.
5. Assigning an orientation to the key points: An orientation is calculated for each key point. Any further calculations are done relative to this orientation, which effectively cancels out the effect of orientation, making SIFT rotation invariant.
6. Generating SIFT features: Finally, with scale and rotation invariance in place, one more representation is generated. This helps uniquely identify features, so you can easily find the feature you are looking for (Sinha, 2010).
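The core of steps 1-3 can be sketched in one dimension without the full SIFT machinery: blur a signal at two scales, subtract the results to form a Difference of Gaussian (DoG), and mark local extrema of the DoG as candidate key points. This is only an illustration; real SIFT operates in 2-D across an entire pyramid of scales.

```python
import math

def gaussian_blur(signal, sigma):
    """Blur a 1-D signal with a truncated, normalized Gaussian kernel."""
    radius = int(3 * sigma)
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)  # clamp at edges
            acc += k * signal[idx]
        out.append(acc)
    return out

signal = [0.0] * 20 + [1.0] * 5 + [0.0] * 20   # one bright "blob" at indices 20-24

# Difference of Gaussian: the same signal at two scales, subtracted.
dog = [a - b for a, b in zip(gaussian_blur(signal, 1.0),
                             gaussian_blur(signal, 2.0))]

# Candidate key points: local maxima and minima of the DoG response.
keypoints = [i for i in range(1, len(dog) - 1)
             if (dog[i] > dog[i - 1] and dog[i] > dog[i + 1])
             or (dog[i] < dog[i - 1] and dog[i] < dog[i + 1])]
print(keypoints)
```

The extrema cluster around the blob and its edges, exactly where an interest-point detector should respond; the remaining SIFT steps then filter, orient, and describe such points.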
The NEURON Simulation Environment
The NEURON simulation environment is used in laboratories and classrooms around the world for building and using computational models of networks of neurons (Hugenard & McCormick, 1994). NEURON had its beginnings in the laboratory of John W. Moore at Duke University, where Carnevale and Hines started to develop simulation software for neuroscience research. It has demonstrated benefits and been guided by feedback from the growing number of collaborative groups of neuroscientists who have used it to incorporate empirically based modeling into their research strategies (Carnevale & Hines, 2006).
NEURON’s computational engine employs special algorithms that achieve high efficiency by exploiting the structure of the equations that describe neuronal properties. It has functions that are tailored for conveniently controlling simulations and presenting the results of real neurophysiological problems graphically in ways that are quickly and intuitively grasped (Carnevale & Hines, 2006). Instead of forcing users to reformulate their conceptual models to fit the requirements of a general-purpose simulator, NEURON is designed to allow them to deal directly with familiar neuroscience concepts. Consequently, users can think in terms of the biophysical properties of membrane and cytoplasm, the branched architecture of neurons, and the effects of synaptic communication between cells (Carnevale & Hines, 2006).
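NEURON itself solves detailed membrane equations; as a hedged stand-in, the sketch below integrates a simple leaky integrate-and-fire neuron with Euler’s method to show the kind of membrane dynamics such simulators compute. All parameter values are illustrative, not taken from NEURON or any published model.

```python
tau = 10.0       # membrane time constant (ms)
v_rest = -65.0   # resting potential (mV)
v_th = -50.0     # spike threshold (mV)
v_reset = -70.0  # reset potential after a spike (mV)
dt = 0.1         # integration time step (ms)
current = 20.0   # constant injected current (arbitrary units)

v = v_rest
spikes = []
for step in range(2000):                  # simulate 200 ms
    dv = (-(v - v_rest) + current) / tau  # leaky membrane equation
    v += dv * dt                          # Euler integration step
    if v >= v_th:                         # threshold crossing -> spike
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in 200 ms")
```

With a constant suprathreshold current, the membrane charges toward threshold, fires, resets, and repeats, producing a regular spike train; varying `current` changes the firing rate, the simplest version of the input-output behavior neuroscientists study with tools like NEURON.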
METHODOLOGY
Background
One thing most parents realize early in their experience raising their children (or at least by their second child) is that the more they do for their kids and the less the child has to think for himself or herself, the harder it becomes for them to solve their own problems as they grow older. A study conducted by researchers at the U.S. Air Force Research Lab (AFRL) examined automation levels (Calhoun, 2013), whereby they allowed artificially intelligent agents to help UAV pilots fly an increasing number of aircraft.
In this study they found that the greater the automation level, the greater the number of problems, some potentially catastrophic, that slipped past the pilot unobserved. In other words, the pilot started to trust the AI and simply stopped paying attention. We propose a similar study, detailed below, to identify whether AR/MR teaching techniques lead to a similar detachment from learning, and we propose some ideas and thoughts about how we can reduce this effect to allow the learner to truly realize the benefit of this amazing new medium for instructional delivery.
Our study will use qualitative methods to measure and analyze the potential relationship between student engagement and meaningful learning (Callaghan, Gardner, Horan & Scott, 2008). Mixed Reality Teaching & Learning Environment (MiRTLE) enables teachers and students participating in real-time mixed and online classes to interact with avatar representations of each other. The long-term hypothesis that will be investigated is that avatar representations of teachers and students will help create a sense of shared presence, engendering a sense of community and improving student engagement in online lessons (Callaghan, Gardner, Horan & Scott, 2008).