

2.1 Intra-Hand Inputs

2.1.1 Sensing Techniques

There are several ways of sensing intra-hand inputs, drawing on diverse sensors suited to different usage environments. This section examines these sensing techniques and their characteristics. For

“thumb-to-finger” or “finger-to-thumb” touches, one straightforward way of detecting these inputs is to turn the body into a touchable surface by placing a capacitive touch surface on the finger [44, 57], palm [53], ring [56], or nail [14]. These input surfaces can detect small motions delicately and accurately [44, 57], while passive haptic feedback [43, 44] makes the inputs easy to understand and perform. Leveraging these advantages, Xu et al. [57] presented a system that carries out text entry via a capacitive touch sensor pad on the index fingertip. Similarly, Kao et al. [14] presented a nail-mounted gestural input surface that distinguishes on-nail finger swipe and tap gestures with a capacitive touch sensor array. Although these works demonstrated that touch-sensitive body regions can provide quick and accurate input with a flexible printed circuit that fits the curvature of the skin, directly attaching a touchpad to the skin remains difficult [44] and may cause unwanted touches during daily tasks [74]. To overcome these issues, researchers are developing thinner and more stretchable form factors, similar to a tattoo [53, 75].
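As a concrete illustration, the raw output of such a capacitive pad can be reduced to tap and swipe events with a few lines of logic. This is a minimal sketch; the normalized coordinates and thresholds are assumptions, not values taken from the cited systems:

```python
import math

# Illustrative threshold: total travel (in normalized pad units, 0..1)
# below which a contact counts as a tap rather than a swipe.
TAP_MAX_TRAVEL = 0.1

def classify_touch(trajectory):
    """Classify one touch-down..touch-up trajectory on a capacitive pad.

    trajectory: list of (x, y) samples in normalized pad coordinates.
    Returns "tap" or one of "swipe_left/right/up/down".
    """
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < TAP_MAX_TRAVEL:
        return "tap"
    # Dominant axis decides the swipe direction (y grows downward).
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"
```

A production recognizer would also consider timing and curvature, but the same displacement-based core underlies simple tap/swipe discrimination.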

Alternatively, motion data can be used to detect hand and finger movements. This type of input technique is ready to use and simple to implement with the built-in motion sensor of a typical smartwatch. For example, Wen et al. [76] presented Serendipity, a technique for recognizing unremarkable and fine-motor finger gestures, such as pinching, rubbing, tapping, squeezing, and waving fingers, using off-the-shelf smartwatches. Their supervised machine learning approach, relying on high-fidelity sensor data, achieved an average F1-score of 87%.
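The pipeline behind such motion-based recognizers can be sketched as windowed feature extraction followed by a classifier. The snippet below uses simple per-axis statistics and a nearest-centroid rule as a stand-in for the supervised model in [76]; the features and gesture labels are illustrative assumptions:

```python
import math
from statistics import mean, pstdev

def features(window):
    """Per-axis mean and standard deviation of a window of
    3-axis accelerometer samples [(ax, ay, az), ...]."""
    axes = list(zip(*window))  # regroup samples into per-axis sequences
    return [f for axis in axes for f in (mean(axis), pstdev(axis))]

def nearest_centroid(feat, centroids):
    """Return the gesture whose centroid feature vector is closest.

    centroids: {gesture_name: feature_vector} built from training windows.
    """
    return min(centroids, key=lambda g: math.dist(feat, centroids[g]))
```

Real systems use richer features (spectral energy, correlation between axes) and stronger classifiers, but the window-features-classifier structure is the same.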

Another way of detecting hand actions is to wear a glove with multiple sensors. The first wired data glove, the Sayre Glove released in 1976, sensed the bending of fingers through flexible tubes with a photocell at one end and a light source at the other to measure the amount of light passing through [77]. After this kickoff project, many researchers have added sensors to gloves to detect hand actions and poses more exactly. For example, the DataGlove [77] can monitor the position and motion of each of ten finger joints in six DoFs using low-frequency magnetic fields, and the Power Glove [77] (the first commercially available glove for entertainment) adds resistive-ink flex sensors to capture the bending of each finger. In more recent work, Lee et al. [78] and Jiang et al. [79] presented gloves that are sensitive to force exertion, with multiple force sensors in each finger segment, for keyboard applications that assign multiple letters to each key. Although this form factor is easy to wear and detects intra-hand actions, it is still difficult to wear continuously in daily life because the glove becomes oily and sweaty [55].
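For resistive flex sensors like those in the Power Glove, the usual readout is a voltage divider sampled by an ADC, with the raw reading mapped to a bend angle via per-finger calibration. A minimal sketch, assuming a linear model between calibrated flat and fully-bent readings (the calibration values and linearity are assumptions):

```python
def bend_angle(adc, adc_flat, adc_bent, max_angle=90.0):
    """Map a flex-sensor ADC reading to a finger bend angle in degrees.

    adc_flat / adc_bent are per-finger calibration readings captured with
    the finger straight and fully bent; output is clamped to [0, max_angle].
    """
    t = (adc - adc_flat) / (adc_bent - adc_flat)  # 0.0 = flat, 1.0 = bent
    return max(0.0, min(max_angle, t * max_angle))
```

In practice resistive sensors drift and respond nonlinearly, so calibration is typically repeated per session and the linear map replaced with a fitted curve.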

Vision is a common approach in most current VR and AR devices. Thanks to high-resolution cameras and sophisticated processors, these devices can detect hand poses and track their movement in real time. This form of input requires a camera mounted on the head [56], shoulder [80, 81], chest [31], wrist [71], or a finger [40, 82]. These vision-based inputs also enable diverse forms of intra-hand input, such as taps [80], poses [40, 51], and gestures [31], including free-hand inputs [83] in 3D spaces, with relatively high accuracy [40, 71, 80].

However, occluded hands cannot be captured [80, 84], and lighting conditions can be critical for RGB-based vision systems [85].
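Many vision-based trackers expose per-frame hand landmarks, from which intra-hand events such as a pinch can be derived geometrically. A minimal sketch, assuming the common 21-point hand-landmark layout; the landmark indices and the distance ratio are assumptions:

```python
import math

# Indices in the common 21-landmark hand layout (assumed, not universal).
WRIST, THUMB_TIP, INDEX_TIP, MIDDLE_MCP = 0, 4, 8, 9

def is_pinching(landmarks, ratio=0.25):
    """Detect a thumb-index pinch from 21 (x, y, z) hand landmarks.

    The tip distance is compared against the wrist-to-middle-knuckle
    palm length, making the test roughly invariant to the hand's
    distance from the camera.
    """
    tip_gap = math.dist(landmarks[THUMB_TIP], landmarks[INDEX_TIP])
    palm = math.dist(landmarks[WRIST], landmarks[MIDDLE_MCP])
    return tip_gap < ratio * palm
```

The same landmark geometry extends to other poses, e.g. comparing fingertip heights for extended-finger counting.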

In addition to these conventional approaches, many researchers have focused on bio-sensing techniques that utilize bio-acoustic or electrical signals generated while performing gestures. For example, electromyography (EMG) can detect the electrical signals of muscle activation. Saponas et al. [saponas2008demonstrating, saponas2009enabling] and Haque et al. [86] presented systems that detect EMG signals and translate them into input commands, such as hand-pointing, clicking, and pinching gestures. Bio-acoustics is another popular approach. For example, Amento et al. [87] and Zhang et al. [66, 67] presented techniques that detect sounds traveling by bone conduction through the hand while performing tapping, rubbing, flicking, and unistroke thumb gestures. Similarly, Laput et al. [52] proposed a system that captures bio-acoustic signals with an accelerometer at a sampling rate of 4 kHz and recognizes flick, clap, scratch, and tap gestures. These input techniques can provide always-available and diverse input sets from wrist-mounted high-fidelity sensors. While they are promising for future input techniques, challenges remain, such as collecting a large set of background data and improving machine learning algorithms so that the system is robust to false-positive inputs in diverse environments [52].
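A typical first step in such bio-acoustic pipelines is converting each sensor window into spectral features before classification. The sketch below computes band energies with a naive DFT; the band edges and window length are assumptions, and a real system would use an FFT for speed:

```python
import cmath
import math

FS = 4000  # Hz, accelerometer sampling rate as in Laput et al. [52]

def band_energies(window, bands=((0, 500), (500, 1000), (1000, 2000))):
    """Summed spectral energy of one sample window in each frequency band.

    window: list of accelerometer samples; bands: (low_hz, high_hz) pairs.
    Uses a naive O(n^2) DFT for clarity, keeping only positive frequencies.
    """
    n = len(window)
    spectrum = [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                        for i, x in enumerate(window)))
                for k in range(n // 2)]
    hz_per_bin = FS / n
    return [sum(m * m for k, m in enumerate(spectrum)
                if lo <= k * hz_per_bin < hi)
            for lo, hi in bands]
```

Feature vectors like these (often augmented with temporal statistics) are what the cited systems feed into their gesture classifiers.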

This chapter examined these diverse sensing techniques in terms of their benefits and limitations. When designing a new input for wearable devices, one must carefully select sensing techniques considering their trade-offs in sensing fidelity, usage environment, action set, and body location. Moreover, a design may combine multiple sensors. For example, Ens et al. [56] demonstrated how combining hand tracking by vision sensing with touch tracking by the touch sensor on a ring device can support high-precision, low-fatigue interaction, with the two sensing techniques complementing each other. In this manner, future input system designs should consider combinations of sensing techniques to better support the gesture sensing algorithm and reduce errors [86].