3.10 Appendix: Proof of the QMF property of the WKP basis
Let us write the weighted Krawtchouk polynomial in (3.49) as
\bar{K}_n(x; p) = \sqrt{\frac{w(x; p)}{\rho(n; p)}}\, K_n(x; p) \qquad (3.88)
The objective is to find the frequency domain representation of the term w (x; p) Kn(x; p) in the above equation.
Let us assume \psi_n(x) = w(x; p)\, K_n(x; p) and z = e^{j\omega} in (3.73). Accordingly, we get the Z-transform of \psi_n(x) as
\psi_n(z) = \sum_{x=0}^{N} w(x; p)\, K_n(x; p)\, z^{-x} \qquad (3.89)
The Rodrigues-type formula associated with the Krawtchouk polynomial can be written as
\binom{N}{x}\left(\frac{p}{q}\right)^{x} K_n(x; p) = \Delta^{n}\left[\binom{N-n}{x}\left(\frac{p}{q}\right)^{x}\right] \qquad (3.90)
Therefore,
\psi_n(z) = \sum_{x=0}^{N} \Delta^{n}\left[\binom{N-n}{x}\left(\frac{p}{q}\, z^{-1}\right)^{x}\right] \qquad (3.91)
Using the properties of the Z-transform, we can obtain the solution
\psi_n(z) = \left(1 - \frac{p}{q}\, z^{-1}\right)^{n} \left(1 + \frac{p}{q}\, z^{-1}\right)^{N-n} \qquad (3.92)
From (3.92), we can infer
\psi_n(z) = \psi_{N-n}(-z) \qquad (3.93)
Substituting z = e^{j\omega} in the above equation gives
\psi_n\left(e^{j\omega}\right) = \psi_{N-n}\left(e^{j(\omega+\pi)}\right) \qquad (3.94)
Replacing \omega by \omega - \frac{\pi}{2} in (3.94), we obtain [172]
\psi_n\left(e^{j(\omega-\pi/2)}\right) = \psi_{N-n}\left(e^{j(\omega+\pi/2)}\right) \qquad (3.95)
Therefore, the quadrature mirror property of \psi_n\left(e^{j\omega}\right) and \psi_{N-n}\left(e^{j\omega}\right) about \omega = \frac{\pi}{2} is proved.
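As a sanity check, the closed form (3.92) can be evaluated numerically. The sketch below is illustrative (the parameter choices N = 8, p = 0.5 and the function name `psi` are ours, not from the thesis): it evaluates \psi_n(z) on the unit circle and confirms both the relation (3.93) and the mirror symmetry (3.95) about \omega = \pi/2.

```python
import cmath

def psi(n, z, N=8, p=0.5):
    """Closed-form Z-transform of the weighted Krawtchouk sequence, eq. (3.92)."""
    a = p / (1 - p)  # a = p/q with q = 1 - p
    return (1 - a / z) ** n * (1 + a / z) ** (N - n)

N = 8
for n in range(N + 1):
    for k in range(16):
        w = 2 * cmath.pi * k / 16
        z = cmath.exp(1j * w)
        # eq. (3.93): psi_n(z) = psi_{N-n}(-z)
        assert abs(psi(n, z, N) - psi(N - n, -z, N)) < 1e-9
        # eq. (3.95): mirror symmetry about omega = pi/2
        lhs = psi(n, cmath.exp(1j * (w - cmath.pi / 2)), N)
        rhs = psi(N - n, cmath.exp(1j * (w + cmath.pi / 2)), N)
        assert abs(lhs - rhs) < 1e-9
print("QMF mirror property verified")
```

The second assertion holds because e^{j(\omega+\pi/2)} = -e^{j(\omega-\pi/2)}, so (3.95) is simply (3.93) evaluated at z = e^{j(\omega-\pi/2)}.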
4 Robust Hand Posture Recognition Using Geometry-based Normalisation and DOM-based Shape Description
Contents
4.1 Introduction
4.2 Hand posture acquisition and database development
4.3 System Implementation
4.4 Experimental Studies and Results
4.5 Summary
The empirical study in Chapter 3 has shown that the DOMs are efficient descriptors for representing shapes of different structural complexity, and the analysis of the experiments on the MPEG-7 shape database suggests the DOMs as potential features for shape classification. This encourages employing the DOMs as features for shape-based hand posture description and classification.
The objective of this work is to propose a hand posture recognition technique based on the DOMs and to experimentally validate the efficiency of the DOMs as hand shape descriptors. This work also presents a rule-based method for automatically extracting the hand from the forearm region. The technique developed in this work provides a framework for hand posture based interactive tabletop applications. This chapter presents the proposed method and the experimental studies that comparatively validate the DOMs as hand posture features.
4.1 Introduction
Vision based interactive tabletops are surface computing systems that create a virtual environment for users based on hand posture interactions. They perform the operations of conventional input devices such as the mouse and the keyboard.
These tabletops are typically constructed using a single desktop computer linked to a projector and a camera.
The projector is rear or front-mounted to display the content on the surface of the table. The camera is used to capture the hand postures performed on the tabletop surface. The acquired images are processed by the hand posture recognition system in order to detect the hand posture and interpret the underlying information.
The retrieved information is passed to the computer as input commands for interaction. The position of the camera and the projector units vary depending upon the type of application. Similarly, the projection and the acquisition surfaces are either different or coupled together depending on the ease of the application. The schematic representation of a typical vision based tabletop interface system using a front-projected display is shown in Figure 4.1.
The hand posture recognition system developed in this work is intended for vision based tabletop interactions and hence, the experimental setup employed is designed to be in accord with the configuration of hand posture based tabletop interfaces. The proposed system is a monocular vision based system using shape based methodologies for interpreting the hand postures. The acquired hand posture images are modeled using their binary silhouettes. The system addresses the three major issues in hand shape interpretation. They are:
• segmentation of the forearm and extraction of hand region.
Figure 4.1: Illustration of a tabletop user interface setup using a top-mounted camera for natural human-computer interaction through hand postures.
• orientation normalization of the hand postures.
• accurate recognition of postures in the presence of view-angle and the user variations.
The identification of the hand region involves separating the hand from the forearm. The lack of posture information in the forearm makes it redundant, and its presence increases the data size. In most of the previous works, the forearm region is excluded either by making the gesturers wear full-arm clothing or by limiting the extent of the forearm in the scene during acquisition. However, such restrictions are not suitable in real-time applications. The orientation of the acquired posture changes with the angle made by the gesturer with respect to the camera and vice versa.
This research work proposes novel methods based on the anthropometric measures to automatically identify the hand and its constituent regions. The geometry of the posture is characterized in terms of the abducted fingers. This posture geometry is used to normalize for the orientation changes. These proposed normalization techniques are robust to similarity and perspective distortions. The main contributions reported in this chapter are:
(i) A rule based technique using the anthropometric measures of the hand is devised to identify the forearm and the hand regions.
(ii) A rotation normalization method based on the protruded/abducted fingers and the longest axis of the hand is devised.
(iii) A static hand posture database consisting of 10 posture classes and 4,230 samples is constructed.
(iv) The DOMs are introduced as user- and view-invariant hand posture descriptors. In comparison to the DOMs, some of the state-of-the-art shape descriptors, namely the Fourier descriptors, the geometric moments, the Zernike moments, the Gabor wavelets and the PCA descriptors, are also studied for user and view invariant hand posture recognition.
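Contribution (ii) normalises orientation using the protruded fingers and the longest axis of the hand. As context, a widely used moment-based estimate of a silhouette's principal (longest) axis can be sketched as follows; this is a generic baseline, not the finger-based rule proposed in this chapter, and the function name and synthetic mask are purely illustrative.

```python
import numpy as np

def principal_axis_angle(silhouette):
    """Angle (radians) of the principal axis of a binary mask,
    estimated from the second-order central moments."""
    ys, xs = np.nonzero(silhouette)
    x0, y0 = xs.mean(), ys.mean()
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# Tiny synthetic check: a 3-pixel-wide diagonal stripe should yield ~45 degrees.
mask = np.zeros((64, 64), dtype=bool)
for i in range(10, 50):
    mask[i, i - 1:i + 2] = True
angle = np.degrees(principal_axis_angle(mask))
```

A silhouette could then be rotated by the negative of this angle to bring its longest axis to a canonical orientation; a finger-based geometric rule such as that of contribution (ii) would be needed to resolve the remaining 180-degree ambiguity.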
The proposed posture recognition framework is explained by dividing the system development into three sections, namely,
1. Hand posture acquisition and database development
2. System implementation
3. Experimental studies and results
The posture acquisition and the database development section explains the experimental setup used for acquir- ing the hand postures and the construction of the hand posture database required for the experimental studies.
The section also includes a quantitative analysis of the variations in the shape of the hand postures in order to validate the database for usability in the experimental studies on user and view independent hand posture description. The section on system implementation presents the procedures and the techniques involved in realising the hand posture recognition system. The section on experimental studies and the results discusses the experiments performed to comparatively evaluate the efficiency of the proposed system with respect to the DOMs and the other shape features. The results of user-invariant and view-invariant recognition are presented independently.