International Journal of Recent Advances in Engineering & Technology (IJRAET)
ISSN (Online): 2347-2812, Volume-1, Issue-2, 2013
EFFICIENT REPRESENTATION AND ACCURATE RECOGNITION OF HUMAN FACES IN VARIED ILLUMINATION CONDITIONS
1 Thippeswamy G, 2 A. Durga Bhavani
1Department of Computer Science, BMS Institute of Technology, Bangalore, Karnataka, India.
2Department of Computer Science, BMS College for Women, Bangalore, Karnataka, India.
Email : [email protected]
Abstract - In this research paper, we propose a novel face recognition model capable of working in varied illumination conditions. The proposed model is a cascade model based on the concepts of Retinex theory and dimensionality reduction. The Retinex and color constancy approach is used as a pre-processing tool to overcome the problem of illumination, followed by Principal Component Analysis (PCA) for compact representation of the face image data. The classification accuracy of the proposed model has been tested using the illumination-variant YALE face database. The experimental results demonstrate that the performance of the proposed system is superior to that of the existing classical Eigenface model.
I. INTRODUCTION
During the past decade, face recognition has drawn significant attention from the perspective of different applications. A general statement of the face recognition problem can be formulated as follows. Given still or video images of a scene, the problem is to identify or verify one or more persons in the scene using a stored database of faces.
The environment surrounding a face recognition application can cover a wide spectrum, from a well-controlled environment to an uncontrolled one. In a controlled environment, frontal photographs of human faces are taken against a uniform background with identical poses among the participants. In an uncontrolled environment, human faces must be recognized at different scales, positions, luminances and orientations, and in the presence of facial hair, makeup, turbans, etc.
The challenges associated with face recognition can be attributed to many factors. The images of a face may vary due to the relative camera-face pose (frontal, tilted, upside down). Facial features such as beards, mustaches and glasses may or may not be present, and there is a great deal of variability among these components, including shape, color and size. The appearance may change due to varying facial expression and emotion. Faces may be partially occluded by other objects or by other faces. Face images also vary with rotation about the camera's optical axis.
Several researchers around the world are trying to devise models capable of handling the problems highlighted above. One broad classification of the proposed face recognition algorithms is based on the representation chosen, namely appearance-based and feature-based. The former uses holistic texture features and is applied either to the whole face or to specific regions of a face image, whereas the latter uses facial features such as the mouth, eyes, brows and cheeks, and the geometric relationships between them.
Some of the well known face recognition algorithms based on appearance are Principal Component Analysis (PCA) [6], Independent Component Analysis (ICA) [7], Linear Discriminant Analysis (LDA) [8] and Probabilistic Neural Network Analysis (PNNA) [9].
A number of illumination-invariant face recognition approaches have been proposed in past years. Existing approaches addressing the illumination variation problem fall into two categories, viz., passive approaches and active approaches. Passive approaches attempt to overcome the problem by studying the visible-spectrum images in which the face appearance has been altered by illumination variations. The other category contains active approaches, in which the illumination variation problem is overcome by employing active imaging techniques to obtain face images captured under consistent illumination conditions. Various passive approaches are illumination variation modeling, illumination-invariant features, photometric normalization and 3D morphable models [18][19]. Some active approaches are thermal infrared and active near-IR illumination [20][21].
On the other hand, feature-based models extract local features such as the eyes, nose and mouth, which are fed into a structural classifier for the purpose of classification and recognition. Some of the well-known feature-based models yield good results [15][16][17].
It is observed from past work that PCA- and LDA-based models have been developed to overcome the problem of illumination, although it remains an ill-posed problem. In this direction, we have made one such attempt to develop an illumination-invariant face recognition model using Retinex as a pre-processing tool, referred to as r-PCA (Retinex-Principal Component Analysis). The details of the proposed model are brought out in the following sections.
The paper is organized as follows. Section 2 describes the proposed model, covering pre-processing, construction of eigen-signatures and signature recognition. The experimental results are presented in Section 3, followed by the conclusion in Section 4.
II. PROPOSED MODEL
II.A Retinex: A review
The idea of Retinex (retina of the eye + cortex of the brain) was conceived by Edwin Land in 1983 as a model of lightness and color perception of the human visual system. Retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions and generally enhances the apparent spatial resolution. The RETINEX algorithm [11] can be used for the enhancement of different regions of the image. First, the algorithm is applied to the input image in order to enhance details in the dark areas of the image. The algorithm is then invoked again on the inverse image (the result is re-inverted afterwards) to enhance details in the bright areas of the image. Together, the two RETINEX images reveal more details than the input image.
The RETINEX algorithm [13] decomposes a given image S into two images, the reflectance image R and the illumination image L, such that each point in the image domain S(x, y) can be expressed as:
S(x, y) = R(x, y) · L(x, y)     (1)
By taking the image to the logarithmic domain, we obtain:
s = l + r
where s = log S, l = log L and r = log R.
The algorithm assumes spatial smoothness of the illumination field. In addition, knowledge of the limited dynamic range of the reflectance is used as a constraint in the recovery process. The physical nature of reflecting objects is such that they reflect only a part of the incident light. Thus, the reflectance is restricted to the range R ∈ [0, 1], and L ≥ S, which implies l ≥ s. The Retinex algorithm is therefore used to decompose the image into a reflectance image and an illumination image [11][13]. Two iterations of the algorithm are applied, on the original and on the inverted image, to give the bright and dark Retinex images, which reveal more information than the original image.
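To make the decomposition concrete, the following Python sketch estimates the illumination in the log domain by Gaussian smoothing, which is one simple way of encoding the spatial-smoothness assumption. It is an illustrative approximation rather than the exact variational Retinex of [11][13], and the function name and parameter values are our own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(image, sigma=30.0, eps=1e-6):
    """Decompose an image S into reflectance R and illumination L (S = R * L).

    A minimal single-scale sketch: the illumination is estimated by Gaussian
    smoothing in the log domain, which encodes the spatial-smoothness
    assumption. The variational Retinex of [11][13] recovers L more carefully,
    but the overall structure is the same.
    """
    s = np.log(image.astype(np.float64) + eps)   # s = log S
    l = gaussian_filter(s, sigma=sigma)          # smooth estimate of l = log L
    l = np.maximum(l, s)                         # enforce L >= S, i.e. l >= s
    r = s - l                                    # r = s - l, so R = S / L lies in (0, 1]
    return np.exp(r), np.exp(l)                  # reflectance R, illumination L
```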
The illumination image, L = exp(l), is tuned by a gamma correction with a free parameter γ to obtain a new illumination image L′, which is then multiplied by R to give the output image S′ = L′ · R. The gamma correction is given by:
L′ = W · (L / W)^(1/γ)     (2)
where W is the highest value of the dynamic range (equal to 255 in 8-bit images). Multiplying L′ by R, we obtain the image S′ as given below:
S′ = L′ · R = (L′/L) · S     (3)
The RETINEX algorithm is applied to the input image and to its inverse to produce the bright and dark RETINEX images. After gamma correction, the two images are combined by averaging the bright and dark RETINEX images, as shown in Fig. 1.
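The gamma correction of Eq. (2) and the bright/dark combination can be sketched as follows, reusing numpy and the retinex_decompose function from the sketch above. Again, this is an illustrative approximation of the pipeline, not the authors' exact implementation; the default γ and σ values are assumptions.

```python
def gamma_correct_illumination(L, gamma=2.2, W=255.0):
    """Eq. (2): L' = W * (L / W) ** (1 / gamma)."""
    return W * (L / W) ** (1.0 / gamma)

def retinex_enhance(image, sigma=30.0, gamma=2.2, W=255.0):
    """Bright/dark Retinex enhancement: run the decomposition on the image and
    on its inverse, gamma-correct the illumination (Eq. 2), recombine via
    S' = L' * R (Eq. 3), and average the two results."""
    R, L = retinex_decompose(image, sigma)
    bright = gamma_correct_illumination(L, gamma, W) * R                # S' = L'.R on the input image

    inverse = W - image.astype(np.float64)                              # inverse image
    R_inv, L_inv = retinex_decompose(inverse, sigma)
    dark = W - gamma_correct_illumination(L_inv, gamma, W) * R_inv      # re-invert the enhanced inverse

    return 0.5 * (bright + dark)                                        # average of bright and dark Retinex
```

The free parameter γ controls how strongly the dynamic range of the illumination is compressed; γ = 1 leaves the illumination, and hence the image, unchanged.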
Fig. 1 Retinex-processed images: (a) bright Retinex, (b) dark Retinex
II.B Principal Component Analysis:
Principal Component Analysis for face representation is a classical approach and has been used by many researchers. The RETINEX-processed image is subjected to PCA to extract the features which are subsequently used for face classification. The PCA works as follows.
Let there be N training images. Let A_i, i = 1, ..., N, be an image of size m × n, and let C be the average image of all N training images. Each image A_i is converted into a column vector v_i of length M (= m × n), and these vectors are concatenated to form a high-dimensional feature matrix U = [v_1, v_2, ..., v_N].
By subtracting the average image C from each column, we obtain the training matrix
V = [v_1 − C, v_2 − C, ..., v_N − C].
The sample covariance matrix Q = V^T V of size N × N is obtained. The eigenvectors e_i and the corresponding eigenvalues λ_i of Q are determined by solving the well-known eigenstructure decomposition problem
Q e_i = λ_i e_i.
Since the dimension of V V^T is very large (normally N << M), computation of the eigenvectors of such a huge covariance matrix is prohibitive. Hence we initially compute the eigenvectors of Q, and the eigenvectors u_i of the original covariance matrix V V^T, which share the same eigenvalues λ_i, are recovered as
u_i = (1/√λ_i) V e_i.
However, we can have at most N eigenvectors. Though all N eigenvectors are needed to represent the images exactly, only a small number, k, is generally sufficient for capturing the primary characteristics of the objects. These k eigenvectors correspond to the k largest eigenvalues and constitute the eigenfaces. Thus eigenface analysis can drastically reduce the dimension of the images (M) to the eigenface dimension (k) while keeping several of the most effective features that summarize the original images.
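A compact sketch of this eigenface construction, using the N × N covariance trick described above, is given below. The array layout and function names are our own, and the nearest-neighbour matching mentioned afterwards is only one common choice for the final signature recognition step rather than a detail specified in the paper.

```python
import numpy as np

def train_eigenfaces(images, k=40):
    """Build k eigenfaces from N training images with the N x N trick above.

    images: array of shape (N, m, n) holding the RETINEX-processed training faces.
    Returns the mean column C (M x 1) and the eigenface matrix U (M x k).
    """
    N = images.shape[0]
    V = images.reshape(N, -1).astype(np.float64).T   # M x N matrix of column vectors v_i
    C = V.mean(axis=1, keepdims=True)                # average image C
    V = V - C                                        # training matrix [v_1 - C, ..., v_N - C]
    Q = V.T @ V                                      # N x N matrix Q = V^T V
    lam, E = np.linalg.eigh(Q)                       # eigenvalues and eigenvectors of Q
    order = np.argsort(lam)[::-1][:k]                # indices of the k largest eigenvalues
    U = V @ E[:, order]                              # u_i = V e_i, eigenvectors of V V^T
    U /= np.linalg.norm(U, axis=0)                   # normalise (equivalent to dividing by sqrt(lambda_i))
    return C, U

def project(image, C, U):
    """Project a face image onto the eigenface subspace to get its k-D signature."""
    v = image.reshape(-1, 1).astype(np.float64) - C
    return (U.T @ v).ravel()
```

A probe face can then be recognized by comparing its projected signature with the stored training signatures, for example with a nearest-neighbour rule on the Euclidean distance.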
III. EXPERIMENTAL RESULTS
In this section, we present the results of experiments conducted on a standard illumination-variant database, the YALE face database. The YALE database has 165 face images, with 11 face images of each person. Using an image processing tool, we have added illumination-variant data by applying contrast changes of -40, -20, 0, 20 and 40. Hence, the total number of face samples is 825, with fifty-five samples under each class.
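The exact contrast operation applied by the image processing tool is not specified in the paper; assuming a simple additive intensity shift clipped to the 8-bit range, the five variants per image could be generated as in the following sketch (the offsets are the ones listed above).

```python
import numpy as np

def contrast_variants(image, offsets=(-40, -20, 0, 20, 40)):
    """Generate five illumination variants of an 8-bit face image.

    The exact operation used by the authors' tool is not specified, so this
    sketch assumes a simple additive intensity shift clipped to [0, 255].
    """
    return [np.clip(image.astype(np.int16) + offset, 0, 255).astype(np.uint8)
            for offset in offsets]
```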
Experiments have been conducted with a varying number of training samples and with varying dimensions of the feature vectors. The results are reported in Table 1. We have also reported in Table 1 the recognition accuracy obtained with the standard eigenface approach, and it can be observed that the proposed model performs considerably better than the eigenface approach.
IV. CONCLUSION
We have presented a new approach in which the Retinex and color constancy approach is used as a pre-processing tool to alleviate the illumination problem and thereby improve the recognition accuracy of the classical eigenface model. The proposed algorithm has been tested on the YALE database with 825 face images of varied illumination. The experimental results demonstrate that the performance of the proposed system is considerably higher than that of the eigenface approach.
Table 1: Recognition rate (%) of the proposed model and the eigenface technique for the YALE face database.

No. of training samples       Methodology            Dimension of feature vector
                                                      20      30      40      50      60
Alternate samples (413)       Proposed model          100     100     100     100     100
                              Eigenface technique     82.5    82.4    81.5    82.6    83.7
One in three samples (275)    Proposed model          75.6    72.9    78.5    79.7    79.6
                              Eigenface technique     68.5    67.9    69.8    71.8    72.9
One in four samples (219)     Proposed model          74.8    71.9    72.8    75.3    75.5
                              Eigenface technique     65.9    64.9    65.4    67.1    67.3
REFERENCES
[1] W. Zhao, R. Chellappa, A. Rosenfeld and P. J. Phillips. Face Recognition: A Literature Survey. Technical Report TR4167, University of Maryland, USA: 399-458, 2000.
[2] M. A. Mottaleb and A. Elgammal. Face Detection in Complex Environments from Color Images. IEEE ICIP: 622-626, 1999.
[3] J. Ross Beveridge et al. A Nonparametric Statistical Comparison of Principal Component and Linear Discriminant Subspaces for Face Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 535-542, 2001.
[4] J. Lu, K. N. Plataniotis and A. N. Venetsanopoulos. Face Recognition Using Kernel Direct Discriminant Analysis Algorithms. IEEE Transactions on Neural Networks, 14(1): 117-126, 2003.
[5] M. Turk and A. Pentland. Face Recognition Using Eigenfaces. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Maui, Hawaii: 586-591, 1991.
[6] P. C. Yuen and J. H. Lai. Independent Component Analysis of Face Images. IEEE Workshop on Biologically Motivated Computer Vision, Seoul, 2000.
[7] K. Etemad and R. Chellappa. Discriminant Analysis for Recognition of Human Face Images. Journal of the Optical Society of America A, 14(8): 1724-1733, 1997.
[8] S. Haykin. Neural Networks: A Comprehensive Foundation. Macmillan, New York, NY, 1994.
[9] D. S. Bolme, J. Ross Beveridge, M. L. Teixeira and B. A. Draper. The CSU Face Identification Evaluation System: Its Purpose, Features and Structure. Proceedings of the 3rd International Conference on Computer Vision Systems, Graz, Austria, 2003.
[10] R. Kimmel, D. Shaked and M. Elad. A Variational Framework for Retinex.
[11] D. P. Bertsekas. Non-Linear Programming. Athena Scientific, Belmont, Massachusetts, 1995.
[12] Z. Rahman, D. J. Jobson and G. A. Woodell. Retinex Processing for Automatic Image Enhancement. Human Vision and Electronic Imaging VII, SPIE Symposium on Electronic Imaging, SPIE 4662, 2002.
[13] R. C. Gonzalez and R. E. Woods. Digital Image Processing. Pearson Education, Second Edition, India, 2002.
[14] R. Brunelli and T. Poggio. Face Recognition: Features versus Templates. IEEE Transactions on PAMI, 15(10): 1042-1052, 1993.
[15] I. J. Cox, D. W. Jacobs, S. Krishnamachari and P. N. Yianilos. Experiments on Feature-Based Face Recognition. NEC Research Institute, 1994.
[16] V. E. Neago, J. Mitrache and S. Preotesoiu. A Feature-Based Face Recognition Approach Using Gabor Wavelet Filters Cascaded with Concurrent Neural Modules. IEEE Transactions on PAMI: 1-6, July 2006.
[17] P. Hallinan. A Low-Dimensional Representation of Human Faces for Arbitrary Lighting Conditions. In Proc. IEEE Conf. CVPR, 1994.
[18] A. Shashua. On Photometric Issues in 3D Visual Recognition from a Single 2D Image. IJCV, 1997.
[19] R. Basri and D. Jacobs. Lambertian Reflectance and Linear Subspaces. In Proc. IEEE ICCV, 2007.