DIMENSIONALITY REDUCTION IN PCA BASED ON DOMINANT EIGEN VECTORS
MANSI AGRAWAL
Global Engineering College, Jabalpur [email protected]
MR. RITESH PANDEY Asst. Prof. Department of EC Global Engineering College, Jabalpur
ABSTRACT - PCA is one of the most widely used methods for face recognition. PCA requires dimensionality reduction based on the eigenvectors of a modified covariance matrix. The modified covariance matrix method reduces the computation time as well as the complexity to an extent. In this paper, a unique method based on dominant Eigen values is described for further reducing the number of significant eigenvectors.
Keywords: PCA, dimensionality reduction, Eigen values, Eigen vectors, covariance matrix and Euclidean distance.
I. INTRODUCTION
1.1 PRINCIPAL COMPONENT ANALYSIS (PCA):
PCA is a widely used technique for data mining and machine learning; it is a statistical technique based on a linear transform. PCA provides a powerful tool for data analysis and pattern recognition that is often employed in signal processing and image processing. In statistics, PCA is a method used to transform a multidimensional dataset to lower dimensions for analysis, visualization or data compression.
Dimensionality reduction is based on the modified covariance matrix technique, which reduces the complexity and the amount of computation drastically. PCA is also a widely used method for face recognition and representation; in this method a facial image is represented as a weighted superposition of basis vectors, i.e. Eigen vectors, together with the corresponding Eigen values. These basis vectors are formed from the image dataset itself. The basis vectors are orthogonal, so data redundancy is not a problem here.
Generally one person provides 3-4 test images; if there are k persons to be identified, there will be about 4*k test images in the data set, say M. As the number of persons to be identified increases, the number of test images increases. According to the PCA method, the identification procedure involves a minimum of M characteristic vectors, which grows as the number of persons to be identified increases; this causes an increase in complexity as well as computation time.
1.2 DIMENSIONALITY REDUCTION USING MODIFIED COVARIANCE MATRIX:
PCA involves dimensionality reduction, which is an effective way to reduce the data size. The method involves preparing a training set A of M images, each an N×N matrix. The N columns of each matrix are converted into a single 1D column, so that every image becomes an N²×1 vector. The resulting A will therefore be an N²×M matrix.
As PCA deals with Eigen values and Eigen vectors, we need to calculate the covariance matrix C, such that
C = A·A^T
Since A is an N²×M matrix, the resultant C will be an N²×N² matrix. Now, let us consider the modified covariance matrix
Ć = A^T·A
Here, Ć will be an M×M matrix.
From this modified covariance matrix we obtain M Eigen values and M Eigen vectors. Each image of the data set can be represented as a weighted sum of these characteristic vectors, called Eigen vectors; the weight corresponding to each Eigen vector is simply the projection of the image vector onto that Eigen vector. The complete method for face recognition using PCA is described in Section 2.2.
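As an illustration, the following minimal NumPy sketch shows the size advantage of the modified covariance matrix; the toy values N = 32 and M = 10, with random arrays standing in for face images, are assumptions made only for this example.

import numpy as np

# Toy data: M grayscale images of size N x N (random values stand in for faces)
N, M = 32, 10
images = [np.random.rand(N, N) for _ in range(M)]

# Stack each image as an N^2 x 1 column to build the N^2 x M matrix A
A = np.column_stack([img.flatten() for img in images])

C_full = A @ A.T   # N^2 x N^2 covariance matrix: 1024 x 1024, costly to eigendecompose
C_mod = A.T @ A    # modified covariance matrix: only M x M = 10 x 10

print(C_full.shape, C_mod.shape)   # (1024, 1024) (10, 10)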
2. DOMINANT EIGEN VALUES:
2.1 DOMINANT EIGEN VALUES OF MODIFIED COVARIANCE MATRIX:
In algebra, Eigen values and Eigen vectors play an important role.
Every Eigen vector corresponds to an Eigen value. For a matrix with M Eigen values, the first K values are dominant; i.e., if we arrange all Eigen values in descending order, then the first K values are dominant, and the first K Eigen vectors corresponding to these K dominant Eigen values are referred to as dominant Eigen vectors. As mentioned in the previous section, we would need a minimum of M Eigen vectors for M test images, but with the help of K dominant Eigen vectors we can reduce the number of characteristic vectors from M to K, where K < M.
The value K may be varied dynamically according to the number of test images, i.e. K may be 50% of M or 30% of M. In PCA an image is projected onto the set of Eigen vectors and the coefficients of projection are called weights; since the number of significant Eigen vectors is reduced, the number of weights is reduced, which lowers the complexity of the method and saves time and memory as well. The complete algorithm for this method is explained in the next section.
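The selection of dominant Eigen values can be sketched as below; the helper name dominant_indices and the 30% fraction are illustrative assumptions, since the text only says that K may be, for example, 30% or 50% of M.

import numpy as np

def dominant_indices(eigenvalues, fraction=0.3):
    # Keep the K largest eigenvalues, where K is a chosen fraction of M
    M = len(eigenvalues)
    K = max(1, int(np.ceil(fraction * M)))
    order = np.argsort(eigenvalues)[::-1]   # eigenvalue indices in descending order
    return order[:K]

# Example: M = 10 eigenvalues, keep the top 30% (K = 3)
eigvals = np.array([0.10, 5.20, 0.70, 9.80, 0.05, 3.30, 0.20, 1.10, 0.60, 0.01])
print(dominant_indices(eigvals))   # indices of the 3 dominant eigenvalues: [3 1 5]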
2.2 ALGORITHM FOR PCA-BASED DIMENSIONALITY REDUCTION USING EIGEN VALUES:
STEP 1: Create a Training Set and Load it
There are M images, each of size N×N.
STEP 2: Convert Face Images into a Training Set of Face Vectors
STEP 3: Normalize the Face Vectors
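A minimal sketch of Steps 1-3 follows, assuming the training images are already loaded as N×N NumPy arrays and that "normalize" means subtracting the mean face, as is common in Eigen-face implementations; the function name build_face_vectors is chosen only for this example.

import numpy as np

def build_face_vectors(training_images):
    # STEP 2: flatten each N x N image into an N^2-dimensional face vector (one column each)
    vectors = np.column_stack([img.flatten().astype(float) for img in training_images])
    # STEP 3: normalize by subtracting the mean face from every face vector
    mean_face = vectors.mean(axis=1, keepdims=True)
    A = vectors - mean_face            # N^2 x M matrix of normalized face vectors
    return A, mean_face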
STEP 4: Finding the Covariance Matrix
• To calculate the Eigen vectors, we need to calculate the covariance matrix C
• C = A·A^T
• where A is an N²×M matrix
• A = [Φ1, Φ2, Φ3, …, ΦM], of size N²×M
STEP 5: Dimensionality Reduction
• Calculate the Eigen vectors from a covariance matrix of reduced dimensionality.
• The new covariance matrix is given as:
• C = A^T·A
• C = (M×N²)·(N²×M)
• C is an M×M matrix.
• If M = 100, then C is a 100×100 matrix.
K Eigen vectors are chosen according to their Eigen values λi; for example, choose the K dominant Eigen values amongst the M Eigen values.
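Steps 4 and 5 can be sketched as follows, continuing from the matrix A of the previous sketch; np.linalg.eigh is used here because the reduced covariance matrix is symmetric, and the function name reduced_eigenvectors is illustrative only.

import numpy as np

def reduced_eigenvectors(A, K):
    # STEP 5: eigendecompose the M x M reduced covariance matrix instead of the N^2 x N^2 one
    C_reduced = A.T @ A
    eigvals, eigvecs = np.linalg.eigh(C_reduced)   # eigh: the matrix is symmetric
    order = np.argsort(eigvals)[::-1][:K]          # indices of the K dominant eigenvalues
    return eigvals[order], eigvecs[:, order]       # K eigenvalues and an M x K eigenvector matrix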
STEP 6: Regaining the Original Dimension
Since the K selected Eigen vectors are of reduced dimensionality, perform the following operation to regain the original dimensionality of the Eigen vectors:
• [Vi]original dimension = [A]N²×M · [Vi]reduced dimension
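A sketch of Step 6, assuming A and the reduced-dimensionality Eigen vectors from the previous sketch; the unit-length rescaling of each column is a common additional normalization, not something stated in the text.

import numpy as np

def to_original_dimension(A, eigvecs_reduced):
    # STEP 6: map M-dimensional eigenvectors back to N^2-dimensional Eigen faces
    V = A @ eigvecs_reduced                          # N^2 x K matrix
    V = V / np.linalg.norm(V, axis=0, keepdims=True) # rescale each column to unit length
    return V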
STEP 7: Projection of Training Images on Eigen Vectors
• Projection of a vectorized image onto another vector is done by the dot product.
• It gives the weights of the image in the direction of each Eigen vector.
• The weight matrix for each image is found by:
• Wi = Φi · Vi
• Since there are K Eigen vectors, there will be K weights for each image.
• W will be a 1×K matrix.
• Each element of W is the weight of the image in the direction of the corresponding Eigen vector.
• Since there are M images, there will be M weight matrices, each of dimension 1×K.
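Step 7 reduces to a single matrix product when the normalized face vectors and Eigen vectors are stored as columns; the sketch below makes that assumption and is illustrative only.

import numpy as np

def training_weights(A, V):
    # STEP 7: project every normalized face vector (column of A, N^2 x M)
    # onto the K Eigen vectors (columns of V, N^2 x K); result is M x K,
    # i.e. one 1 x K weight row per training image
    return A.T @ V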
STEP 8: Test for a Match
For a given test image, first find its weights in the direction of the set of all Eigen vectors.
Wt is the weight matrix of the test image:
Wt = It · Vi
Wt is also a 1×K matrix.
Next we test for a match.
STEP 9: Finding the Euclidean Distance
Since we have M weight matrices, each of size 1×K, and a test weight matrix corresponding to the test image It, we find the Euclidean distances between the test weight matrix and all M training weight matrices. Then we find the minimum amongst the M distances. The training image whose weight gives the minimum distance is treated as the match.
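Steps 8 and 9 can be sketched together as below, reusing the names mean_face, V and W_train from the earlier sketches; the test image is assumed to be an N×N array normalized in the same way as the training images.

import numpy as np

def match(test_image, mean_face, V, W_train):
    # STEP 8: weights of the test image in the direction of the K Eigen vectors
    phi_t = test_image.flatten().astype(float).reshape(-1, 1) - mean_face
    w_t = (phi_t.T @ V).ravel()                        # 1 x K weight vector Wt
    # STEP 9: Euclidean distance to each of the M training weight rows; minimum is the match
    distances = np.linalg.norm(W_train - w_t, axis=1)
    return int(np.argmin(distances)), float(distances.min())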
3. CONCLUSION:
The evaluation is carried out with the help of the Face Recognition Evaluator, an open-source MATLAB interface.
Comparison is done on the basis of recognition accuracy, training time, testing time, total execution time and total memory usage.
Comparative results were obtained by testing five algorithms, i.e. PCA, LDA, ICA, SVM and HGPP, on the ATT database.
TRAINING TIME:
METHOD   TIME
PCA      0.2 ms/image
LDA      0.4 ms/image
ICA      9.5 ms/image
SVM      0.6 ms/image
HGPP     9.32 s/image

TESTING TIME:
METHOD   TIME
PCA      0.3 ms/image
LDA      0.2 ms/image
ICA      0.1 ms/image
SVM      0.3 ms/image
HGPP     19.39 s/image

TOTAL EXECUTION TIME:
METHOD   TIME
PCA      0.5 ms/image
LDA      0.6 ms/image
ICA      9.7 ms/image
SVM      0.9 ms/image
HGPP     28.71 s/image

The number of basis vectors is reduced drastically without any substantial effect on the face recognition method using PCA. That is why the proposed method reduces the computation time as well as the complexity. This method can also be used for image compression and steganography.
ABBREVIATIONS:
PCA – Principal Component Analysis
LDA – Linear Discriminant Analysis
ICA – Independent Component Analysis
SVM – Support Vector Machines
HGPP – Histogram of Gabor Phase Patterns