Eigenvalues and Eigenvectors
Introduction to Statistical Theory (Pengantar Teori Statistika)
STK 500
Eigenvalues and Eigenvectors
Example 1: if we have a matrix A:

$$A = \begin{pmatrix} 2 & 4 \\ 4 & -4 \end{pmatrix}$$

then

$$|A - \lambda I| = \begin{vmatrix} 2-\lambda & 4 \\ 4 & -4-\lambda \end{vmatrix} = (2-\lambda)(-4-\lambda) - 16 = \lambda^2 + 2\lambda - 24 = 0,$$

which implies there are two roots, or eigenvalues: $\lambda_1 = -6$ and $\lambda_2 = 4$.
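These roots can be checked numerically. The sketch below uses NumPy (not part of the original slides) with the matrix A reconstructed above:

```python
import numpy as np

# Matrix from Example 1
A = np.array([[2.0, 4.0],
              [4.0, -4.0]])

# Coefficients of |A - lambda*I| = lambda^2 + 2*lambda - 24 (leading term first)
coeffs = np.poly(A)

# Eigenvalues, sorted ascending
eigenvalues = np.sort(np.linalg.eigvals(A).real)
```

`np.poly` returns the characteristic-polynomial coefficients, so both the polynomial and its roots can be compared against the hand computation.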
Eigenvalues and Eigenvectors
For a square matrix A, let I be a conformable identity matrix. Then the scalars $\lambda$ satisfying the polynomial equation $|A - \lambda I| = 0$ are called the eigenvalues (or characteristic roots) of A.
The equation $|A - \lambda I| = 0$ is called the characteristic equation or the determinantal equation.
Eigenvalues and Eigenvectors
For a matrix A with eigenvalue $\lambda$, a nonzero vector x such that $Ax = \lambda x$ is called an eigenvector (or characteristic vector) of A associated with $\lambda$.
Example 1
For the matrix A with eigenvalues $\lambda_1 = -6$ and $\lambda_2 = 4$, the eigenvector associated with $\lambda_1 = -6$ satisfies $Ax = \lambda x$:

$$\begin{pmatrix} 2 & 4 \\ 4 & -4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = -6\begin{pmatrix} x_1 \\ x_2 \end{pmatrix},$$

so $2x_1 + 4x_2 = -6x_1$ and $4x_1 - 4x_2 = -6x_2$, i.e., $8x_1 + 4x_2 = 0$ and $4x_1 + 2x_2 = 0$. Fixing $x_1 = 1$ yields a solution for $x_2$ of $-2$.
Example 1
Note that eigenvectors are usually normalized so they have unit length, i.e., $e = \dfrac{x}{\sqrt{x'x}}$. For our previous example we have

$$x = \begin{pmatrix} 1 \\ -2 \end{pmatrix}, \qquad e = \frac{x}{\sqrt{x'x}} = \begin{pmatrix} 1/\sqrt{5} \\ -2/\sqrt{5} \end{pmatrix}.$$

Thus our arbitrary choice to fix $x_1 = 1$ has no impact on the (normalized) eigenvector associated with $\lambda = -6$.
Example 1
For matrix A and eigenvalue $\lambda = 4$, we have

$$\begin{pmatrix} 2 & 4 \\ 4 & -4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 4\begin{pmatrix} x_1 \\ x_2 \end{pmatrix},$$

so $2x_1 + 4x_2 = 4x_1$ and $4x_1 - 4x_2 = 4x_2$, i.e., $-2x_1 + 4x_2 = 0$ and $4x_1 - 8x_2 = 0$. We again arbitrarily fix $x_1 = 1$, which now yields a solution for $x_2$ of $1/2$.
Normalization of Eigenvectors
Normalization to unit length yields

$$x = \begin{pmatrix} 1 \\ 1/2 \end{pmatrix}, \qquad e = \frac{x}{\sqrt{x'x}} = \begin{pmatrix} 2/\sqrt{5} \\ 1/\sqrt{5} \end{pmatrix}.$$

Again our arbitrary choice to fix $x_1 = 1$ has no impact on the (normalized) eigenvector associated with $\lambda = 4$.

Example 2
Find the eigenvalues and corresponding
eigenvectors for the matrix A. What is the
dimension of the eigenspace of each eigenvalue?
$$A = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 1 & 2 \end{pmatrix}$$

Characteristic equation:

$$|\lambda I - A| = \begin{vmatrix} \lambda-2 & 0 & 0 \\ 0 & \lambda-2 & 0 \\ 0 & -1 & \lambda-2 \end{vmatrix} = (\lambda-2)^3 = 0$$

Eigenvalue: $\lambda = 2$ (with multiplicity 3).

The eigenspace of $\lambda = 2$: solving $(\lambda I - A)x = 0$,

$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\implies
x = \begin{pmatrix} s \\ 0 \\ t \end{pmatrix} = s\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + t\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \quad s, t \in \mathbb{R},$$

which is the eigenspace of A corresponding to $\lambda = 2$.
Thus, the dimension of its eigenspace is 2.
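The eigenspace dimension can be computed as $n - \mathrm{rank}(A - \lambda I)$. A small NumPy sketch, assuming the matrix A as reconstructed above:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 1.0, 2.0]])
lam = 2.0
n = A.shape[0]

# Geometric multiplicity = n - rank(A - lambda*I)
geo_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n))
```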
(1) If an eigenvalue $\lambda_1$ occurs as a multiple root (k times) of the characteristic polynomial, then $\lambda_1$ has multiplicity k.
(2) The multiplicity of an eigenvalue is greater than or equal to the dimension of its eigenspace.
Find the eigenvalues and corresponding eigenspaces for

$$A = \begin{pmatrix} 1 & 3 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}$$

$$|\lambda I - A| = \begin{vmatrix} \lambda-1 & -3 & 0 \\ -3 & \lambda-1 & 0 \\ 0 & 0 & \lambda+2 \end{vmatrix} = (\lambda+2)^2(\lambda-4) = 0$$

gives eigenvalues $\lambda_1 = 4$ and $\lambda_2 = -2$. The eigenspaces for these two eigenvalues are as follows:

Basis for $\lambda_1 = 4$: $B_1 = \{(1, 1, 0)\}$
Basis for $\lambda_2 = -2$: $B_2 = \{(1, -1, 0), (0, 0, 1)\}$
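The claimed basis vectors can be checked directly against the definition $Av = \lambda v$; a NumPy sketch (not part of the original slides):

```python
import numpy as np

A = np.array([[1.0, 3.0, 0.0],
              [3.0, 1.0, 0.0],
              [0.0, 0.0, -2.0]])

# Claimed basis vectors for each eigenspace; check A v = lambda v directly
v4 = np.array([1.0, 1.0, 0.0])      # lambda = 4
v2a = np.array([1.0, -1.0, 0.0])    # lambda = -2
v2b = np.array([0.0, 0.0, 1.0])     # lambda = -2

ok = (np.allclose(A @ v4, 4 * v4)
      and np.allclose(A @ v2a, -2 * v2a)
      and np.allclose(A @ v2b, -2 * v2b))
```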
Example 3
Find the eigenvalues of the matrix A and find a basis for each of the corresponding eigenspaces:

$$A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 5 & -10 \\ 1 & 0 & 2 & 0 \\ 1 & 0 & 0 & 3 \end{pmatrix}$$

Characteristic equation:

$$|\lambda I - A| = (\lambda-1)^2(\lambda-2)(\lambda-3) = 0$$

Eigenvalues: $\lambda_1 = 1$, $\lambda_2 = 2$, $\lambda_3 = 3$.
According to the note on the previous slide, the dimension of the eigenspace of $\lambda_1 = 1$ is at most 2. For $\lambda_2 = 2$ and $\lambda_3 = 3$, the dimensions of their eigenspaces are at most 1.

For $\lambda_1 = 1$, solving $(\lambda_1 I - A)x = 0$ gives

$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} -2t \\ s \\ 2t \\ t \end{pmatrix} = s\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} + t\begin{pmatrix} -2 \\ 0 \\ 2 \\ 1 \end{pmatrix}, \quad s, t \in \mathbb{R},$$

so $\{(0, 1, 0, 0), (-2, 0, 2, 1)\}$ is a basis for the eigenspace corresponding to $\lambda_1 = 1$. The dimension of the eigenspace of $\lambda_1 = 1$ is 2.

For $\lambda_2 = 2$, solving $(\lambda_2 I - A)x = 0$ gives

$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 5t \\ t \\ 0 \end{pmatrix} = t\begin{pmatrix} 0 \\ 5 \\ 1 \\ 0 \end{pmatrix},$$

so $\{(0, 5, 1, 0)\}$ is a basis for the eigenspace corresponding to $\lambda_2 = 2$. The dimension of the eigenspace of $\lambda_2 = 2$ is 1.

For $\lambda_3 = 3$, solving $(\lambda_3 I - A)x = 0$ gives

$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ -5t \\ 0 \\ t \end{pmatrix} = t\begin{pmatrix} 0 \\ -5 \\ 0 \\ 1 \end{pmatrix},$$

so $\{(0, -5, 0, 1)\}$ is a basis for the eigenspace corresponding to $\lambda_3 = 3$. The dimension of the eigenspace of $\lambda_3 = 3$ is 1.
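The three eigenspace dimensions can be checked at once with the rank formula. A NumPy sketch, assuming the 4×4 matrix A as reconstructed above:

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0,   0.0],
              [0.0, 1.0, 5.0, -10.0],
              [1.0, 0.0, 2.0,   0.0],
              [1.0, 0.0, 0.0,   3.0]])

def eigenspace_dim(A, lam):
    """Dimension of the eigenspace of lam: n - rank(A - lam*I)."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

dims = {lam: eigenspace_dim(A, lam) for lam in (1.0, 2.0, 3.0)}
```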
Eigenvalues and Eigenvectors
Theorem 1. Eigenvalues for triangular matrices
If A is an n×n triangular matrix, then its eigenvalues are the entries on its main diagonal.

Finding eigenvalues for triangular and diagonal matrices:

$$\text{(a)}\quad A = \begin{pmatrix} 2 & 0 & 0 \\ -1 & 1 & 0 \\ 5 & 3 & -3 \end{pmatrix}, \qquad |\lambda I - A| = (\lambda-2)(\lambda-1)(\lambda+3) = 0, \qquad \lambda_1 = 2,\ \lambda_2 = 1,\ \lambda_3 = -3$$
Eigenvalues and Eigenvectors
Finding eigenvalues for triangular and diagonal matrices:

$$\text{(b)}\quad A = \begin{pmatrix} -1 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -4 & 0 \\ 0 & 0 & 0 & 0 & 3 \end{pmatrix}, \qquad \lambda_1 = -1,\ \lambda_2 = 2,\ \lambda_3 = 0,\ \lambda_4 = -4,\ \lambda_5 = 3$$
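Theorem 1 can be illustrated numerically: the sorted eigenvalues of a triangular matrix equal its sorted diagonal. A NumPy sketch using the part (a) matrix as reconstructed above:

```python
import numpy as np

# Lower triangular matrix from part (a)
A = np.array([[ 2.0, 0.0,  0.0],
              [-1.0, 1.0,  0.0],
              [ 5.0, 3.0, -3.0]])

eigs = np.sort(np.linalg.eigvals(A).real)
diag = np.sort(np.diag(A))
```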
Diagonalization
Definition 1: A square matrix A is called diagonalizable if there exists an invertible matrix P such that P⁻¹AP is a diagonal matrix (i.e., P diagonalizes A).
Definition 2: Square matrices A and B are called similar if there exists an invertible matrix P such that $B = P^{-1}AP$.

Diagonalization
Theorem 2: Similar matrices have the same eigenvalues
If A and B are similar n×n matrices, then they have the same eigenvalues.

Proof: A and B are similar, so $B = P^{-1}AP$ for some invertible P. Then

$$|\lambda I - B| = |\lambda I - P^{-1}AP| = |P^{-1}\lambda I P - P^{-1}AP| = |P^{-1}(\lambda I - A)P| = |P^{-1}|\,|\lambda I - A|\,|P| = |\lambda I - A|.$$

Since A and B have the same characteristic equation, they have the same eigenvalues.
Note: for any diagonal matrix of the form D = λI, P⁻¹DP = D.
Example 5
Eigenvalue problems and diagonalization

$$A = \begin{pmatrix} 1 & 3 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}$$

Characteristic equation:

$$|\lambda I - A| = \begin{vmatrix} \lambda-1 & -3 & 0 \\ -3 & \lambda-1 & 0 \\ 0 & 0 & \lambda+2 \end{vmatrix} = (\lambda-4)(\lambda+2)^2 = 0$$

The eigenvalues: $\lambda_1 = 4$, $\lambda_2 = -2$, $\lambda_3 = -2$.

(1) For $\lambda = 4$ the eigenvector is $p_1 = (1, 1, 0)'$.
(2) For $\lambda = -2$ the eigenvectors are $p_2 = (1, -1, 0)'$ and $p_3 = (0, 0, 1)'$.

$$P = [p_1\ p_2\ p_3] = \begin{pmatrix} 1 & 1 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad P^{-1}AP = \begin{pmatrix} 4 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -2 \end{pmatrix}$$

If instead $P = [p_2\ p_3\ p_1]$, then $P^{-1}AP = \mathrm{diag}(-2, -2, 4)$: reordering the eigenvector columns reorders the diagonal entries accordingly.
The above example illustrates Thm. 2 numerically, since the eigenvalues of both A and P⁻¹AP are the same: 4, −2, and −2.
The reason why the matrix P is constructed with eigenvectors of A is demonstrated in Thm. 3 on the next slide.
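The diagonalization in Example 5 can be verified directly. A NumPy sketch (not part of the original slides), with the eigenvectors of A as the columns of P:

```python
import numpy as np

A = np.array([[1.0, 3.0,  0.0],
              [3.0, 1.0,  0.0],
              [0.0, 0.0, -2.0]])

# Columns of P are the eigenvectors p1, p2, p3
P = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])

D = np.linalg.inv(P) @ A @ P       # should be diag(4, -2, -2)
```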
Diagonalization
Theorem 3: An n×n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors.

Proof:
(⇒) Since A is diagonalizable, there exists an invertible P such that $D = P^{-1}AP$ is diagonal. Let $P = [p_1\ p_2\ \cdots\ p_n]$ and $D = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$. Then

$$PD = [p_1\ p_2\ \cdots\ p_n]\begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix} = [\lambda_1 p_1\ \lambda_2 p_2\ \cdots\ \lambda_n p_n]$$

and

$$AP = A[p_1\ p_2\ \cdots\ p_n] = [Ap_1\ Ap_2\ \cdots\ Ap_n].$$

Since $D = P^{-1}AP$ implies $AP = PD$, we have $Ap_i = \lambda_i p_i$ for $i = 1, 2, \ldots, n$. (These equations imply that the column vectors $p_i$ of P are eigenvectors of A, and the diagonal entries $\lambda_i$ of D are eigenvalues of A.) Because A is diagonalizable, P is invertible, so its columns $p_1, p_2, \ldots, p_n$ are linearly independent. Thus A has n linearly independent eigenvectors.

(⇐) Since A has n linearly independent eigenvectors $p_1, p_2, \ldots, p_n$ with corresponding eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, we have $Ap_i = \lambda_i p_i$ for $i = 1, 2, \ldots, n$. Let $P = [p_1\ p_2\ \cdots\ p_n]$. Then

$$AP = [Ap_1\ Ap_2\ \cdots\ Ap_n] = [\lambda_1 p_1\ \lambda_2 p_2\ \cdots\ \lambda_n p_n] = PD, \quad D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n).$$

Since $p_1, p_2, \ldots, p_n$ are linearly independent, P is invertible, so $P^{-1}AP = D$ and A is diagonalizable (according to the definition of a diagonalizable matrix). Note that the $p_i$'s are linearly independent eigenvectors, and the diagonal entries of the resulting diagonal matrix D are eigenvalues of A.

Note that having n linearly independent eigenvectors does not imply that there are n distinct eigenvalues: it is possible to have only one eigenvalue with multiplicity n and still have n linearly independent eigenvectors for this eigenvalue. However, if there are n distinct eigenvalues, then there are n linearly independent eigenvectors, and thus A must be diagonalizable.
Example 6
A matrix that is not diagonalizable
Show that the following matrix is not diagonalizable:

$$A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}$$

Characteristic equation:

$$|\lambda I - A| = \begin{vmatrix} \lambda-1 & -2 \\ 0 & \lambda-1 \end{vmatrix} = (\lambda-1)^2 = 0$$

The eigenvalue is $\lambda_1 = 1$; solving $(\lambda_1 I - A)x = 0$ for eigenvectors,

$$\lambda_1 I - A = I - A = \begin{pmatrix} 0 & -2 \\ 0 & 0 \end{pmatrix} \implies \text{eigenvector } p_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$

Since A does not have two linearly independent eigenvectors, A is not diagonalizable.
Diagonalization
Steps for diagonalizing an n×n square matrix:
Step 1: Find n linearly independent eigenvectors $p_1, p_2, \ldots, p_n$ for A with corresponding eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$.
Step 2: Let $P = [p_1\ p_2\ \cdots\ p_n]$.
Step 3: $P^{-1}AP = D = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$, where $Ap_i = \lambda_i p_i$, $i = 1, 2, \ldots, n$.
Example 7
Diagonalizing a matrix
Find a matrix P such that $P^{-1}AP$ is diagonal, where

$$A = \begin{pmatrix} 1 & -1 & -1 \\ 1 & 3 & 1 \\ -3 & 1 & -1 \end{pmatrix}.$$

Characteristic equation:

$$|\lambda I - A| = \begin{vmatrix} \lambda-1 & 1 & 1 \\ -1 & \lambda-3 & -1 \\ 3 & -1 & \lambda+1 \end{vmatrix} = (\lambda-2)(\lambda+2)(\lambda-3) = 0$$

The eigenvalues: $\lambda_1 = 2$, $\lambda_2 = -2$, $\lambda_3 = 3$.

For $\lambda_1 = 2$, Gauss-Jordan elimination of $\lambda_1 I - A$ gives $x_1 = -t$, $x_2 = 0$, $x_3 = t$, so an eigenvector is $p_1 = (-1, 0, 1)'$.

For $\lambda_2 = -2$, it gives $x_1 = \tfrac{1}{4}t$, $x_2 = -\tfrac{1}{4}t$, $x_3 = t$, so an eigenvector is $p_2 = (1, -1, 4)'$.

For $\lambda_3 = 3$, it gives $x_1 = -t$, $x_2 = t$, $x_3 = t$, so an eigenvector is $p_3 = (-1, 1, 1)'$.

$$P = [p_1\ p_2\ p_3] = \begin{pmatrix} -1 & 1 & -1 \\ 0 & -1 & 1 \\ 1 & 4 & 1 \end{pmatrix},$$

and it follows that

$$P^{-1}AP = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 3 \end{pmatrix}.$$
Diagonalization
Note: a quick way to calculate $A^k$ based on the diagonalization technique:

(1) $D^k = \mathrm{diag}(\lambda_1^k, \lambda_2^k, \ldots, \lambda_n^k)$

(2) $D = P^{-1}AP \implies D^k = (P^{-1}AP)(P^{-1}AP)\cdots(P^{-1}AP) = P^{-1}A^kP$ (k factors), so

$$A^k = PD^kP^{-1} = P\,\mathrm{diag}(\lambda_1^k, \ldots, \lambda_n^k)\,P^{-1}.$$
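The shortcut $A^k = PD^kP^{-1}$ can be checked against direct matrix multiplication. A NumPy sketch using A, P, and D from the diagonalization example above:

```python
import numpy as np

A = np.array([[ 1.0, -1.0, -1.0],
              [ 1.0,  3.0,  1.0],
              [-3.0,  1.0, -1.0]])
P = np.array([[-1.0,  1.0, -1.0],
              [ 0.0, -1.0,  1.0],
              [ 1.0,  4.0,  1.0]])
D = np.diag([2.0, -2.0, 3.0])

k = 5
# A^k via the diagonalization shortcut
A_k = P @ np.linalg.matrix_power(D, k) @ np.linalg.inv(P)
```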
Diagonalization
Theorem 4: Sufficient condition for diagonalization
If an n×n matrix A has n distinct eigenvalues, then the corresponding eigenvectors are linearly independent and thus A is diagonalizable.

Proof:
Let λ1, λ2, …, λn be distinct eigenvalues with corresponding eigenvectors x1, x2, …, xn. Suppose, for contradiction, that the first m eigenvectors are linearly independent but the first m+1 are linearly dependent, i.e.,

$$x_{m+1} = c_1 x_1 + c_2 x_2 + \cdots + c_m x_m, \qquad (1)$$

where the $c_i$'s are not all zero. Multiplying both sides of Eq. (1) by A yields

$$Ax_{m+1} = c_1 Ax_1 + c_2 Ax_2 + \cdots + c_m Ax_m,$$
$$\lambda_{m+1} x_{m+1} = c_1 \lambda_1 x_1 + c_2 \lambda_2 x_2 + \cdots + c_m \lambda_m x_m. \qquad (2)$$

On the other hand, multiplying both sides of Eq. (1) by $\lambda_{m+1}$ yields

$$\lambda_{m+1} x_{m+1} = c_1 \lambda_{m+1} x_1 + c_2 \lambda_{m+1} x_2 + \cdots + c_m \lambda_{m+1} x_m. \qquad (3)$$

Now, subtracting Eq. (2) from Eq. (3) produces

$$c_1(\lambda_{m+1} - \lambda_1)x_1 + c_2(\lambda_{m+1} - \lambda_2)x_2 + \cdots + c_m(\lambda_{m+1} - \lambda_m)x_m = 0.$$

Since the first m eigenvectors are linearly independent, all coefficients of this equation must be zero, i.e.,

$$c_i(\lambda_{m+1} - \lambda_i) = 0, \qquad i = 1, \ldots, m.$$

Because all the eigenvalues are distinct, it follows that all $c_i$'s equal 0, which contradicts our assumption that $x_{m+1}$ can be expressed as a linear combination of the first m eigenvectors with not all coefficients zero. So the set of n eigenvectors is linearly independent given n distinct eigenvalues, and according to Thm. 3, we can conclude that A is diagonalizable.
Determining whether a matrix is diagonalizable:

$$A = \begin{pmatrix} 1 & -2 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & -3 \end{pmatrix}$$

Because A is a triangular matrix, its eigenvalues are $\lambda_1 = 1$, $\lambda_2 = 0$, $\lambda_3 = -3$. According to Thm. 4, because these three values are distinct, A is diagonalizable.
Symmetric Matrices and Orthogonal Diagonalization
A square matrix A is symmetric if it is equal to its transpose: $A = A^T$.
Examples of symmetric and nonsymmetric matrices:

$$A = \begin{pmatrix} 0 & 1 & -2 \\ 1 & 3 & 0 \\ -2 & 0 & 5 \end{pmatrix} \text{ (symmetric)}, \quad B = \begin{pmatrix} 4 & 3 \\ 3 & 1 \end{pmatrix} \text{ (symmetric)}, \quad C = \begin{pmatrix} 3 & 2 & 1 \\ 1 & -4 & 0 \\ 1 & 0 & 5 \end{pmatrix} \text{ (nonsymmetric)}$$
Thm. 5: Eigenvalues of symmetric matrices
If A is an n×n symmetric matrix, then the following properties are true:
a) A is diagonalizable (a symmetric matrix is guaranteed to have n linearly independent eigenvectors and thus to be diagonalizable).
b) All eigenvalues of A are real numbers.
c) If λ is an eigenvalue of A with multiplicity k, then λ has k linearly independent eigenvectors. That is, the eigenspace of λ has dimension k.
The above theorem is called the Real Spectral Theorem, and the set of eigenvalues of A is called the spectrum of A.
Symmetric Matrices and Orthogonal Diagonalization
Prove that a 2×2 symmetric matrix is diagonalizable.

$$A = \begin{pmatrix} a & c \\ c & b \end{pmatrix}$$

Proof:
Characteristic equation:

$$|\lambda I - A| = \begin{vmatrix} \lambda-a & -c \\ -c & \lambda-b \end{vmatrix} = \lambda^2 - (a+b)\lambda + ab - c^2 = 0$$

As a quadratic polynomial function in $\lambda$, this has a nonnegative discriminant:

$$(a+b)^2 - 4(ab - c^2) = a^2 + 2ab + b^2 - 4ab + 4c^2 = (a-b)^2 + 4c^2 \ge 0.$$

(1) If $(a-b)^2 + 4c^2 = 0$, then $a = b$ and $c = 0$, so

$$A = \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}$$

is itself a diagonal matrix.

(2) If $(a-b)^2 + 4c^2 > 0$, the characteristic polynomial of A has two distinct real roots, which implies that A has two distinct real eigenvalues. According to Thm. 4, A is diagonalizable.
Symmetric Matrices and Orthogonal Diagonalization
Orthogonal matrix: a square matrix P is called orthogonal if it is invertible and $P^{-1} = P^T$ (equivalently, $P^TP = PP^T = I$).

Thm. 6: Properties of orthogonal matrices
An n×n matrix P is orthogonal if and only if its column vectors form an orthonormal set.
Proof: Writing $P = [p_1\ p_2\ \cdots\ p_n]$, the (i, j) entry of $P^TP$ is $p_i^Tp_j$. Hence $P^TP = I$ if and only if $p_i \cdot p_j = 0$ for $i \ne j$ and $p_i \cdot p_i = 1$, i.e., if and only if the columns of P form an orthonormal set.

Example 9
Show that P is an orthogonal matrix, where

$$P = \begin{pmatrix} \frac{1}{3} & \frac{2}{3} & \frac{2}{3} \\ -\frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} & 0 \\ -\frac{2}{3\sqrt{5}} & -\frac{4}{3\sqrt{5}} & \frac{5}{3\sqrt{5}} \end{pmatrix}.$$

If P is an orthogonal matrix, then $P^{-1} = P^T$ and $PP^T = I$. Direct computation gives

$$PP^T = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = I.$$

Moreover, let $p_1 = (\tfrac{1}{3}, \tfrac{2}{3}, \tfrac{2}{3})'$, $p_2 = (-\tfrac{2}{\sqrt{5}}, \tfrac{1}{\sqrt{5}}, 0)'$, and $p_3 = (-\tfrac{2}{3\sqrt{5}}, -\tfrac{4}{3\sqrt{5}}, \tfrac{5}{3\sqrt{5}})'$. We can verify that $p_i \cdot p_j = 0$ for $i \ne j$ and $p_i \cdot p_i = 1$, so $\{p_1, p_2, p_3\}$ is an orthonormal set. (Thm. 6 can be illustrated by this example.)
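The orthogonality check $PP^T = P^TP = I$ is easy to run numerically. A NumPy sketch, assuming the matrix P as reconstructed in Example 9:

```python
import numpy as np

s5 = np.sqrt(5.0)
P = np.array([[ 1/3,       2/3,       2/3],
              [-2/s5,      1/s5,      0.0],
              [-2/(3*s5), -4/(3*s5),  5/(3*s5)]])

# Orthogonal <=> P P^T = P^T P = I <=> rows (and columns) are orthonormal
is_orthogonal = (np.allclose(P @ P.T, np.eye(3))
                 and np.allclose(P.T @ P, np.eye(3)))
```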
Symmetric Matrices and Orthogonal Diagonalization
Thm. 7: Properties of symmetric matrices
Let A be an n×n symmetric matrix. If $\lambda_1$ and $\lambda_2$ are distinct eigenvalues of A, then their corresponding eigenvectors $x_1$ and $x_2$ are orthogonal.
(Thm. 4 only states that eigenvectors corresponding to distinct eigenvalues are linearly independent.)

Proof:

$$\lambda_1(x_1 \cdot x_2) = (\lambda_1 x_1)^T x_2 = (Ax_1)^T x_2 = x_1^T A^T x_2 = x_1^T A x_2 \;\; (\text{because } A \text{ is symmetric}) = x_1^T(\lambda_2 x_2) = \lambda_2(x_1 \cdot x_2)$$

The above equation implies $(\lambda_1 - \lambda_2)(x_1 \cdot x_2) = 0$, and because $\lambda_1 \ne \lambda_2$, it follows that $x_1 \cdot x_2 = 0$. So $x_1$ and $x_2$ are orthogonal.

For distinct eigenvalues of a symmetric matrix, the corresponding eigenvectors are orthogonal and thus linearly independent of each other.
Note that there may be multiple eigenvectors $x_1$ and $x_2$ corresponding to $\lambda_1$ and $\lambda_2$.
Symmetric Matrices and Orthogonal Diagonalization
Thm. 8: Fundamental theorem of symmetric matrices
Let A be an n×n matrix. Then A is orthogonally diagonalizable and has real eigenvalues if and only if A is symmetric.
A matrix A is orthogonally diagonalizable if there exists an orthogonal matrix P such that P⁻¹AP = D is diagonal.

Proof:
(⇐) See the next two slides.
(⇒) A is orthogonally diagonalizable, so $D = P^{-1}AP = P^TAP$ is diagonal for some orthogonal P, and $A = PDP^T$. Then

$$A^T = (PDP^T)^T = (P^T)^T D^T P^T = PDP^T = A,$$

so A is symmetric.
Symmetric Matrices and Orthogonal Diagonalization
Let A be an n×n symmetric matrix.
(1) Find all eigenvalues of A and determine the multiplicity of each. According to Thm. 7, eigenvectors corresponding to distinct eigenvalues are orthogonal.
(2) For each eigenvalue of multiplicity 1, choose a unit eigenvector.
(3) For each eigenvalue of multiplicity k ≥ 2, find a set of k linearly independent eigenvectors. If this set {v1, v2, …, vk} is not orthonormal, apply the Gram-Schmidt orthonormalization process.

Symmetric Matrices and Orthogonal Diagonalization
(4) The composite of steps (2) and (3) produces an orthonormal set of n eigenvectors. Use these orthonormal and thus linearly independent eigenvectors to form the columns of P.
i. According to Thm. 6, the matrix P is orthogonal.
ii. Following the diagonalization process, D = P⁻¹AP is diagonal.
iii. Therefore, the matrix A is orthogonally diagonalizable.
Symmetric Matrices and Orthogonal Diagonalization
Determining whether a matrix is orthogonally diagonalizable: a matrix is orthogonally diagonalizable exactly when it is symmetric. For instance,

$$A_1 = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix} \quad \text{and} \quad A_2 = \begin{pmatrix} 5 & 2 & 1 \\ 2 & 1 & 8 \\ 1 & 8 & 0 \end{pmatrix}$$

are symmetric and hence orthogonally diagonalizable, while nonsymmetric matrices are not.
Example 10
Orthogonal diagonalization
Find an orthogonal matrix P that diagonalizes

$$A = \begin{pmatrix} 2 & 2 & -2 \\ 2 & -1 & 4 \\ -2 & 4 & -1 \end{pmatrix}.$$

Solution:
(1) $|\lambda I - A| = (\lambda + 6)(\lambda - 3)^2 = 0$, so $\lambda_1 = -6$ and $\lambda_2 = 3$ (which has multiplicity 2).

(2) For $\lambda_1 = -6$: $v_1 = (1, -2, 2)$, so $u_1 = \frac{v_1}{\|v_1\|} = (\tfrac{1}{3}, -\tfrac{2}{3}, \tfrac{2}{3})$.

(3) For $\lambda_2 = 3$: $v_2 = (2, 1, 0)$ and $v_3 = (-2, 0, 1)$ are linearly independent but not orthogonal. (Verify Thm. 7: $v_1 \cdot v_2 = v_1 \cdot v_3 = 0$.)
Gram-Schmidt process:

$$w_2 = v_2 = (2, 1, 0), \qquad w_3 = v_3 - \frac{v_3 \cdot w_2}{w_2 \cdot w_2}w_2 = \left(-\tfrac{2}{5}, \tfrac{4}{5}, 1\right)$$

$$u_2 = \frac{w_2}{\|w_2\|} = \left(\tfrac{2}{\sqrt{5}}, \tfrac{1}{\sqrt{5}}, 0\right), \qquad u_3 = \frac{w_3}{\|w_3\|} = \left(-\tfrac{2}{3\sqrt{5}}, \tfrac{4}{3\sqrt{5}}, \tfrac{5}{3\sqrt{5}}\right)$$

$$P = [u_1\ u_2\ u_3], \qquad P^{-1}AP = \begin{pmatrix} -6 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{pmatrix}$$

Verify that after the Gram-Schmidt orthonormalization process, i) $w_2$ and $w_3$ are still eigenvectors of A corresponding to the eigenvalue 3, and ii) $v_1 \cdot w_2 = v_1 \cdot w_3 = 0$.
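The whole Example 10 pipeline (Gram-Schmidt inside the repeated eigenspace, then $D = P^TAP$) can be sketched in NumPy, assuming the matrix and eigenvectors as reconstructed above:

```python
import numpy as np

A = np.array([[ 2.0,  2.0, -2.0],
              [ 2.0, -1.0,  4.0],
              [-2.0,  4.0, -1.0]])

v1 = np.array([ 1.0, -2.0, 2.0])   # eigenvector for lambda = -6
v2 = np.array([ 2.0,  1.0, 0.0])   # eigenvectors for lambda = 3
v3 = np.array([-2.0,  0.0, 1.0])

# Gram-Schmidt within the lambda = 3 eigenspace, then normalize everything
w2 = v2
w3 = v3 - (v3 @ w2) / (w2 @ w2) * w2
P = np.column_stack([w / np.linalg.norm(w) for w in (v1, w2, w3)])

D = P.T @ A @ P                    # P orthogonal, so P^{-1} = P^T
```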
Some Theorems on Eigenvalues and Eigenvectors
If λ is an eigenvalue of the matrix A with corresponding eigenvector x, then for certain functions g(A), g(λ) is an eigenvalue of g(A) with the same corresponding eigenvector x.
Special cases:
1. If λ is an eigenvalue of the matrix A, then cλ is an eigenvalue of the matrix cA for any nonzero scalar c.
Proof: $cAx = c\lambda x$;
x is the eigenvector corresponding to λ for the matrix A, and
x is the eigenvector corresponding to cλ for the matrix cA.
Some Theorems on Eigenvalues and Eigenvectors
2. If λ is an eigenvalue of the matrix A with corresponding eigenvector x, then cλ + k is an eigenvalue of the matrix (cA + kI) with the same corresponding eigenvector x.
Proof: $cAx + kx = c\lambda x + kx$, so $(cA + kI)x = (c\lambda + k)x$.
(This cannot be extended to A + B for arbitrary n×n matrices A and B.)
3. λ² is an eigenvalue of the matrix A² (and this extends to $A^k$).
Proof: $Ax = \lambda x$, so $A(Ax) = A(\lambda x)$, i.e., $A^2x = \lambda Ax = \lambda^2 x$.
Some Theorems on Eigenvalues and Eigenvectors
4. 1/λ is an eigenvalue of the matrix A⁻¹ (when A is nonsingular).
Proof: $Ax = \lambda x$, so $A^{-1}(Ax) = A^{-1}(\lambda x)$, i.e., $x = \lambda A^{-1}x$ and therefore $A^{-1}x = \lambda^{-1}x$.
5. Cases (1) and (2) can be used to find the eigenvalues and eigenvectors of a polynomial in A.
Example:

$$(A^3 + 4A^2 - 3A + 5I)x = A^3x + 4A^2x - 3Ax + 5x = \lambda^3 x + 4\lambda^2 x - 3\lambda x + 5x = (\lambda^3 + 4\lambda^2 - 3\lambda + 5)x,$$

so $\lambda^3 + 4\lambda^2 - 3\lambda + 5$ is an eigenvalue of $A^3 + 4A^2 - 3A + 5I$, and x is its corresponding eigenvector.
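This polynomial property can be checked numerically. A NumPy sketch, reusing the 2×2 matrix from Example 1 (eigenvalues −6 and 4):

```python
import numpy as np

A = np.array([[2.0,  4.0],
              [4.0, -4.0]])        # eigenvalues -6 and 4
I = np.eye(2)
B = A @ A @ A + 4 * (A @ A) - 3 * A + 5 * I

# Eigenvalues of the polynomial in A should be the polynomial of the eigenvalues
g = lambda lam: lam**3 + 4 * lam**2 - 3 * lam + 5
expected = np.sort([g(-6.0), g(4.0)])
actual = np.linalg.eigvalsh(B)     # B is symmetric; sorted ascending
```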
Some Theorems on Eigenvalues and Eigenvectors
Property (5) can be extended to infinite series.
☺ Suppose λ is an eigenvalue of A; then (1 − λ) is an eigenvalue of (I − A).
☺ If (I − A) is nonsingular, then $(1-\lambda)^{-1}$ is an eigenvalue of $(I-A)^{-1}$.
☺ If $-1 < \lambda < 1$, then $(1-\lambda)^{-1} = 1 + \lambda + \lambda^2 + \cdots$.
☺ If the eigenvalues of A all satisfy $-1 < \lambda < 1$, then $(I-A)^{-1} = I + A + A^2 + \cdots$.
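The series expansion of $(I-A)^{-1}$ can be verified for a small matrix whose eigenvalues lie in (−1, 1). A NumPy sketch with an illustrative matrix of my own choosing (not from the slides):

```python
import numpy as np

A = np.array([[0.2, 0.1],
              [0.1, 0.3]])         # both eigenvalues lie in (-1, 1)

inv_exact = np.linalg.inv(np.eye(2) - A)
# Partial sum I + A + A^2 + ... + A^50 of the Neumann series
series = sum(np.linalg.matrix_power(A, k) for k in range(51))
```

With a spectral radius well below 1, fifty terms are more than enough for the partial sum to agree with the exact inverse to numerical precision.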
Some Theorems on Eigenvalues and Eigenvectors
6. If the matrix A is n×n with eigenvalues λ1, …, λn, then
a. $|A| = \prod_i \lambda_i$
b. $\mathrm{tr}(A) = \sum_i \lambda_i$
Proof (3×3 case): the characteristic equation

$$\begin{vmatrix} a_{11}-\lambda & a_{12} & a_{13} \\ a_{21} & a_{22}-\lambda & a_{23} \\ a_{31} & a_{32} & a_{33}-\lambda \end{vmatrix} = 0$$

expands to

$$(-\lambda)^3 + (-\lambda)^2\,\mathrm{tr}_1(A) + (-\lambda)\,\mathrm{tr}_2(A) + |A| = 0,$$

where $\mathrm{tr}_i(A)$ denotes the sum of the principal minors of order i: $\mathrm{tr}_1(A) = \mathrm{tr}(A)$,

$$\mathrm{tr}_2(A) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} + \begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} + \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}, \qquad \mathrm{tr}_3(A) = |A|.$$

Some Theorems on Eigenvalues and Eigenvectors
Proof (continued):
In general,

$$(-\lambda)^n + (-\lambda)^{n-1}\,\mathrm{tr}_1(A) + (-\lambda)^{n-2}\,\mathrm{tr}_2(A) + \cdots + (-\lambda)\,\mathrm{tr}_{n-1}(A) + |A| = 0.$$

If λ1, …, λn are the eigenvalues, i.e., the roots of this equation, then

$$(\lambda_1 - \lambda)(\lambda_2 - \lambda)\cdots(\lambda_n - \lambda) = 0,$$

which expands to

$$(-\lambda)^n + (-\lambda)^{n-1}\sum_i \lambda_i + (-\lambda)^{n-2}\sum_{i \ne j}\lambda_i\lambda_j + \cdots + \prod_i \lambda_i = 0.$$

Comparing coefficients of the two expansions gives $\mathrm{tr}(A) = \sum_i \lambda_i$ and $|A| = \prod_i \lambda_i$.
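Both identities are easy to check numerically. A NumPy sketch, reusing the symmetric 3×3 matrix from the earlier eigenspace example:

```python
import numpy as np

A = np.array([[1.0, 3.0, 0.0],
              [3.0, 1.0, 0.0],
              [0.0, 0.0, -2.0]])   # eigenvalues 4, -2, -2

eigs = np.linalg.eigvals(A).real
det_ok = np.isclose(np.prod(eigs), np.linalg.det(A))   # |A| = product of eigenvalues
tr_ok = np.isclose(np.sum(eigs), np.trace(A))          # tr(A) = sum of eigenvalues
```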
Some Theorems on Eigenvalues and Eigenvectors
♪ If A and B are n×n, or A is n×p and B is p×n, then the (nonzero) eigenvalues of AB are the same as the eigenvalues of BA. If x is an eigenvector of AB, then Bx is an eigenvector of BA.
♪ If A is n×n, then:
1. If P (n×n) is nonsingular, then A and P⁻¹AP have the same eigenvalues.
2. If C (n×n) is an orthogonal matrix, then A and C′AC have the same eigenvalues.
Theorems for Symmetric Matrices
1. If A is symmetric, then:
a. Its eigenvalues λ1, …, λn are real.
b. Its eigenvectors x1, …, xn are orthogonal.
Proof (1a):
Suppose λ is a complex eigenvalue with corresponding eigenvector x. Write λ = a + ib with conjugate λ* = a − ib, and let x = {x_i} with conjugate x* = {x_i*}.
Then $Ax = \lambda x$ implies $x^{*\prime}Ax = x^{*\prime}\lambda x = \lambda\, x^{*\prime}x$,
and $Ax^* = \lambda^* x^*$ implies $x^{*\prime}Ax = (Ax^*)'x = (\lambda^* x^*)'x = \lambda^*\, x^{*\prime}x$.
Hence $\lambda^*\, x^{*\prime}x = \lambda\, x^{*\prime}x$, and since $x^{*\prime}x \ne 0$ is a sum of squares, $\lambda^* = \lambda$, i.e., a + ib = a − ib, which means b = 0.
Theorems for Symmetric Matrices
Proof (1b):
Suppose $\lambda_1 \ne \lambda_2$ with eigenvectors $x_1 \ne x_2$, A = A′, and $Ax_k = \lambda_k x_k$. Then

$$\lambda_1 x_2'x_1 = x_2'\lambda_1 x_1 = x_2'Ax_1 = x_1'A'x_2 = x_1'Ax_2 = x_1'\lambda_2 x_2 = \lambda_2 x_1'x_2.$$

Since $\lambda_1 \ne \lambda_2$, it follows that $x_1'x_2 = 0$ (orthogonal).

2. A can be expressed as A = CDC′ (the spectral decomposition), where D is a diagonal matrix whose diagonal entries are the eigenvalues $\lambda_i$, and C is the matrix whose columns are the eigenvectors $x_i$ corresponding to the eigenvalues $\lambda_i$.

Theorems for Symmetric Matrices
3. Positive (semi)definite matrices:
a. If A is positive definite, then $\lambda_i > 0$ for i = 1, …, n.
b. If A is positive semidefinite, then $\lambda_i \ge 0$ for i = 1, …, n, and the number of eigenvalues $\lambda_i > 0$ equals rank(A).
Note: if A is positive definite, then $A^{1/2}$ can be determined. Since $\lambda_i > 0$, the spectral decomposition gives $A = A^{1/2}A^{1/2} = (A^{1/2})^2$.

Theorems for Symmetric Matrices
4. If A is singular, idempotent, and symmetric, then A is positive semidefinite.
Proof: A = A′ and $A = A^2$, so $A = A^2 = AA = A'A$, which is positive semidefinite.
5. If A is symmetric and idempotent with rank(A) = r, then A has r eigenvalues equal to 1 and (n − r) eigenvalues equal to 0.
Proof:
$Ax = \lambda x$ and $A^2x = \lambda^2 x$. Since $A = A^2$,

$$A^2x = \lambda^2 x \Leftrightarrow Ax = \lambda^2 x \Leftrightarrow \lambda x = \lambda^2 x \Leftrightarrow (\lambda - \lambda^2)x = 0.$$

Since $x \ne 0$, it follows that $\lambda - \lambda^2 = 0$, so λ equals 0 or 1. By theorem (4), A is positive semidefinite, with r equal to the number of eigenvalues λ > 0.
Theorems for Symmetric Matrices
6. If A is idempotent and symmetric with rank r, then rank(A) = tr(A) = r.
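A standard statistical instance of a symmetric idempotent matrix is the regression hat matrix. The NumPy sketch below, with an illustrative design matrix of my own choosing, checks the 0/1-eigenvalue and rank = trace properties:

```python
import numpy as np

# Hat (projection) matrix of a regression design matrix:
# H = X (X'X)^{-1} X' is symmetric and idempotent with rank 2 here
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
H = X @ np.linalg.inv(X.T @ X) @ X.T

eigs = np.linalg.eigvalsh(H)       # symmetric: real, sorted ascending
```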
Theorem
If A (n×n) is an idempotent matrix, P is a nonsingular (n×n) matrix, and C is an orthogonal (n×n) matrix, then:
a. I − A is idempotent.
b. A(I − A) = 0 and (I − A)A = 0.
c. P⁻¹AP is idempotent.
d. C′AC is idempotent (if A is symmetric, then C′AC is symmetric idempotent).
If A is n×p with rank(A) = r, A⁻ is a generalized inverse of A, and (A′A)⁻ is a generalized inverse of (A′A), then A⁻A, AA⁻, and A(A′A)⁻A′ are idempotent.
Quadratic Forms
A quadratic form is a function

$$Q(x) = x'Ax$$

in k variables $x_1, \ldots, x_k$, where $x = (x_1, x_2, \ldots, x_k)'$ and A is a k×k symmetric matrix.
Note that a quadratic form has only squared terms and crossproducts, and so can be written

$$Q(x) = \sum_{i=1}^{k}\sum_{j=1}^{k} a_{ij}x_ix_j.$$

For example, suppose we have

$$A = \begin{pmatrix} 1 & 2 \\ 2 & -2 \end{pmatrix};$$

then

$$Q(x) = x'Ax = x_1^2 + 4x_1x_2 - 2x_2^2.$$
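Evaluating a quadratic form is a single matrix sandwich product. A NumPy sketch using the example matrix above, with an arbitrary test point of my own choosing:

```python
import numpy as np

A = np.array([[1.0,  2.0],
              [2.0, -2.0]])

def Q(x):
    """Quadratic form x'Ax = x1^2 + 4*x1*x2 - 2*x2^2."""
    return x @ A @ x

x = np.array([3.0, 1.0])
value = Q(x)                       # 9 + 12 - 2
```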
Spectral Decomposition and Quadratic Forms
Any k×k symmetric matrix A can be expressed in terms of its k eigenvalue-eigenvector pairs $(\lambda_i, e_i)$ as

$$A = \sum_{i=1}^{k} \lambda_i e_i e_i'.$$

This is referred to as the spectral decomposition of A.
Spectral Decomposition and Quadratic Forms
For our previous example on eigenvalues and eigenvectors we showed that

$$A = \begin{pmatrix} 2 & 4 \\ 4 & -4 \end{pmatrix}$$

has eigenvalues $\lambda_1 = -6$ and $\lambda_2 = 4$, with corresponding (normalized) eigenvectors

$$e_1 = \begin{pmatrix} 1/\sqrt{5} \\ -2/\sqrt{5} \end{pmatrix}, \qquad e_2 = \begin{pmatrix} 2/\sqrt{5} \\ 1/\sqrt{5} \end{pmatrix}.$$

Can we reconstruct A?
$$A = \sum_{i=1}^{k}\lambda_ie_ie_i' = -6\begin{pmatrix} 1/\sqrt{5} \\ -2/\sqrt{5} \end{pmatrix}\begin{pmatrix} 1/\sqrt{5} & -2/\sqrt{5} \end{pmatrix} + 4\begin{pmatrix} 2/\sqrt{5} \\ 1/\sqrt{5} \end{pmatrix}\begin{pmatrix} 2/\sqrt{5} & 1/\sqrt{5} \end{pmatrix}$$

$$= -6\begin{pmatrix} 1/5 & -2/5 \\ -2/5 & 4/5 \end{pmatrix} + 4\begin{pmatrix} 4/5 & 2/5 \\ 2/5 & 1/5 \end{pmatrix} = \begin{pmatrix} 2 & 4 \\ 4 & -4 \end{pmatrix} = A$$
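The reconstruction above can be sketched with outer products in NumPy (not part of the original slides):

```python
import numpy as np

A = np.array([[2.0,  4.0],
              [4.0, -4.0]])
s5 = np.sqrt(5.0)
e1 = np.array([1/s5, -2/s5])       # normalized eigenvector for lambda = -6
e2 = np.array([2/s5,  1/s5])       # normalized eigenvector for lambda = 4

# Spectral decomposition: A = sum_i lambda_i e_i e_i'
A_rebuilt = -6 * np.outer(e1, e1) + 4 * np.outer(e2, e2)
```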
Spectral Decomposition and Quadratic Forms
Spectral decomposition can be used to develop/illustrate many statistical results/concepts. We start with a few basic concepts:
- Nonnegative Definite Matrix: when a k×k matrix A is such that

$$0 \le x'Ax \quad \text{for all } x' = [x_1, x_2, \ldots, x_k],$$

the matrix A and the quadratic form are said to be nonnegative definite.
Spectral Decomposition and Quadratic Forms
- Positive Definite Matrix: when a k×k matrix A is such that

$$0 < x'Ax \quad \text{for all } x' = [x_1, x_2, \ldots, x_k] \ne [0, 0, \ldots, 0],$$

the matrix A and the quadratic form are said to be positive definite.
Spectral Decomposition and Quadratic Forms
Example - Show that the following quadratic form is positive definite:

$$6x_1^2 + 4x_2^2 - 4\sqrt{2}\,x_1x_2$$

We first rewrite the quadratic form in matrix notation:

$$Q(x) = \begin{pmatrix} x_1 & x_2 \end{pmatrix}\begin{pmatrix} 6 & -2\sqrt{2} \\ -2\sqrt{2} & 4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x'Ax$$

Now identify the eigenvalues of the resulting matrix A (they are $\lambda_1 = 2$ and $\lambda_2 = 8$):

$$|A - \lambda I| = \begin{vmatrix} 6-\lambda & -2\sqrt{2} \\ -2\sqrt{2} & 4-\lambda \end{vmatrix} = \lambda^2 - 10\lambda + 16 = (\lambda - 2)(\lambda - 8) = 0.$$
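A NumPy sketch confirming the two eigenvalues of this quadratic-form matrix:

```python
import numpy as np

r2 = np.sqrt(2.0)
A = np.array([[ 6.0,    -2 * r2],
              [-2 * r2,  4.0]])

eigs = np.linalg.eigvalsh(A)       # symmetric matrix: real, sorted ascending
```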
Spectral Decomposition and
Quadratic Forms
Next, using spectral decomposition we can write:

$$A = \sum_{i=1}^{k}\lambda_ie_ie_i' = \lambda_1e_1e_1' + \lambda_2e_2e_2' = 2e_1e_1' + 8e_2e_2',$$

where again the vectors $e_i$ are the normalized and orthogonal eigenvectors associated with the eigenvalues $\lambda_1 = 2$ and $\lambda_2 = 8$.
Spectral Decomposition and Quadratic Forms

$$A = \sum_{i=1}^{k}\lambda_ie_ie_i' = 2\begin{pmatrix} 1/\sqrt{3} \\ \sqrt{2}/\sqrt{3} \end{pmatrix}\begin{pmatrix} 1/\sqrt{3} & \sqrt{2}/\sqrt{3} \end{pmatrix} + 8\begin{pmatrix} \sqrt{2}/\sqrt{3} \\ -1/\sqrt{3} \end{pmatrix}\begin{pmatrix} \sqrt{2}/\sqrt{3} & -1/\sqrt{3} \end{pmatrix}$$

$$= 2\begin{pmatrix} 1/3 & \sqrt{2}/3 \\ \sqrt{2}/3 & 2/3 \end{pmatrix} + 8\begin{pmatrix} 2/3 & -\sqrt{2}/3 \\ -\sqrt{2}/3 & 1/3 \end{pmatrix} = \begin{pmatrix} 6 & -2\sqrt{2} \\ -2\sqrt{2} & 4 \end{pmatrix} = A$$

Sidebar - Note again that we can recreate the original matrix A from the spectral decomposition.
Spectral Decomposition and Quadratic Forms
Because $\lambda_1$ and $\lambda_2$ are scalars, premultiplication and postmultiplication by x′ and x, respectively, yield:

$$x'Ax = 2x'e_1e_1'x + 8x'e_2e_2'x = 2y_1^2 + 8y_2^2 \ge 0,$$

where $y_1 = x'e_1 = e_1'x$ and $y_2 = x'e_2 = e_2'x$.
At this point it is obvious that x′Ax is at least nonnegative definite!
Spectral Decomposition and Quadratic Forms
We now show that x′Ax is positive definite, i.e.,

$$x'Ax = 2y_1^2 + 8y_2^2 > 0 \quad \text{for } x \ne 0.$$

From our definitions of $y_1$ and $y_2$ we have

$$y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} e_1' \\ e_2' \end{pmatrix}x = Ex.$$

Since E is an orthogonal matrix, $E^{-1}$ exists and equals E′. Thus

$$x = E'y.$$

But $x \ne 0$ then implies $y \ne 0$, so $2y_1^2 + 8y_2^2 > 0$.
At this point it is obvious that x′Ax is positive definite!
Spectral Decomposition and Quadratic Forms
This suggests rules for determining if a k×k symmetric matrix A (or equivalently, its quadratic form x′Ax) is nonnegative definite or positive definite:
- A is a nonnegative definite matrix iff $\lambda_i \ge 0$, i = 1, …, k.
- A is a positive definite matrix iff $\lambda_i > 0$, i = 1, …, k.
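These rules translate directly into an eigenvalue test. A NumPy sketch (the helper name and tolerance are my own choices, not from the slides):

```python
import numpy as np

def is_positive_definite(A, tol=1e-12):
    """A symmetric matrix is positive definite iff all its eigenvalues are > 0."""
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

r2 = np.sqrt(2.0)
A = np.array([[ 6.0,    -2 * r2],
              [-2 * r2,  4.0]])   # eigenvalues 2 and 8, so positive definite
```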
Spectral Decomposition and Quadratic Forms
Square Root Matrices
Because spectral decomposition allows us to express the inverse of a square matrix in terms of its eigenvalues and eigenvectors, it enables us to conveniently create a square root matrix.
Let A be a p×p positive definite matrix with the spectral decomposition