
STK 500. Pengantar Teori Statistika. Eigenvalues and Eigenvectors


Academic year: 2021


(1)

Eigenvalues and

Eigenvectors

Pengantar Teori Statistika

STK 500

(2)

Eigenvalues and Eigenvectors

Example 1: if we have a matrix A:

$$A = \begin{bmatrix} 2 & 4 \\ 4 & -4 \end{bmatrix}$$

then

$$|A - \lambda I| = \begin{vmatrix} 2-\lambda & 4 \\ 4 & -4-\lambda \end{vmatrix} = (2-\lambda)(-4-\lambda) - 16 = \lambda^2 + 2\lambda - 24 = 0$$

which implies there are two roots or eigenvalues: λ = −6 and λ = 4.
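As a quick numerical check (a NumPy sketch, not part of the original slides), `np.linalg.eigvals` recovers the two roots of the characteristic polynomial:

```python
import numpy as np

# Numerical check of Example 1: the eigenvalues of A should be the
# roots of lambda^2 + 2*lambda - 24 = 0, namely -6 and 4.
A = np.array([[2.0, 4.0],
              [4.0, -4.0]])
eigvals = np.sort(np.linalg.eigvals(A).real)
print(eigvals)  # eigenvalues: -6 and 4
```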

(3)

Eigenvalues and Eigenvectors

For a square matrix A, let I be a

conformable identity matrix. Then the

scalars λ satisfying the polynomial equation
|A − λI| = 0 are called the eigenvalues (or
characteristic roots) of A.

The equation |A − λI| = 0 is called the
characteristic equation or the
determinantal equation.

(4)

Eigenvalues and Eigenvectors

For a matrix A with eigenvalue λ, a
nonzero vector x such that Ax = λx is called
an eigenvector (or characteristic vector) of A
associated with λ.

(5)

Example 1

With the matrix A:

$$A = \begin{bmatrix} 2 & 4 \\ 4 & -4 \end{bmatrix}$$

and eigenvalues λ₁ = −6 and λ₂ = 4, the eigenvector for λ₁ = −6 satisfies Ax = λx:

$$\begin{bmatrix} 2 & 4 \\ 4 & -4 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = -6\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
\;\Rightarrow\; 8x_1 + 4x_2 = 0 \;\text{ and }\; 4x_1 + 2x_2 = 0$$

Fixing x₁ = 1 yields a solution for x₂ of −2.

(6)

Example 1

Note that eigenvectors are usually normalized
so they have unit length, i.e., e = x/√(x′x).

Thus our arbitrary choice to fix x₁ = 1 has no
impact on the eigenvector associated with λ = −6.

For our previous example we have:

$$\mathbf{e} = \frac{\mathbf{x}}{\sqrt{\mathbf{x}'\mathbf{x}}} = \frac{1}{\sqrt{5}}\begin{bmatrix} 1 \\ -2 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{5} \\ -2/\sqrt{5} \end{bmatrix}$$

(7)

Example 1

For matrix A and eigenvalue λ = 4, we have Ax = λx:

$$\begin{bmatrix} 2 & 4 \\ 4 & -4 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 4\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
\;\Rightarrow\; -2x_1 + 4x_2 = 0 \;\text{ and }\; 4x_1 - 8x_2 = 0$$

We again arbitrarily fix x₁ = 1, which now
yields a solution for x₂ of 1/2.

(8)

Normalization of Eigenvectors

Normalization to unit length yields

$$\mathbf{e} = \frac{\mathbf{x}}{\sqrt{\mathbf{x}'\mathbf{x}}} = \frac{1}{\sqrt{5/4}}\begin{bmatrix} 1 \\ 1/2 \end{bmatrix} = \begin{bmatrix} 2/\sqrt{5} \\ 1/\sqrt{5} \end{bmatrix}$$

Again our arbitrary choice to fix x₁ = 1 has no
impact on the eigenvector associated with λ = 4.

(9)

Example 2

Find the eigenvalues and corresponding
eigenvectors for the matrix A. What is the
dimension of the eigenspace of each eigenvalue?

$$A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 1 & 2 \end{bmatrix}$$

Characteristic equation:

$$|\lambda I - A| = \begin{vmatrix} \lambda-2 & 0 & 0 \\ 0 & \lambda-2 & 0 \\ 0 & -1 & \lambda-2 \end{vmatrix} = (\lambda-2)^3 = 0$$

Eigenvalue: λ = 2.

(10)

The eigenspace of λ = 2:

$$(\lambda I - A)\mathbf{x} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & -1 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \mathbf{0}
\;\Rightarrow\; \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} s \\ 0 \\ t \end{bmatrix} = s\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + t\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$

The eigenspace of A corresponding to λ = 2 is {s(1, 0, 0)′ + t(0, 0, 1)′ : s, t ∈ R}.

Thus, the dimension of its eigenspace is 2.
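The dimension can also be checked numerically. A short NumPy sketch (not part of the slides): the eigenspace of λ is the null space of A − λI, so its dimension is n minus the rank of that matrix:

```python
import numpy as np

# The eigenspace of lambda = 2 is the null space of (A - 2I);
# its dimension is n - rank(A - 2I).
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 1.0, 2.0]])
lam = 2.0
dim = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(3))
print(dim)  # eigenspace dimension: 2
```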

(11)

(1) If an eigenvalue λ₁ occurs as a multiple root (k
times) of the characteristic polynomial, then λ₁
has multiplicity k.

(2) The multiplicity of an eigenvalue is greater than
or equal to the dimension of its eigenspace.

(12)

Example 3

Find the eigenvalues and corresponding eigenspaces for

$$A = \begin{bmatrix} 1 & 3 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & -2 \end{bmatrix},\qquad
|\lambda I - A| = (\lambda+2)^2(\lambda-4) = 0 \;\Rightarrow\; \text{eigenvalues } \lambda_1 = 4,\; \lambda_2 = -2$$

The eigenspaces for these two eigenvalues are as follows:

Basis for λ₁ = 4: B₁ = {(1, 1, 0)}
Basis for λ₂ = −2: B₂ = {(1, −1, 0), (0, 0, 1)}

(13)

Find the eigenvalues of the matrix A and find a basis for each of the corresponding eigenspaces

$$A = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 5 & -10 \\ 1 & 0 & 2 & 0 \\ 1 & 0 & 0 & 3 \end{bmatrix}$$

Characteristic equation:

$$|\lambda I - A| = (\lambda-1)^2(\lambda-2)(\lambda-3) = 0$$

Eigenvalues: λ₁ = 1, λ₂ = 2, λ₃ = 3.

According to the note on the previous slide, the dimension of the eigenspace of λ₁ = 1 is at most 2.

For λ₂ = 2 and λ₃ = 3, the dimensions of their eigenspaces are at most 1.

(14)

(1) For λ₁ = 1:

$$(\lambda_1 I - A)\mathbf{x} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -5 & 10 \\ -1 & 0 & -1 & 0 \\ -1 & 0 & 0 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \mathbf{0}
\;\Rightarrow\; \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} -2t \\ s \\ 2t \\ t \end{bmatrix} = s\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} + t\begin{bmatrix} -2 \\ 0 \\ 2 \\ 1 \end{bmatrix}$$

so {(0, 1, 0, 0), (−2, 0, 2, 1)} is a basis for the eigenspace
corresponding to λ₁ = 1.

The dimension of the eigenspace of λ₁ = 1 is 2.

(15)

(2) For λ₂ = 2:

$$(\lambda_2 I - A)\mathbf{x} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & -5 & 10 \\ -1 & 0 & 0 & 0 \\ -1 & 0 & 0 & -1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \mathbf{0}
\;\Rightarrow\; \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 5t \\ t \\ 0 \end{bmatrix} = t\begin{bmatrix} 0 \\ 5 \\ 1 \\ 0 \end{bmatrix}$$

so {(0, 5, 1, 0)} is a basis for the eigenspace
corresponding to λ₂ = 2.

The dimension of the eigenspace of λ₂ = 2 is 1.

(16)

(3) For λ₃ = 3:

$$(\lambda_3 I - A)\mathbf{x} = \begin{bmatrix} 2 & 0 & 0 & 0 \\ 0 & 2 & -5 & 10 \\ -1 & 0 & 1 & 0 \\ -1 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \mathbf{0}
\;\Rightarrow\; \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 \\ -5t \\ 0 \\ t \end{bmatrix} = t\begin{bmatrix} 0 \\ -5 \\ 0 \\ 1 \end{bmatrix}$$

so {(0, −5, 0, 1)} is a basis for the eigenspace
corresponding to λ₃ = 3.

The dimension of the eigenspace of λ₃ = 3 is 1.

(17)

Eigenvalues and Eigenvectors

Theorem 1. Eigenvalues for triangular matrices

If A is an n × n triangular matrix, then its eigenvalues are
the entries on its main diagonal.

Finding eigenvalues for triangular and diagonal matrices:

$$\text{(a)}\; A = \begin{bmatrix} 2 & 0 & 0 \\ -1 & 1 & 0 \\ 5 & 3 & -3 \end{bmatrix},\qquad
|\lambda I - A| = (\lambda-2)(\lambda-1)(\lambda+3) = 0
\;\Rightarrow\; \lambda_1 = 2,\; \lambda_2 = 1,\; \lambda_3 = -3$$

(18)

Eigenvalues and Eigenvectors

Finding eigenvalues for triangular and diagonal matrices:

$$\text{(b)}\; A = \begin{bmatrix} -1 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -4 & 0 \\ 0 & 0 & 0 & 0 & 3 \end{bmatrix}
\;\Rightarrow\; \lambda_1 = -1,\; \lambda_2 = 2,\; \lambda_3 = 0,\; \lambda_4 = -4,\; \lambda_5 = 3$$

(19)

Diagonalization

Definition 1: A square matrix A is called
diagonalizable if there exists an invertible matrix
P such that P⁻¹AP is a diagonal matrix (i.e., P
diagonalizes A).

Definition 2: Square matrices A and B are called
similar if there exists an invertible matrix P such
that B = P⁻¹AP.

(20)

Diagonalization

Theorem 2: Similar matrices have the same eigenvalues

If A and B are similar n × n matrices, then they have the
same eigenvalues.

A and B are similar ⟺ B = P⁻¹AP. Then

$$|\lambda I - B| = |\lambda I - P^{-1}AP| = |P^{-1}\lambda IP - P^{-1}AP| = |P^{-1}(\lambda I - A)P|
= |P^{-1}|\,|\lambda I - A|\,|P| = |\lambda I - A|$$

Since A and B have the same characteristic equation, they have the same eigenvalues.

For any diagonal matrix of the form D = λI, P⁻¹DP = D.

(21)

Example 5

Eigenvalue problems and diagonalization

$$A = \begin{bmatrix} 1 & 3 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & -2 \end{bmatrix}$$

Characteristic equation:

$$|\lambda I - A| = (\lambda-4)(\lambda+2)^2 = 0$$

The eigenvalues: λ₁ = 4, λ₂ = −2, λ₃ = −2.

(1) For λ = 4, the eigenvector is p₁ = (1, 1, 0)′.

(22)

(2) For λ = −2, the eigenvectors are p₂ = (1, −1, 0)′ and p₃ = (0, 0, 1)′.

If

$$P = [\,\mathbf{p}_1\; \mathbf{p}_2\; \mathbf{p}_3\,] = \begin{bmatrix} 1 & 1 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix},\quad\text{then}\quad
P^{-1}AP = \begin{bmatrix} 4 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -2 \end{bmatrix}$$

whereas with the reordered P = [p₂ p₁ p₃] we obtain P⁻¹AP = diag(−2, 4, −2).

The above example illustrates Thm. 2 numerically, since the eigenvalues of both A and P⁻¹AP are the same: 4, −2, and −2.

The reason why the matrix P is constructed with eigenvectors of A is demonstrated in Thm. 3 on the next slide.
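The diagonalization in Example 5 can be verified directly. A NumPy sketch (not part of the slides):

```python
import numpy as np

# Verify Example 5: with eigenvector columns p1=(1,1,0), p2=(1,-1,0),
# p3=(0,0,1), P^{-1} A P should be diag(4, -2, -2).
A = np.array([[1.0, 3.0, 0.0],
              [3.0, 1.0, 0.0],
              [0.0, 0.0, -2.0]])
P = np.array([[1.0, 1.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0, 0.0, 1.0]])
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # diagonal matrix with entries 4, -2, -2
```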

(23)

Diagonalization

Theorem 3: An n × n matrix A is diagonalizable if and only if it
has n linearly independent eigenvectors.

(⇒) Since A is diagonalizable, there exists an invertible P s.t.
D = P⁻¹AP is diagonal. Let P = [p₁ p₂ ⋯ pₙ] and D = diag(λ₁, λ₂, …, λₙ); then

$$PD = [\,\mathbf{p}_1\; \mathbf{p}_2\; \cdots\; \mathbf{p}_n\,]\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} = [\,\lambda_1\mathbf{p}_1\; \lambda_2\mathbf{p}_2\; \cdots\; \lambda_n\mathbf{p}_n\,]$$

Note that if there are n linearly independent eigenvectors, it does not imply that there are n distinct eigenvalues. It is possible to have only one eigenvalue with multiplicity n, and n linearly independent eigenvectors for this eigenvalue.

However, if there are n distinct eigenvalues, then there are n linearly independent eigenvectors and thus A must be diagonalizable.

(24)

$$AP = PD\;(\text{since } D = P^{-1}AP)\;\Rightarrow\;[\,A\mathbf{p}_1\; A\mathbf{p}_2\; \cdots\; A\mathbf{p}_n\,] = [\,\lambda_1\mathbf{p}_1\; \lambda_2\mathbf{p}_2\; \cdots\; \lambda_n\mathbf{p}_n\,]
\;\Rightarrow\; A\mathbf{p}_i = \lambda_i\mathbf{p}_i,\; i = 1, 2, \ldots, n$$

(The above equations imply the column vectors pᵢ of P are eigenvectors of A, and the diagonal entries λᵢ in D are eigenvalues of A.)

Because A is diagonalizable, P is invertible, so the columns of P, i.e., p₁, p₂, …, pₙ, are linearly independent (see p. 4.101 in the lecture note or p. 246 in the text book).

Thus, A has n linearly independent eigenvectors.

(⇐) Since A has n linearly independent eigenvectors p₁, p₂, …, pₙ with corresponding eigenvalues λ₁, λ₂, …, λₙ, then Apᵢ = λᵢpᵢ, i = 1, 2, …, n. Let P = [p₁ p₂ ⋯ pₙ].

Diagonalization

(25)

$$AP = A[\,\mathbf{p}_1\; \mathbf{p}_2\; \cdots\; \mathbf{p}_n\,] = [\,A\mathbf{p}_1\; A\mathbf{p}_2\; \cdots\; A\mathbf{p}_n\,] = [\,\lambda_1\mathbf{p}_1\; \lambda_2\mathbf{p}_2\; \cdots\; \lambda_n\mathbf{p}_n\,]
= [\,\mathbf{p}_1\; \mathbf{p}_2\; \cdots\; \mathbf{p}_n\,]\begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix} = PD$$

Since p₁, p₂, …, pₙ are linearly independent, P is invertible (see p. 4.101 in the lecture note or p. 246 in the text book), so AP = PD ⇒ P⁻¹AP = D, and A is diagonalizable (according to the definition of a diagonalizable matrix).

Note that the pᵢ's are linearly independent eigenvectors and the diagonal entries in the resulting diagonalized D are eigenvalues of A.

(26)

Example 6

A matrix that is not diagonalizable

Show that the following matrix is not diagonalizable:

$$A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$$

Characteristic equation:

$$|\lambda I - A| = \begin{vmatrix} \lambda-1 & -2 \\ 0 & \lambda-1 \end{vmatrix} = (\lambda-1)^2 = 0$$

The eigenvalue is λ = 1; solving (λI − A)x = 0 for eigenvectors:

$$\lambda I - A = I - A = \begin{bmatrix} 0 & -2 \\ 0 & 0 \end{bmatrix}\;\Rightarrow\;\text{eigenvector } \mathbf{p} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

Since A does not have two linearly independent eigenvectors, A is not diagonalizable.
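The deficiency can be quantified: the geometric multiplicity of λ = 1 is the nullity of A − I. A NumPy sketch (not part of the slides):

```python
import numpy as np

# A = [[1,2],[0,1]] has the single eigenvalue 1 with algebraic
# multiplicity 2 but geometric multiplicity 2 - rank(A - I) = 1,
# so A cannot have two linearly independent eigenvectors.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
geo_mult = 2 - np.linalg.matrix_rank(A - np.eye(2))
print(geo_mult)  # 1 -> A is not diagonalizable
```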

(27)

Diagonalization

Steps for diagonalizing an n × n square matrix:

Step 1: Find n linearly independent eigenvectors
p₁, p₂, …, pₙ for A with corresponding eigenvalues
λ₁, λ₂, …, λₙ.

Step 2: Let P = [p₁ p₂ ⋯ pₙ].

Step 3:

$$P^{-1}AP = D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}\quad\text{where } A\mathbf{p}_i = \lambda_i\mathbf{p}_i,\; i = 1, 2, \ldots, n$$

(28)

Example 6

Diagonalizing a matrix: find a matrix P such that P⁻¹AP is diagonal, where

$$A = \begin{bmatrix} 1 & -1 & -1 \\ 1 & 3 & 1 \\ -3 & 1 & -1 \end{bmatrix}$$

Characteristic equation:

$$|\lambda I - A| = (\lambda-2)(\lambda+2)(\lambda-3) = 0$$

The eigenvalues: λ₁ = 2, λ₂ = −2, λ₃ = 3.

(29)

For λ₁ = 2:

$$\lambda_1 I - A = \begin{bmatrix} 1 & 1 & 1 \\ -1 & -1 & -1 \\ 3 & -1 & 3 \end{bmatrix}\xrightarrow{\text{G.-J. E.}}\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\;\Rightarrow\;\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -t \\ 0 \\ t \end{bmatrix}
\;\Rightarrow\;\text{eigenvector } \mathbf{p}_1 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$$

For λ₂ = −2:

$$\lambda_2 I - A = \begin{bmatrix} -3 & 1 & 1 \\ -1 & -5 & -1 \\ 3 & -1 & -1 \end{bmatrix}\xrightarrow{\text{G.-J. E.}}\begin{bmatrix} 1 & 0 & -\tfrac{1}{4} \\ 0 & 1 & \tfrac{1}{4} \\ 0 & 0 & 0 \end{bmatrix}
\;\Rightarrow\;\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{4}t \\ -\tfrac{1}{4}t \\ t \end{bmatrix}
\;\Rightarrow\;\text{eigenvector } \mathbf{p}_2 = \begin{bmatrix} 1 \\ -1 \\ 4 \end{bmatrix}$$

Example 6

(30)

For λ₃ = 3:

$$\lambda_3 I - A = \begin{bmatrix} 2 & 1 & 1 \\ -1 & 0 & -1 \\ 3 & -1 & 4 \end{bmatrix}\xrightarrow{\text{G.-J. E.}}\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}
\;\Rightarrow\;\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -t \\ t \\ t \end{bmatrix}
\;\Rightarrow\;\text{eigenvector } \mathbf{p}_3 = \begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix}$$

$$P = [\,\mathbf{p}_1\; \mathbf{p}_2\; \mathbf{p}_3\,] = \begin{bmatrix} -1 & 1 & -1 \\ 0 & -1 & 1 \\ 1 & 4 & 1 \end{bmatrix}\quad\text{and it follows that}\quad
P^{-1}AP = \begin{bmatrix} 2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$

Example 6

(31)

Note: a quick way to calculate Aᵏ based on the
diagonalization technique:

$$\text{(1)}\; D = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}\;\Rightarrow\; D^k = \begin{bmatrix} \lambda_1^k & & \\ & \ddots & \\ & & \lambda_n^k \end{bmatrix}$$

$$\text{(2)}\; D = P^{-1}AP \;\Rightarrow\; D^k = \underbrace{(P^{-1}AP)(P^{-1}AP)\cdots(P^{-1}AP)}_{k\text{ times}} = P^{-1}A^kP
\;\Rightarrow\; A^k = PD^kP^{-1}$$

Diagonalization
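The shortcut above is cheap because powering D is elementwise. A NumPy sketch (not part of the slides), checked against direct matrix powering:

```python
import numpy as np

# A^k = P D^k P^{-1}: only the diagonal factor is raised to the power k.
A = np.array([[1.0, 3.0, 0.0],
              [3.0, 1.0, 0.0],
              [0.0, 0.0, -2.0]])
vals, P = np.linalg.eig(A)
k = 5
Ak = P @ np.diag(vals ** k) @ np.linalg.inv(P)
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```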

(32)

Diagonalization

Theorem 4: Sufficient condition for diagonalization

If an n × n matrix A has n distinct eigenvalues, then the
corresponding eigenvectors are linearly independent and thus A is diagonalizable.

Proof:

Let λ₁, λ₂, …, λₙ be distinct eigenvalues with corresponding
eigenvectors x₁, x₂, …, xₙ. Suppose, for contradiction, that the first
m eigenvectors are linearly independent but the first m+1
are linearly dependent, i.e.,

$$\mathbf{x}_{m+1} = c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + \cdots + c_m\mathbf{x}_m \quad (1)$$

where the cᵢ's are not all zero. Multiplying both sides of Eq. (1) by A
yields

$$A\mathbf{x}_{m+1} = c_1A\mathbf{x}_1 + c_2A\mathbf{x}_2 + \cdots + c_mA\mathbf{x}_m
\;\Rightarrow\; \lambda_{m+1}\mathbf{x}_{m+1} = c_1\lambda_1\mathbf{x}_1 + c_2\lambda_2\mathbf{x}_2 + \cdots + c_m\lambda_m\mathbf{x}_m \quad (2)$$

(33)

On the other hand, multiplying both sides of Eq. (1) by λₘ₊₁ yields

$$\lambda_{m+1}\mathbf{x}_{m+1} = c_1\lambda_{m+1}\mathbf{x}_1 + c_2\lambda_{m+1}\mathbf{x}_2 + \cdots + c_m\lambda_{m+1}\mathbf{x}_m \quad (3)$$

Now, subtracting Eq. (2) from Eq. (3) produces

$$c_1(\lambda_{m+1} - \lambda_1)\mathbf{x}_1 + c_2(\lambda_{m+1} - \lambda_2)\mathbf{x}_2 + \cdots + c_m(\lambda_{m+1} - \lambda_m)\mathbf{x}_m = \mathbf{0}$$

Since the first m eigenvectors are linearly independent, all coefficients of this equation must be zero, i.e.,

$$c_1(\lambda_{m+1} - \lambda_1) = c_2(\lambda_{m+1} - \lambda_2) = \cdots = c_m(\lambda_{m+1} - \lambda_m) = 0$$

Because all the eigenvalues are distinct, it follows that all cᵢ's equal 0, which contradicts our assumption that xₘ₊₁ can be expressed as a linear combination of the first m eigenvectors. So, the set of n eigenvectors is linearly independent given n distinct eigenvalues, and according to Thm. 3, we can conclude that A is diagonalizable.

(34)

Determining whether a matrix is diagonalizable

$$A = \begin{bmatrix} 1 & -2 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & -3 \end{bmatrix}$$

Because A is a triangular matrix, its eigenvalues are
λ₁ = 1, λ₂ = 0, λ₃ = −3.

According to Thm. 4, because these three values
are distinct, A is diagonalizable.

(35)

Symmetric Matrices and Orthogonal Diagonalization

A square matrix A is symmetric if it is equal to its
transpose: A = Aᵀ.

Examples of symmetric and nonsymmetric matrices:

$$A = \begin{bmatrix} 0 & 1 & -2 \\ 1 & 3 & 0 \\ -2 & 0 & 5 \end{bmatrix}\;(\text{symmetric}),\quad
B = \begin{bmatrix} 4 & 3 \\ 3 & 1 \end{bmatrix}\;(\text{symmetric}),\quad
C = \begin{bmatrix} 3 & 2 & 1 \\ 1 & -4 & 0 \\ 1 & 0 & 5 \end{bmatrix}\;(\text{nonsymmetric})$$

(36)

Thm 5: Eigenvalues of symmetric matrices

If A is an n × n symmetric matrix, then the following properties
are true.

a) A is diagonalizable (symmetric matrices are guaranteed to have n linearly independent eigenvectors and thus to be diagonalizable).

b) All eigenvalues of A are real numbers.

c) If λ is an eigenvalue of A with multiplicity k, then λ has k
linearly independent eigenvectors. That is, the eigenspace of λ
has dimension k.

The above theorem is called the Real Spectral Theorem, and the set of eigenvalues of A is called the spectrum of A.

Symmetric Matrices and Orthogonal Diagonalization

(37)

Prove that a 2 × 2 symmetric matrix is diagonalizable.

$$A = \begin{bmatrix} a & c \\ c & b \end{bmatrix}$$

Proof:

Characteristic equation:

$$|\lambda I - A| = \begin{vmatrix} \lambda-a & -c \\ -c & \lambda-b \end{vmatrix} = \lambda^2 - (a+b)\lambda + ab - c^2 = 0$$

As a function of λ, this quadratic polynomial
has a nonnegative discriminant:

$$(a+b)^2 - 4(ab - c^2) = a^2 + 2ab + b^2 - 4ab + 4c^2 = (a-b)^2 + 4c^2 \ge 0$$

(38)

(1) If (a−b)² + 4c² = 0, then a = b and c = 0, so

$$A = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix}$$

is itself a diagonal matrix.

(2) If (a−b)² + 4c² > 0, the characteristic polynomial of A has two distinct real
roots, which implies that A has two distinct real
eigenvalues. According to Thm. 4, A is diagonalizable.

(39)

Symmetric Matrices and Orthogonal Diagonalization

Orthogonal matrix: a square matrix P is called orthogonal if it is invertible and
P⁻¹ = Pᵀ (or PPᵀ = PᵀP = I).

Thm. 6: Properties of orthogonal matrices

An n × n matrix P is orthogonal if and only if its column
vectors form an orthonormal set.

Proof: Suppose the column vectors of P form an orthonormal set, i.e.,
P = [p₁ p₂ ⋯ pₙ], where pᵢ·pⱼ = 0 for i ≠ j and pᵢ·pᵢ = 1. Then

$$P^TP = \begin{bmatrix} \mathbf{p}_1^T\mathbf{p}_1 & \mathbf{p}_1^T\mathbf{p}_2 & \cdots & \mathbf{p}_1^T\mathbf{p}_n \\ \mathbf{p}_2^T\mathbf{p}_1 & \mathbf{p}_2^T\mathbf{p}_2 & \cdots & \mathbf{p}_2^T\mathbf{p}_n \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{p}_n^T\mathbf{p}_1 & \mathbf{p}_n^T\mathbf{p}_2 & \cdots & \mathbf{p}_n^T\mathbf{p}_n \end{bmatrix} = I$$

(40)

Show that P is an orthogonal matrix.

$$P = \begin{bmatrix} \tfrac{1}{3} & \tfrac{2}{3} & \tfrac{2}{3} \\ -\tfrac{2}{\sqrt{5}} & \tfrac{1}{\sqrt{5}} & 0 \\ -\tfrac{2}{3\sqrt{5}} & -\tfrac{4}{3\sqrt{5}} & \tfrac{5}{3\sqrt{5}} \end{bmatrix}$$

If P is an orthogonal matrix, then P⁻¹ = Pᵀ, i.e., PPᵀ = I:

$$PP^T = \begin{bmatrix} \tfrac{1}{3} & \tfrac{2}{3} & \tfrac{2}{3} \\ -\tfrac{2}{\sqrt{5}} & \tfrac{1}{\sqrt{5}} & 0 \\ -\tfrac{2}{3\sqrt{5}} & -\tfrac{4}{3\sqrt{5}} & \tfrac{5}{3\sqrt{5}} \end{bmatrix}
\begin{bmatrix} \tfrac{1}{3} & -\tfrac{2}{\sqrt{5}} & -\tfrac{2}{3\sqrt{5}} \\ \tfrac{2}{3} & \tfrac{1}{\sqrt{5}} & -\tfrac{4}{3\sqrt{5}} \\ \tfrac{2}{3} & 0 & \tfrac{5}{3\sqrt{5}} \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I$$

Example 9
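The orthogonality check is one line in NumPy. A sketch (the matrix entries follow the reconstruction of Example 9 above, which is an assumption about the garbled slide):

```python
import numpy as np

# Check orthogonality: P P^T = P^T P = I.
s5 = np.sqrt(5.0)
P = np.array([[1/3,        2/3,       2/3     ],
              [-2/s5,      1/s5,      0.0     ],
              [-2/(3*s5), -4/(3*s5),  5/(3*s5)]])
print(np.allclose(P @ P.T, np.eye(3)))  # True for an orthogonal P
```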

(41)

Moreover, let

$$\mathbf{p}_1 = \begin{bmatrix} \tfrac{1}{3} \\ \tfrac{2}{3} \\ \tfrac{2}{3} \end{bmatrix},\quad
\mathbf{p}_2 = \begin{bmatrix} -\tfrac{2}{\sqrt{5}} \\ \tfrac{1}{\sqrt{5}} \\ 0 \end{bmatrix},\quad
\mathbf{p}_3 = \begin{bmatrix} -\tfrac{2}{3\sqrt{5}} \\ -\tfrac{4}{3\sqrt{5}} \\ \tfrac{5}{3\sqrt{5}} \end{bmatrix}$$

We can produce pᵢ·pⱼ = 0 for i ≠ j and pᵢ·pᵢ = 1.

So, {p₁, p₂, p₃} is an orthonormal set. (Thm. 6 can be illustrated by this example.)

Symmetric Matrices and Orthogonal Diagonalization

(42)

Thm. 7: Properties of symmetric matrices

Let A be an n × n symmetric matrix. If λ₁ and λ₂ are distinct eigenvalues
of A, then their corresponding eigenvectors x₁ and x₂ are orthogonal.

(Thm. 4 only states that eigenvectors corresponding to distinct eigenvalues are linearly independent.)

Proof:

$$\lambda_1(\mathbf{x}_1\cdot\mathbf{x}_2) = (\lambda_1\mathbf{x}_1)^T\mathbf{x}_2 = (A\mathbf{x}_1)^T\mathbf{x}_2 = \mathbf{x}_1^TA^T\mathbf{x}_2
= \mathbf{x}_1^TA\mathbf{x}_2\;(\text{because } A \text{ is symmetric})
= \mathbf{x}_1^T(\lambda_2\mathbf{x}_2) = \lambda_2(\mathbf{x}_1\cdot\mathbf{x}_2)$$

The above equation implies (λ₁ − λ₂)(x₁·x₂) = 0, and because
λ₁ ≠ λ₂, it follows that x₁·x₂ = 0. So x₁ and x₂ are orthogonal.

For distinct eigenvalues of a symmetric matrix, the corresponding
eigenvectors are orthogonal and thus linearly independent of each other.

Note that there may be multiple eigenvectors x₁ and x₂ corresponding to λ₁ and λ₂.

Symmetric Matrices and Orthogonal Diagonalization

(43)

Thm. 8: Fundamental theorem of symmetric matrices

Let A be an n × n matrix. Then A is orthogonally diagonalizable
and has real eigenvalues if and only if A is symmetric.

A matrix A is orthogonally diagonalizable if there exists an orthogonal matrix P such that P⁻¹AP = D is diagonal.

Proof:

(⇒) A is orthogonally diagonalizable, so D = P⁻¹AP is diagonal with P orthogonal (P⁻¹ = Pᵀ). Then A = PDP⁻¹ = PDPᵀ, and

$$A^T = (PDP^T)^T = (P^T)^TD^TP^T = PDP^T = A$$

so A is symmetric.

(⇐) See the next two slides.

Symmetric Matrices and Orthogonal Diagonalization

(44)

Let A be an n × n symmetric matrix.

(1) Find all eigenvalues of A and determine the multiplicity
of each. According to Thm. 7, eigenvectors corresponding to
distinct eigenvalues are orthogonal.

(2) For each eigenvalue of multiplicity 1, choose a unit
eigenvector.

(3) For each eigenvalue of multiplicity k ≥ 2, find a set of k
linearly independent eigenvectors. If this set {v₁, v₂, …, vₖ}
is not orthonormal, apply the Gram-Schmidt
orthonormalization process.

Symmetric Matrices and Orthogonal Diagonalization

(45)

(4) The composite of steps (2) and (3) produces an orthonormal set of n eigenvectors. Use these orthonormal, and thus linearly independent, eigenvectors to form the columns of P.

i. According to Thm. 6, the matrix P is orthogonal.

ii. Following the diagonalization process, D = P⁻¹AP is diagonal.

iii. Therefore, the matrix A is orthogonally diagonalizable.

Symmetric Matrices and Orthogonal Diagonalization

(46)

Determining whether a matrix is orthogonally diagonalizable: of the four matrices shown on the slide, exactly the symmetric ones are orthogonally diagonalizable (by Thm. 8).

Example 10

(47)

Orthogonal diagonalization

Find an orthogonal matrix P that diagonalizes A.

$$A = \begin{bmatrix} 2 & 2 & -2 \\ 2 & -1 & 4 \\ -2 & 4 & -1 \end{bmatrix}$$

Sol:

(1) $$|\lambda I - A| = (\lambda + 6)(\lambda - 3)^2 = 0 \;\Rightarrow\; \lambda_1 = -6,\; \lambda_2 = 3\;(\text{has a multiplicity of } 2)$$

(2) For λ₁ = −6: v₁ = (1, −2, 2) → u₁ = v₁/‖v₁‖ = (1/3, −2/3, 2/3)

(3) For λ₂ = 3: v₂ = (2, 1, 0) and v₃ = (−2, 0, 1)
(linearly independent but not orthogonal)

Verify Thm. 7: v₁·v₂ = v₁·v₃ = 0.

(48)

Gram-Schmidt process:

$$\mathbf{w}_2 = \mathbf{v}_2 = (2, 1, 0),\qquad
\mathbf{w}_3 = \mathbf{v}_3 - \frac{\mathbf{v}_3\cdot\mathbf{w}_2}{\mathbf{w}_2\cdot\mathbf{w}_2}\mathbf{w}_2 = (-2, 0, 1) + \frac{4}{5}(2, 1, 0) = \left(-\frac{2}{5}, \frac{4}{5}, 1\right)$$

$$\mathbf{u}_2 = \frac{\mathbf{w}_2}{\|\mathbf{w}_2\|} = \left(\frac{2}{\sqrt{5}}, \frac{1}{\sqrt{5}}, 0\right),\qquad
\mathbf{u}_3 = \frac{\mathbf{w}_3}{\|\mathbf{w}_3\|} = \left(-\frac{2}{3\sqrt{5}}, \frac{4}{3\sqrt{5}}, \frac{5}{3\sqrt{5}}\right)$$

$$P = [\,\mathbf{u}_1\; \mathbf{u}_2\; \mathbf{u}_3\,] = \begin{bmatrix} \tfrac{1}{3} & \tfrac{2}{\sqrt{5}} & -\tfrac{2}{3\sqrt{5}} \\ -\tfrac{2}{3} & \tfrac{1}{\sqrt{5}} & \tfrac{4}{3\sqrt{5}} \\ \tfrac{2}{3} & 0 & \tfrac{5}{3\sqrt{5}} \end{bmatrix},\qquad
P^{-1}AP = \begin{bmatrix} -6 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$

Verify Thm. 7 that after the Gram-Schmidt orthonormalization process, i) w₂ and w₃ are eigenvectors of A corresponding to the eigenvalue 3, and ii) v₁·w₂ = v₁·w₃ = 0.
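For a symmetric matrix, `np.linalg.eigh` performs the whole orthogonal diagonalization in one call. A sketch (the entries of A follow the reconstruction above, which is an assumption about the garbled slide):

```python
import numpy as np

# For symmetric A, eigh returns an orthogonal eigenvector matrix P,
# so P^T A P is diagonal (orthogonal diagonalization).
A = np.array([[2.0, 2.0, -2.0],
              [2.0, -1.0, 4.0],
              [-2.0, 4.0, -1.0]])
vals, P = np.linalg.eigh(A)   # eigenvalues in ascending order: -6, 3, 3
assert np.allclose(P.T @ P, np.eye(3))
assert np.allclose(P.T @ A @ P, np.diag(vals))
```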

(49)

Some Theorems on Eigenvalues and Eigenvectors

If λ is an eigenvalue of matrix A with corresponding
eigenvector x, then for certain functions g(A), g(A)
has eigenvalue g(λ) with the same corresponding
eigenvector x.

Special cases:

1. If λ is an eigenvalue of matrix A, then cλ is an
eigenvalue of cA, for any scalar c ≠ 0.

Proof: cAx = cλx, so if x is the eigenvector corresponding to λ for A,
then x is the eigenvector corresponding to cλ for cA.

(50)

Some Theorems on Eigenvalues and Eigenvectors

2. If λ is an eigenvalue of matrix A with corresponding eigenvector x, then cλ + k is an eigenvalue of (cA + kI) with the same eigenvector x.

Proof: cAx + kx = cλx + kx, so (cA + kI)x = (cλ + k)x.

(This cannot be extended to A + B for arbitrary n × n matrices A and B.)

3. λ² is an eigenvalue of A² (this extends to Aᵏ).

Proof: Ax = λx, so A(Ax) = A(λx), i.e., A²x = λ(Ax) = λ²x.

(51)

Some Theorems on Eigenvalues and Eigenvectors

4. 1/λ is an eigenvalue of the matrix A⁻¹.

Proof: Ax = λx, so A⁻¹(Ax) = A⁻¹(λx), i.e., x = λA⁻¹x,
hence A⁻¹x = λ⁻¹x.

5. Cases (1) and (2) can be used to find the eigenvalues and eigenvectors of polynomials in A.

Example:

(A³ + 4A² − 3A + 5I)x = A³x + 4A²x − 3Ax + 5x = λ³x + 4λ²x − 3λx + 5x = (λ³ + 4λ² − 3λ + 5)x

so λ³ + 4λ² − 3λ + 5 is an eigenvalue of A³ + 4A² − 3A + 5I, with x its corresponding eigenvector.
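The polynomial property is easy to confirm numerically. A NumPy sketch (not part of the slides), using the matrix A from the earlier example with eigenvalues −6 and 4:

```python
import numpy as np

# If lambda is an eigenvalue of A, then g(lambda) is an eigenvalue of
# g(A) for a polynomial g. Here g(A) = A^3 + 4A^2 - 3A + 5I.
A = np.array([[2.0, 4.0],
              [4.0, -4.0]])
gA = (np.linalg.matrix_power(A, 3) + 4 * np.linalg.matrix_power(A, 2)
      - 3 * A + 5 * np.eye(2))
lhs = np.sort(np.linalg.eigvals(gA).real)
rhs = np.sort([l**3 + 4*l**2 - 3*l + 5 for l in (-6.0, 4.0)])
assert np.allclose(lhs, rhs)
```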

(52)

Some Theorems on Eigenvalues and Eigenvectors

Property (5) can be extended to infinite series.

Suppose λ is an eigenvalue of A; then (1 − λ) is an
eigenvalue of (I − A).

If (I − A) is nonsingular, then (1 − λ)⁻¹ is an eigenvalue of (I − A)⁻¹.

If −1 < λ < 1, then (1 − λ)⁻¹ = 1 + λ + λ² + ….

If every eigenvalue of A satisfies −1 < λ < 1, then (I − A)⁻¹ = I + A + A² + ….
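The matrix series converges quickly when the eigenvalues are well inside (−1, 1). A NumPy sketch with a hypothetical 2 × 2 matrix (not taken from the slides):

```python
import numpy as np

# Neumann series: if all eigenvalues of A lie in (-1, 1), then
# (I - A)^{-1} = I + A + A^2 + ...
A = np.array([[0.4, 0.2],
              [0.1, 0.3]])          # eigenvalues 0.5 and 0.2
S = np.zeros((2, 2))
term = np.eye(2)
for _ in range(200):                # partial sum of the series
    S += term
    term = term @ A
assert np.allclose(S, np.linalg.inv(np.eye(2) - A))
```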

(53)

Some Theorems on Eigenvalues and Eigenvectors

6. If A is an (n × n) matrix with eigenvalues λ₁, …, λₙ, then

a. |A| = ∏ λᵢ
b. tr(A) = ∑ λᵢ

Proof (for n = 3): the characteristic equation

$$\begin{vmatrix} a_{11}-\lambda & a_{12} & a_{13} \\ a_{21} & a_{22}-\lambda & a_{23} \\ a_{31} & a_{32} & a_{33}-\lambda \end{vmatrix} = 0$$

expands to

$$(-\lambda)^3 + (-\lambda)^2\,\mathrm{tr}_1(A) + (-\lambda)\,\mathrm{tr}_2(A) + |A| = 0$$

where trᵢ(A) is the sum of the principal minors of order i, so tr₁(A) = tr(A),

$$\mathrm{tr}_2(A) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} + \begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} + \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix},\qquad \mathrm{tr}_3(A) = |A|$$

(54)

Some Theorems on Eigenvalues and Eigenvectors

Proof (continued): in general,

$$(-\lambda)^n + (-\lambda)^{n-1}\,\mathrm{tr}_1(A) + (-\lambda)^{n-2}\,\mathrm{tr}_2(A) + \cdots + (-\lambda)\,\mathrm{tr}_{n-1}(A) + |A| = 0$$

If λ₁, …, λₙ are the roots of this equation, then

$$(\lambda_1-\lambda)(\lambda_2-\lambda)\cdots(\lambda_n-\lambda) = 0$$

$$(-\lambda)^n + (-\lambda)^{n-1}\sum_i \lambda_i + (-\lambda)^{n-2}\sum_{i\neq j}\lambda_i\lambda_j + \cdots + \prod_i \lambda_i = 0$$

Matching coefficients gives tr(A) = ∑ λᵢ and |A| = ∏ λᵢ.
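Both identities hold for any square matrix and are easy to spot-check. A NumPy sketch with a hypothetical random matrix (not from the slides):

```python
import numpy as np

# |A| is the product of the eigenvalues and tr(A) is their sum.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
vals = np.linalg.eigvals(A)
assert np.isclose(np.prod(vals).real, np.linalg.det(A))
assert np.isclose(np.sum(vals).real, np.trace(A))
```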

(55)

Some Theorems on Eigenvalues and Eigenvectors

If A and B are both (n × n), or A is (n × p) and B is
(p × n), then the (nonzero) eigenvalues of AB equal
the eigenvalues of BA. If x is an eigenvector of AB, then Bx is an eigenvector of BA.

If A is (n × n), then:

1. If P (n × n) is nonsingular, then A and P⁻¹AP
have the same eigenvalues.

2. If C (n × n) is an orthogonal matrix, then A and
C′AC have the same eigenvalues.

(56)

Theorems on Symmetric Matrices

1. For a symmetric matrix A:

a. The eigenvalues λ₁, …, λₙ are real.

b. The eigenvectors x₁, …, xₙ are orthogonal.

Proof (1a):

Suppose λ is a complex eigenvalue with corresponding eigenvector x.
Write λ = a + ib and λ* = a − ib, and let x = {xᵢ} with conjugate x* = {xᵢ*}.

Then Ax = λx implies x*′Ax = x*′λx = λ x*′x,
and Ax* = λ*x* implies x*′Ax = (Ax*)′x = (λ*x*)′x = λ* x*′x.

So λ* x*′x = λ x*′x, and x*′x ≠ 0 is a sum of
squares, hence λ* = λ, i.e., a + ib = a − ib, which means b = 0.

(57)

Theorems on Symmetric Matrices

Proof (1b):

Suppose λ₁ ≠ λ₂ with eigenvectors x₁ ≠ x₂, A = A′, and Axₖ = λₖxₖ. Then

$$\lambda_1\mathbf{x}_2'\mathbf{x}_1 = \mathbf{x}_2'\lambda_1\mathbf{x}_1 = \mathbf{x}_2'A\mathbf{x}_1 = \mathbf{x}_1'A'\mathbf{x}_2 = \mathbf{x}_1'A\mathbf{x}_2 = \mathbf{x}_1'\lambda_2\mathbf{x}_2 = \lambda_2\mathbf{x}_1'\mathbf{x}_2$$

Since λ₁ ≠ λ₂, it follows that x₁′x₂ = 0 (orthogonal).

2. A can be written as A = CDC′ (spectral
decomposition), where D is a diagonal matrix whose
diagonal entries are the λᵢ and C is the matrix whose
columns are the eigenvectors xᵢ corresponding to the eigenvalues λᵢ.

(58)

Theorems on Symmetric Matrices

3. (Semi)positive definite matrices:

a. If A is positive definite, then λᵢ > 0 for i = 1, …, n.

b. If A is positive semidefinite, then λᵢ ≥ 0 for
i = 1, …, n. The number of eigenvalues λᵢ > 0 equals
rank(A).

Note: if A is positive definite, A^½ can be determined.
Because λᵢ > 0, the spectral decomposition gives
A = A^½A^½ = (A^½)².

(59)

Theorems on Symmetric Matrices

4. If A is singular, idempotent, and symmetric, then A is
positive semidefinite.

Proof: A = A′ and A = A², so A = A² = AA = A′A,
which is positive semidefinite.

5. If A is symmetric idempotent with rank(A) = r, then A
has r eigenvalues equal to 1 and (n − r) eigenvalues
equal to 0.

Proof: Ax = λx and A²x = λ²x. Since A = A², A²x = Ax,
so λx = λ²x, i.e., (λ − λ²)x = 0. Since x ≠ 0, λ − λ² = 0,
so λ is 0 or 1. By theorem (4), A is positive
semidefinite, with r the number of eigenvalues
λ > 0.
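A familiar statistical instance is the projection ("hat") matrix from least squares. A NumPy sketch with a hypothetical 6 × 2 design matrix X (not from the slides):

```python
import numpy as np

# H = X (X'X)^{-1} X' is symmetric idempotent with rank 2, so its
# eigenvalues are two 1's and four 0's, and rank(H) = tr(H).
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))
H = X @ np.linalg.inv(X.T @ X) @ X.T
vals = np.sort(np.linalg.eigvalsh(H))
assert np.allclose(vals, [0, 0, 0, 0, 1, 1], atol=1e-8)
assert np.isclose(np.trace(H), np.linalg.matrix_rank(H))
```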

(60)

Theorems on Symmetric Matrices

6. If A is idempotent and symmetric with rank r, then rank(A) = tr(A) = r.

(61)

Theorems

If A (n × n) is an idempotent matrix, P is a nonsingular (n × n) matrix, and C is an orthogonal (n × n) matrix, then:

a. I − A is idempotent
b. A(I − A) = 0 and (I − A)A = 0
c. P⁻¹AP is idempotent
d. C′AC is idempotent (if A is symmetric, then C′AC is symmetric idempotent)

If A is (n × p) with rank(A) = r, A⁻ is a generalized inverse of
A, and (A′A)⁻ is a generalized inverse of (A′A), then A⁻A, AA⁻, and A(A′A)⁻A′ are idempotent.

(62)

Quadratic Forms

A quadratic form is a function

Q(x) = x′Ax

in k variables x₁, …, xₖ, where x′ = [x₁ x₂ ⋯ xₖ]
and A is a k × k symmetric matrix.

(63)

Note that a quadratic form has only squared
terms and cross-products, and so can be
written

$$Q(\mathbf{x}) = \sum_{i=1}^{k}\sum_{j=1}^{k} a_{ij}x_ix_j$$

Suppose we have

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\quad\text{and}\quad A = \begin{bmatrix} 1 & 2 \\ 2 & -2 \end{bmatrix}$$

then

$$Q(\mathbf{x}) = \mathbf{x}'A\mathbf{x} = x_1^2 + 4x_1x_2 - 2x_2^2$$

Quadratic Forms
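The matrix form and the expanded polynomial agree for any x. A NumPy sketch (taking A = [[1, 2], [2, −2]], the symmetric matrix of x₁² + 4x₁x₂ − 2x₂², with a hypothetical test point x):

```python
import numpy as np

# x'Ax equals the expanded quadratic form a11*x1^2 + 2*a12*x1*x2 + a22*x2^2.
A = np.array([[1.0, 2.0],
              [2.0, -2.0]])
x = np.array([3.0, -1.5])
q_matrix = x @ A @ x
q_expanded = x[0]**2 + 4*x[0]*x[1] - 2*x[1]**2
assert np.isclose(q_matrix, q_expanded)
```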

(64)

Spectral Decomposition and Quadratic Forms

Any k × k symmetric matrix A can be
expressed in terms of its k
eigenvalue-eigenvector pairs (λᵢ, eᵢ) as

$$A = \sum_{i=1}^{k}\lambda_i\mathbf{e}_i\mathbf{e}_i'$$

This is referred to as the spectral
decomposition of A.

(65)

Spectral Decomposition and Quadratic Forms

For our previous example on eigenvalues and
eigenvectors we showed that

$$A = \begin{bmatrix} 2 & 4 \\ 4 & -4 \end{bmatrix}$$

has eigenvalues λ₁ = −6 and λ₂ = 4, with
corresponding (normalized) eigenvectors

$$\mathbf{e}_1 = \begin{bmatrix} \tfrac{1}{\sqrt{5}} \\ -\tfrac{2}{\sqrt{5}} \end{bmatrix},\qquad \mathbf{e}_2 = \begin{bmatrix} \tfrac{2}{\sqrt{5}} \\ \tfrac{1}{\sqrt{5}} \end{bmatrix}$$

(66)

Can we reconstruct A?

$$A = \sum_{i=1}^{k}\lambda_i\mathbf{e}_i\mathbf{e}_i'
= -6\begin{bmatrix} \tfrac{1}{\sqrt{5}} \\ -\tfrac{2}{\sqrt{5}} \end{bmatrix}\begin{bmatrix} \tfrac{1}{\sqrt{5}} & -\tfrac{2}{\sqrt{5}} \end{bmatrix}
+ 4\begin{bmatrix} \tfrac{2}{\sqrt{5}} \\ \tfrac{1}{\sqrt{5}} \end{bmatrix}\begin{bmatrix} \tfrac{2}{\sqrt{5}} & \tfrac{1}{\sqrt{5}} \end{bmatrix}$$

$$= -6\begin{bmatrix} \tfrac{1}{5} & -\tfrac{2}{5} \\ -\tfrac{2}{5} & \tfrac{4}{5} \end{bmatrix}
+ 4\begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5} \\ \tfrac{2}{5} & \tfrac{1}{5} \end{bmatrix}
= \begin{bmatrix} 2 & 4 \\ 4 & -4 \end{bmatrix} = A$$

Spectral Decomposition and Quadratic Forms
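The reconstruction generalizes to any symmetric matrix. A NumPy sketch (not part of the slides):

```python
import numpy as np

# Reconstruct A from its spectral decomposition: A = sum_i lambda_i e_i e_i'.
A = np.array([[2.0, 4.0],
              [4.0, -4.0]])
vals, vecs = np.linalg.eigh(A)   # orthonormal eigenvectors as columns
recon = sum(l * np.outer(e, e) for l, e in zip(vals, vecs.T))
assert np.allclose(recon, A)
```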

(67)

Spectral decomposition can be used to
develop/illustrate many statistical results/
concepts. We start with a few basic concepts:

- Nonnegative Definite Matrix: when a k × k matrix A is such that

0 ≤ x′Ax for all x′ = [x₁, x₂, …, xₖ],

the matrix A and the quadratic form are said
to be nonnegative definite.

Spectral Decomposition and Quadratic Forms

(68)

- Positive Definite Matrix: when a k × k matrix A is such that

0 < x′Ax for all x′ = [x₁, x₂, …, xₖ] ≠ [0, 0, …, 0],

the matrix A and the quadratic form are said
to be positive definite.

Spectral Decomposition and Quadratic Forms

(69)

Example - Show that the following quadratic
form is positive definite:

$$6x_1^2 + 4x_2^2 - 4\sqrt{2}\,x_1x_2$$

We first rewrite the quadratic form in matrix
notation:

$$Q(\mathbf{x}) = \begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} 6 & -2\sqrt{2} \\ -2\sqrt{2} & 4 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \mathbf{x}'A\mathbf{x}$$

Spectral Decomposition and Quadratic Forms

(70)

Now identify the eigenvalues of the resulting
matrix A (they are λ₁ = 2 and λ₂ = 8):

$$|A - \lambda I| = \begin{vmatrix} 6-\lambda & -2\sqrt{2} \\ -2\sqrt{2} & 4-\lambda \end{vmatrix} = (6-\lambda)(4-\lambda) - 8 = \lambda^2 - 10\lambda + 16 = (\lambda-2)(\lambda-8) = 0$$

Spectral Decomposition and Quadratic Forms

(71)

Next, using spectral decomposition we can
write:

$$A = \sum_{i=1}^{k}\lambda_i\mathbf{e}_i\mathbf{e}_i' = \lambda_1\mathbf{e}_1\mathbf{e}_1' + \lambda_2\mathbf{e}_2\mathbf{e}_2' = 2\mathbf{e}_1\mathbf{e}_1' + 8\mathbf{e}_2\mathbf{e}_2'$$

where again, the vectors eᵢ are the
normalized and orthogonal eigenvectors
associated with the eigenvalues λ₁ = 2 and λ₂ = 8.

Spectral Decomposition and Quadratic Forms

Spectral Decomposition and

Quadratic Forms

(72)

Sidebar - Note again that we can recreate the
original matrix A from the spectral
decomposition:

$$A = \sum_{i=1}^{k}\lambda_i\mathbf{e}_i\mathbf{e}_i'
= 2\begin{bmatrix} \tfrac{1}{\sqrt{3}} \\ \tfrac{\sqrt{2}}{\sqrt{3}} \end{bmatrix}\begin{bmatrix} \tfrac{1}{\sqrt{3}} & \tfrac{\sqrt{2}}{\sqrt{3}} \end{bmatrix}
+ 8\begin{bmatrix} -\tfrac{\sqrt{2}}{\sqrt{3}} \\ \tfrac{1}{\sqrt{3}} \end{bmatrix}\begin{bmatrix} -\tfrac{\sqrt{2}}{\sqrt{3}} & \tfrac{1}{\sqrt{3}} \end{bmatrix}$$

$$= 2\begin{bmatrix} \tfrac{1}{3} & \tfrac{\sqrt{2}}{3} \\ \tfrac{\sqrt{2}}{3} & \tfrac{2}{3} \end{bmatrix}
+ 8\begin{bmatrix} \tfrac{2}{3} & -\tfrac{\sqrt{2}}{3} \\ -\tfrac{\sqrt{2}}{3} & \tfrac{1}{3} \end{bmatrix}
= \begin{bmatrix} 6 & -2\sqrt{2} \\ -2\sqrt{2} & 4 \end{bmatrix} = A$$

Spectral Decomposition and Quadratic Forms

(73)

Because λ₁ and λ₂ are scalars,
premultiplication and postmultiplication by
x′ and x, respectively, yield:

$$\mathbf{x}'A\mathbf{x} = 2\mathbf{x}'\mathbf{e}_1\mathbf{e}_1'\mathbf{x} + 8\mathbf{x}'\mathbf{e}_2\mathbf{e}_2'\mathbf{x} = 2y_1^2 + 8y_2^2 \ge 0$$

where

$$y_1 = \mathbf{x}'\mathbf{e}_1 = \mathbf{e}_1'\mathbf{x}\quad\text{and}\quad y_2 = \mathbf{x}'\mathbf{e}_2 = \mathbf{e}_2'\mathbf{x}$$

At this point it is obvious that x′Ax is at
least nonnegative definite!

Spectral Decomposition and Quadratic Forms

(74)

We now show that x′Ax is positive definite,
i.e.,

$$\mathbf{x}'A\mathbf{x} = 2y_1^2 + 8y_2^2 > 0\quad\text{for } \mathbf{x} \neq \mathbf{0}$$

From our definitions of y₁ and y₂ we have

$$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} \mathbf{e}_1' \\ \mathbf{e}_2' \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\quad\text{or}\quad \mathbf{y} = E\mathbf{x}$$

Spectral Decomposition and Quadratic Forms

(75)

Since E is an orthogonal matrix, E⁻¹ = E′ exists.
Thus,

$$\mathbf{x} = E'\mathbf{y}$$

and x ≠ 0 implies y ≠ 0.

At this point it is obvious that x′Ax is
positive definite!

Spectral Decomposition and Quadratic Forms

(76)

This suggests rules for determining whether a k × k
symmetric matrix A (or equivalently, its
quadratic form x′Ax) is nonnegative definite
or positive definite:

- A is a nonnegative definite matrix iff λᵢ ≥ 0, i = 1, …, k

- A is a positive definite matrix iff λᵢ > 0, i = 1, …, k

Spectral Decomposition and

Quadratic Forms
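These rules reduce the definiteness check to inspecting eigenvalues. A NumPy sketch using the quadratic-form matrix from the example above:

```python
import numpy as np

# The matrix of 6x1^2 + 4x2^2 - 4*sqrt(2)*x1*x2 has eigenvalues
# 2 and 8, both positive, so the form is positive definite.
A = np.array([[6.0, -2*np.sqrt(2.0)],
              [-2*np.sqrt(2.0), 4.0]])
vals = np.linalg.eigvalsh(A)
assert np.all(vals > 0)                     # positive definite
assert np.allclose(np.sort(vals), [2.0, 8.0])
```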

(77)

Square Root Matrices

Because spectral decomposition allows us
to express the inverse of a square matrix in
terms of its eigenvalues and eigenvectors, it
enables us to conveniently create a square
root matrix.

Let A be a p × p positive definite matrix
with the spectral decomposition

$$A = \sum_{i=1}^{p}\lambda_i\mathbf{e}_i\mathbf{e}_i'$$

(78)

Also let P be the matrix whose columns are
the normalized eigenvectors e₁, e₂, …, e_p of
A, i.e.,

$$P = [\,\mathbf{e}_1\; \mathbf{e}_2\; \cdots\; \mathbf{e}_p\,]$$

Then

$$A = \sum_{i=1}^{p}\lambda_i\mathbf{e}_i\mathbf{e}_i' = P\Lambda P'$$

where P′P = PP′ = I and

$$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_p \end{bmatrix}$$

(79)

Now since

$$(P\Lambda^{-1}P')P\Lambda P' = P\Lambda P'(P\Lambda^{-1}P') = PP' = I$$

we have

$$A^{-1} = P\Lambda^{-1}P' = \sum_{i=1}^{p}\frac{1}{\lambda_i}\mathbf{e}_i\mathbf{e}_i'
\quad\text{where}\quad \Lambda^{-1} = \begin{bmatrix} \tfrac{1}{\lambda_1} & 0 & \cdots & 0 \\ 0 & \tfrac{1}{\lambda_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \tfrac{1}{\lambda_p} \end{bmatrix}$$

Next let Λ^{1/2} denote the diagonal matrix whose ith diagonal entry is √λᵢ.

(80)

The matrix

$$A^{1/2} = P\Lambda^{1/2}P' = \sum_{i=1}^{p}\sqrt{\lambda_i}\,\mathbf{e}_i\mathbf{e}_i'$$

is called the square root of A.

(81)

The square root of A has the following
properties:

$$(A^{1/2})' = A^{1/2}$$

$$A^{1/2}A^{1/2} = A$$

$$A^{1/2}A^{-1/2} = A^{-1/2}A^{1/2} = I$$

$$A^{-1/2}A^{-1/2} = A^{-1},\quad\text{where}\quad A^{-1/2} = \sum_{i=1}^{p}\frac{1}{\sqrt{\lambda_i}}\mathbf{e}_i\mathbf{e}_i'$$
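All four properties can be checked numerically. A NumPy sketch (not part of the slides), using the positive definite matrix from the earlier quadratic-form example:

```python
import numpy as np

# Square root matrix via the spectral decomposition:
# A^{1/2} = P Lambda^{1/2} P', and A^{1/2} A^{1/2} = A.
A = np.array([[6.0, -2*np.sqrt(2.0)],
              [-2*np.sqrt(2.0), 4.0]])
vals, P = np.linalg.eigh(A)               # positive definite: vals > 0
A_half = P @ np.diag(np.sqrt(vals)) @ P.T
A_neg_half = P @ np.diag(1.0 / np.sqrt(vals)) @ P.T
assert np.allclose(A_half @ A_half, A)
assert np.allclose(A_half, A_half.T)
assert np.allclose(A_half @ A_neg_half, np.eye(2))
```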

(82)

Similarly, letting Λ^{-1/2} denote the diagonal matrix whose ith diagonal entry is 1/√λᵢ, the inverse square root of A can be written

$$A^{-1/2} = P\Lambda^{-1/2}P' = \sum_{i=1}^{p}\frac{1}{\sqrt{\lambda_i}}\mathbf{e}_i\mathbf{e}_i'$$

where P′P = PP′ = I.
