
Cramer’s Rule


Solution We first compute the determinant of A to see if we can apply Cramer's Rule.

$$\det(A) = \begin{vmatrix} 1 & 5 & -3 \\ 1 & 4 & 2 \\ 2 & -1 & 0 \end{vmatrix} = 49.$$

Since det(A) ≠ 0, we can apply Cramer's Rule. Following Theorem 18, we compute det(A1(⃗b)), det(A2(⃗b)) and det(A3(⃗b)).

$$\det\big(A_1(\vec b)\big) = \begin{vmatrix} \mathbf{-36} & 5 & -3 \\ \mathbf{-11} & 4 & 2 \\ \mathbf{7} & -1 & 0 \end{vmatrix} = 49.$$

(We used a bold font to show where ⃗b replaced the first column of A.)

$$\det\big(A_2(\vec b)\big) = \begin{vmatrix} 1 & -36 & -3 \\ 1 & -11 & 2 \\ 2 & 7 & 0 \end{vmatrix} = -245. \qquad
\det\big(A_3(\vec b)\big) = \begin{vmatrix} 1 & 5 & -36 \\ 1 & 4 & -11 \\ 2 & -1 & 7 \end{vmatrix} = 196.$$

Therefore we can compute ⃗x:

$$x_1 = \frac{\det\big(A_1(\vec b)\big)}{\det(A)} = \frac{49}{49} = 1, \qquad
x_2 = \frac{\det\big(A_2(\vec b)\big)}{\det(A)} = \frac{-245}{49} = -5, \qquad
x_3 = \frac{\det\big(A_3(\vec b)\big)}{\det(A)} = \frac{196}{49} = 4.$$

Therefore

$$\vec x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ -5 \\ 4 \end{bmatrix}.$$
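As a numerical aside, the rule is straightforward to mirror in code. Below is a minimal sketch in Python with NumPy (assuming NumPy is available); the helper name `cramer` is ours, purely for illustration.

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's Rule (only valid when det(A) != 0)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det(A) = 0; Cramer's Rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b              # replace column i of A with b to form A_i(b)
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[1, 5, -3], [1, 4, 2], [2, -1, 0]])
b = np.array([-36, -11, 7])
print(cramer(A, b))               # approximately [ 1. -5.  4.]
```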

Let’s do another example.

Example 83 Use Cramer's Rule to solve the linear system A⃗x = ⃗b where

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \qquad \text{and} \qquad \vec b = \begin{bmatrix} -1 \\ 1 \end{bmatrix}.$$


Solution The determinant of A is −2, so we can apply Cramer's Rule.

$$\det\big(A_1(\vec b)\big) = \begin{vmatrix} -1 & 2 \\ 1 & 4 \end{vmatrix} = -6. \qquad
\det\big(A_2(\vec b)\big) = \begin{vmatrix} 1 & -1 \\ 3 & 1 \end{vmatrix} = 4.$$

Therefore

$$x_1 = \frac{\det\big(A_1(\vec b)\big)}{\det(A)} = \frac{-6}{-2} = 3, \qquad
x_2 = \frac{\det\big(A_2(\vec b)\big)}{\det(A)} = \frac{4}{-2} = -2,$$

and

$$\vec x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 3 \\ -2 \end{bmatrix}.$$

We learned in Section 3.4 that when considering a linear system A⃗x = ⃗b where A is square, if det(A) ≠ 0 then A is invertible and A⃗x = ⃗b has exactly one solution. We also stated in Key Idea 11 that if det(A) = 0, then A is not invertible and so therefore either A⃗x = ⃗b has no solution or infinite solutions. Our method of figuring out which of these cases applied was to form the augmented matrix [A ⃗b], put it into reduced row echelon form, and then interpret the results.

Cramer's Rule requires that det(A) ≠ 0 (so we are guaranteed a solution). When det(A) = 0 we are not able to discern whether infinite solutions or no solution exists for a given vector ⃗b. Cramer's Rule is only applicable to the case when exactly one solution exists.

We end this section with a practical consideration. We have mentioned before that finding determinants is a computationally intensive operation. To solve a linear system with 3 equations and 3 unknowns, we need to compute 4 determinants. Just think: with 10 equations and 10 unknowns, we'd need to compute 11 really hard determinants of 10×10 matrices! That is a lot of work!

The upshot of this is that Cramer's Rule makes for a poor choice in solving numerical linear systems. It simply is not done in practice; it is hard to beat Gaussian elimination.24
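As a brief illustration of that practical point, library solvers such as NumPy's `np.linalg.solve` perform Gaussian elimination (an LU factorization with partial pivoting) rather than computing determinants; a minimal sketch, reusing the system from the example above:

```python
import numpy as np

A = np.array([[1, 5, -3], [1, 4, 2], [2, -1, 0]], dtype=float)
b = np.array([-36, -11, 7], dtype=float)

# np.linalg.solve factors A (LU with partial pivoting) and back-substitutes;
# no determinants are ever computed.
print(np.linalg.solve(A, b))   # approximately [ 1. -5.  4.]
```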

So why include it? Because its truth is amazing. The determinant is a very strange operation; it produces a number in a very odd way. It should seem incredible to the reader that by manipulating determinants in a particular way, we can solve linear systems.

In the next chapter we'll see another use for the determinant. Meanwhile, try to develop a deeper appreciation of math: odd, complicated things that seem completely unrelated often are intricately tied together. Mathematicians see these connections and describe them as "beautiful."

24 A version of Cramer's Rule is often taught in introductory differential equations courses as it can be used to find solutions to certain linear differential equations. In this situation, the entries of the matrices are functions, not numbers, and hence computing determinants is easier than using Gaussian elimination. Again, though, as the matrices get large, other solution methods are resorted to.

Exercises 3.5

In Exercises 1 – 12, matrices A and ⃗b are given.

(a) Give det(A) and det(Ai) for all i.

(b) Use Cramer's Rule to solve A⃗x = ⃗b. If Cramer's Rule cannot be used to find the solution, then state whether or not a solution exists.

1. A = [ 7 7 ; 7 9 ], ⃗b = [ 28 ; 26 ]

2. A = [ 9 5 ; 4 7 ], ⃗b = [ 45 ; 20 ]

3. A = [ 8 16 ; 10 20 ], ⃗b = [ 48 ; 60 ]

4. A = [ 0 6 ; 9 10 ], ⃗b = [ 6 ; 17 ]

5. A = [ 2 10 ; 1 3 ], ⃗b = [ 42 ; 19 ]

6. A = [ 7 14 ; 2 4 ], ⃗b = [ 1 ; 4 ]

7. A = [ 3 0 3 ; 5 4 4 ; 5 5 4 ], ⃗b = [ 24 ; 0 ; 31 ]

8. A = [ 4 9 3 ; 5 2 13 ; 1 10 13 ], ⃗b = [ 28 ; 35 ; 7 ]

9. A = [ 4 4 0 ; 5 1 1 ; 3 1 2 ], ⃗b = [ 16 ; 22 ; 8 ]

10. A = [ 1 0 10 ; 4 3 10 ; 9 6 2 ], ⃗b = [ 40 ; 94 ; 132 ]

11. A = [ 7 4 25 ; 2 1 7 ; 9 7 34 ], ⃗b = [ 1 ; 3 ; 5 ]

12. A = [ 6 7 7 ; 5 4 1 ; 5 4 8 ], ⃗b = [ 58 ; 35 ; 49 ]

4 Eigenvalues and Eigenvectors

We have often explored new ideas in matrix algebra by making connections to our previous algebraic experience. Adding two numbers, x + y, led us to adding vectors ⃗x + ⃗y and adding matrices A + B. We explored multiplication, which then led us to solving the matrix equation A⃗x = ⃗b, which was reminiscent of solving the algebra equation ax = b.

This chapter is motivated by another analogy. Consider: when we multiply an unknown number x by another number such as 5, what do we know about the result? Unless x = 0, we know that in some sense 5x will be "5 times bigger than x." Applying this to vectors, we would readily agree that 5⃗x gives a vector that is "5 times bigger than ⃗x." Each entry in ⃗x is multiplied by 5.

Within the matrix algebra context, though, we have two types of multiplication: scalar and matrix multiplication. What happens to ⃗x when we multiply it by a matrix A? Our first response is likely along the lines of "You just get another vector. There is no definable relationship." We might wonder if there is ever the case where a matrix–vector multiplication is very similar to a scalar–vector multiplication. That is, do we ever have the case where A⃗x = a⃗x, where a is some scalar? That is the motivating question of this chapter.

4.1 Eigenvalues and Eigenvectors


AS YOU READ . . .

1. T/F: Given any matrix A, we can always find a vector ⃗x where A⃗x = ⃗x.

2. When is the zero vector an eigenvector for a matrix?

3. If ⃗v is an eigenvector of a matrix A with eigenvalue of 2, then what is A⃗v?

4. T/F: If A is a 5×5 matrix, to find the eigenvalues of A, we would need to find the roots of a 5th degree polynomial.

We start by considering the matrix A and vector ⃗x as given below.1

$$A = \begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix} \qquad \vec x = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

Multiplying A⃗x gives:

$$A\vec x = \begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 5 \\ 5 \end{bmatrix} = 5\begin{bmatrix} 1 \\ 1 \end{bmatrix}!$$

Wow! It looks like multiplying A⃗x is the same as 5⃗x! This makes us wonder lots of things: Is this the only case in the world where something like this happens?2 Is A somehow a special matrix, and A⃗x = 5⃗x for any vector ⃗x we pick?3 Or maybe ⃗x was a special vector, and no matter what 2×2 matrix A we picked, we would have A⃗x = 5⃗x.4

A more likely explanation is this: given the matrix A, the number 5 and the vector ⃗x formed a special pair that happened to work together in a nice way. It is then natural to wonder if other "special" pairs exist. For instance, could we find a vector ⃗x where A⃗x = 3⃗x?

This equation is hard to solve at first; we are not used to matrix equations where ⃗x appears on both sides of "=." Therefore we put off solving this for just a moment to state a definition and make a few comments.
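As a quick numerical aside, the particular computation above is easy to check with NumPy (assuming it is available):

```python
import numpy as np

A = np.array([[1, 4], [2, 3]])
x = np.array([1, 1])

print(A @ x)                          # [5 5]
print(np.array_equal(A @ x, 5 * x))   # True: A x = 5 x for this particular x
```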

Definition 27 Eigenvalues and Eigenvectors

Let A be an n×n matrix, ⃗x a nonzero column vector and λ a scalar. If

$$A\vec x = \lambda\vec x,$$

then ⃗x is an eigenvector of A and λ is an eigenvalue of A.

The word "eigen" is German for "proper" or "characteristic." Therefore, an eigenvector of A is a "characteristic vector of A." This vector tells us something about A.

Why do we use the Greek letter λ (lambda)? It is pure tradition. Above, we used a to represent the unknown scalar, since we are used to that notation. We now switch to λ because that is how everyone else does it.5 Don't get hung up on this; λ is just a number.

1 Recall this matrix and vector were used in Example 40 on page 75.

2 Probably not.

3 Probably not.

4 See footnote 2.

5 An example of mathematical peer pressure.

Note that our definition requires that A be a square matrix. If A isn't square then A⃗x and λ⃗x will have different sizes, and so they cannot be equal. Also note that ⃗x must be nonzero. Why? What if ⃗x = ⃗0? Then no matter what λ is, A⃗x = λ⃗x. This would then imply that every number is an eigenvalue; if every number is an eigenvalue, then we wouldn't need a definition for it.6 Therefore we specify that ⃗x ≠ ⃗0.

Our last comment before trying to find eigenvalues and eigenvectors for given matrices deals with "why we care." Did we stumble upon a mathematical curiosity, or does this somehow help us build better bridges, heal the sick, send astronauts into orbit, design optical equipment, and understand quantum mechanics? The answer, of course, is "Yes."7 This is a wonderful topic in and of itself: we need no external application to appreciate its worth. At the same time, it has many, many applications to "the real world." A simple Internet search on "applications of eigenvalues" will confirm this.

Back to our math. Given a square matrix A, we want to find a nonzero vector ⃗x and a scalar λ such that A⃗x = λ⃗x. We will solve this using the skills we developed in Chapter 2.

$$\begin{aligned}
A\vec x &= \lambda\vec x & &\text{original equation} \\
A\vec x - \lambda\vec x &= \vec 0 & &\text{subtract } \lambda\vec x \text{ from both sides} \\
(A-\lambda I)\vec x &= \vec 0 & &\text{factor out } \vec x
\end{aligned}$$

Think about this last factorization. We are likely tempted to say

$$A\vec x - \lambda\vec x = (A-\lambda)\vec x,$$

but this really doesn't make sense. After all, what does "a matrix minus a number" mean? We need the identity matrix in order for this to be logical.

Let us now think about the equation (A−λI)⃗x = ⃗0. While it looks complicated, it really is just a matrix equation of the type we solved in Section 2.4. We are just trying to solve B⃗x = ⃗0, where B = (A−λI).

We know from our previous work that this type of equation8 always has a solution, namely, ⃗x = ⃗0. However, we want ⃗x to be an eigenvector and, by the definition, eigenvectors cannot be ⃗0.

This means that we want solutions to (A−λI)⃗x = ⃗0 other than ⃗x = ⃗0. Recall that Theorem 8 says that if the matrix (A−λI) is invertible, then the only solution to (A−λI)⃗x = ⃗0 is ⃗x = ⃗0. Therefore, in order to have other solutions, we need (A−λI) to not be invertible.

Finally, recall from Theorem 16 that noninvertible matrices all have a determinant of 0. Therefore, if we want to find eigenvalues λ and eigenvectors ⃗x, we need det(A−λI) = 0.

Let's start our practice of this theory by finding λ such that det(A−λI) = 0; that is, let's find the eigenvalues of a matrix.

6 Recall footnote 17 on page 107.

7 Except for the "understand quantum mechanics" part. Nobody truly understands that stuff; they just probably understand it.

8 Recall this is a homogeneous system of equations.

Example 84 Find the eigenvalues of A; that is, find λ such that det(A−λI) = 0, where

$$A = \begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix}.$$

Solution (Note that this is the matrix we used at the beginning of this section.) First, we write out what A−λI is:

$$A - \lambda I = \begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix} - \lambda\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix} - \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} = \begin{bmatrix} 1-\lambda & 4 \\ 2 & 3-\lambda \end{bmatrix}$$

Therefore,

$$\det(A-\lambda I) = \begin{vmatrix} 1-\lambda & 4 \\ 2 & 3-\lambda \end{vmatrix} = (1-\lambda)(3-\lambda) - 8 = \lambda^2 - 4\lambda - 5$$

Since we want det(A−λI) = 0, we want λ² − 4λ − 5 = 0. This is a simple quadratic equation that is easy to factor:

$$\lambda^2 - 4\lambda - 5 = 0 \qquad (\lambda-5)(\lambda+1) = 0 \qquad \lambda = -1,\ 5$$

According to our above work, det(A−λI) = 0 when λ = −1, 5. Thus, the eigenvalues of A are −1 and 5.
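As a numerical aside, NumPy can confirm both the roots of this characteristic polynomial and the eigenvalues of A directly (a minimal check; the ordering of the returned values may differ):

```python
import numpy as np

A = np.array([[1, 4], [2, 3]])

print(np.roots([1, -4, -5]))    # roots of lambda^2 - 4*lambda - 5: [ 5. -1.]
print(np.linalg.eigvals(A))     # eigenvalues of A (order may vary): -1 and 5
```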

Earlier, when looking at the same matrix as used in our example, we wondered if we could find a vector ⃗x such that A⃗x = 3⃗x. According to this example, the answer is "No." With this matrix A, the only values of λ that work are −1 and 5.

Let's restate the above in a different way: It is pointless to try to find ⃗x where A⃗x = 3⃗x, for there is no such ⃗x. There are only 2 equations of this form that have a solution, namely

$$A\vec x = -\vec x \qquad \text{and} \qquad A\vec x = 5\vec x.$$

As we introduced this section, we gave a vector ⃗x such that A⃗x = 5⃗x. Is this the only one? Let's find out while calling our work an example; this will amount to finding the eigenvectors of A that correspond to the eigenvalue of 5.

Example 85 Find ⃗x such that A⃗x = 5⃗x, where

$$A = \begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix}.$$

Solution Recall that our algebra from before showed that if A⃗x = λ⃗x then (A−λI)⃗x = ⃗0. Therefore, we need to solve the equation (A−λI)⃗x = ⃗0 for ⃗x when λ = 5.

$$A - 5I = \begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix} - 5\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -4 & 4 \\ 2 & -2 \end{bmatrix}$$

To solve (A−5I)⃗x = ⃗0, we form the augmented matrix and put it into reduced row echelon form:

$$\left[\begin{array}{cc|c} -4 & 4 & 0 \\ 2 & -2 & 0 \end{array}\right] \;\xrightarrow{\text{rref}}\; \left[\begin{array}{cc|c} 1 & -1 & 0 \\ 0 & 0 & 0 \end{array}\right].$$

Thus

x1 = x2
x2 is free

and

$$\vec x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_2\begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$

We have infinite solutions to the equation A⃗x = 5⃗x; any nonzero scalar multiple of the vector [1; 1] is a solution. We can do a few examples to confirm this:

$$\begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 2 \\ 2 \end{bmatrix} = \begin{bmatrix} 10 \\ 10 \end{bmatrix} = 5\begin{bmatrix} 2 \\ 2 \end{bmatrix};\qquad
\begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 7 \\ 7 \end{bmatrix} = \begin{bmatrix} 35 \\ 35 \end{bmatrix} = 5\begin{bmatrix} 7 \\ 7 \end{bmatrix};\qquad
\begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 3 \\ 3 \end{bmatrix} = \begin{bmatrix} 15 \\ 15 \end{bmatrix} = 5\begin{bmatrix} 3 \\ 3 \end{bmatrix}.$$
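As an aside, `np.linalg.eig` returns one representative eigenvector per eigenvalue, scaled to unit length, so for λ = 5 it returns some scalar multiple of [1; 1]; a minimal check:

```python
import numpy as np

A = np.array([[1, 4], [2, 3]])
vals, vecs = np.linalg.eig(A)

i = np.argmax(np.isclose(vals, 5))    # column corresponding to eigenvalue 5
v = vecs[:, i]
print(v)                               # roughly +/-[0.707, 0.707], a multiple of [1, 1]
print(np.allclose(A @ v, 5 * v))       # True
```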

Our method of finding the eigenvalues of a matrix A boils down to determining which values of λ give the matrix (A−λI) a determinant of 0. In computing det(A−λI), we get a polynomial in λ whose roots are the eigenvalues of A. This polynomial is important and so it gets its own name.


Definition 28 Characteristic Polynomial

Let A be an n×n matrix. The characteristic polynomial of A is the nth degree polynomial p(λ) = det(A−λI).

Our definition just states what the characteristic polynomial is. We know from our work so far why we care: the roots of the characteristic polynomial of an n×n matrix A are the eigenvalues of A.

In Examples 84 and 85, we found eigenvalues and eigenvectors, respectively, of a given matrix. That is, given a matrix A, we found values λ and vectors ⃗x such that A⃗x = λ⃗x. The steps that follow outline the general procedure for finding eigenvalues and eigenvectors; we'll follow this up with some examples.

Key Idea 14 Finding Eigenvalues and Eigenvectors

Let A be an n×n matrix.

1. To find the eigenvalues of A, compute p(λ), the characteristic polynomial of A, set it equal to 0, then solve for λ.

2. To find the eigenvectors of A, for each eigenvalue solve the homogeneous system (A−λI)⃗x = ⃗0.
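As a numerical sketch of Key Idea 14 (for illustration only, and not a numerically robust method): `np.poly` gives the coefficients of the characteristic polynomial of a square matrix, `np.roots` finds its roots, and an SVD supplies a null-space vector of A − λI for each root. The helper name `eigenpairs` is ours.

```python
import numpy as np

def eigenpairs(A):
    """Follow Key Idea 14 numerically: characteristic-polynomial roots,
    then a null-space vector of (A - lam*I) for each root."""
    A = np.asarray(A, dtype=float)
    lams = np.roots(np.poly(A))            # step 1: roots of the characteristic polynomial
    pairs = []
    for lam in lams:
        M = A - lam * np.eye(A.shape[0])   # step 2: solve (A - lam*I) x = 0
        _, _, Vt = np.linalg.svd(M)
        pairs.append((lam, Vt[-1].conj())) # singular vector for the ~0 singular value
    return pairs

for lam, x in eigenpairs([[1, 4], [2, 3]]):
    print(np.round(lam, 6), np.round(x, 6))   # eigenvalues 5 and -1 with unit eigenvectors
```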

Example 86 Find the eigenvalues of A, and for each eigenvalue, find an eigenvector, where

$$A = \begin{bmatrix} -3 & 15 \\ 3 & 9 \end{bmatrix}.$$

Solution To find the eigenvalues, we must compute det(A−λI) and set it equal to 0.

$$\det(A-\lambda I) = \begin{vmatrix} -3-\lambda & 15 \\ 3 & 9-\lambda \end{vmatrix} = (-3-\lambda)(9-\lambda) - 45 = \lambda^2 - 6\lambda - 27 - 45 = \lambda^2 - 6\lambda - 72 = (\lambda-12)(\lambda+6)$$

Therefore, det(A−λI) = 0 when λ = −6 and 12; these are our eigenvalues. (We should note that p(λ) = λ² − 6λ − 72 is our characteristic polynomial.) It sometimes helps to give them "names," so we'll say λ1 = −6 and λ2 = 12. Now we find eigenvectors.

For λ1 = −6:

We need to solve the equation (A − (−6)I)⃗x = ⃗0. To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.

$$\left[\begin{array}{cc|c} 3 & 15 & 0 \\ 3 & 15 & 0 \end{array}\right] \;\xrightarrow{\text{rref}}\; \left[\begin{array}{cc|c} 1 & 5 & 0 \\ 0 & 0 & 0 \end{array}\right].$$

Our solution is

x1 = −5x2
x2 is free;

in vector form, we have

$$\vec x = x_2\begin{bmatrix} -5 \\ 1 \end{bmatrix}.$$

We may pick any nonzero value for x2 to get an eigenvector; a simple option is x2 = 1. Thus we have the eigenvector

$$\vec{x}_1 = \begin{bmatrix} -5 \\ 1 \end{bmatrix}.$$

(We used the notation x⃗1 to associate this eigenvector with the eigenvalue λ1.)

We now repeat this process to find an eigenvector for λ2 = 12:

In solving (A − 12I)⃗x = ⃗0, we find

$$\left[\begin{array}{cc|c} -15 & 15 & 0 \\ 3 & -3 & 0 \end{array}\right] \;\xrightarrow{\text{rref}}\; \left[\begin{array}{cc|c} 1 & -1 & 0 \\ 0 & 0 & 0 \end{array}\right].$$

In vector form, we have

$$\vec x = x_2\begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$

Again, we may pick any nonzero value for x2, and so we choose x2 = 1. Thus an eigenvector for λ2 is

$$\vec{x}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$

To summarize, we have:

eigenvalue λ1 = −6 with eigenvector x⃗1 = [−5; 1]

and

eigenvalue λ2 = 12 with eigenvector x⃗2 = [1; 1].

We should take a moment and check our work: is it true that Ax⃗1 = λ1x⃗1?

$$A\vec{x}_1 = \begin{bmatrix} -3 & 15 \\ 3 & 9 \end{bmatrix}\begin{bmatrix} -5 \\ 1 \end{bmatrix} = \begin{bmatrix} 30 \\ -6 \end{bmatrix} = (-6)\begin{bmatrix} -5 \\ 1 \end{bmatrix} = \lambda_1\vec{x}_1.$$

Yes; it appears we have truly found an eigenvalue/eigenvector pair for the matrix A.
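As a quick cross-check (an aside), `np.linalg.eig` recovers the same eigenvalues; it normalizes each eigenvector to length 1, so the columns it returns are scalar multiples of the vectors found above:

```python
import numpy as np

A = np.array([[-3, 15], [3, 9]])
vals, vecs = np.linalg.eig(A)
print(vals)                                       # -6 and 12 (order may vary)

for lam, v in zip(vals, vecs.T):
    print(lam, v, np.allclose(A @ v, lam * v))    # each column satisfies A v = lam v
```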

Let’s do another example.

Example 87 Let

$$A = \begin{bmatrix} -3 & 0 \\ 5 & 1 \end{bmatrix}.$$

Find the eigenvalues of A and an eigenvector for each eigenvalue.

Solution We first compute the characteristic polynomial, set it equal to 0, then solve for λ.

$$\det(A-\lambda I) = \begin{vmatrix} -3-\lambda & 0 \\ 5 & 1-\lambda \end{vmatrix} = (-3-\lambda)(1-\lambda)$$

From this, we see that det(A−λI) = 0 when λ = −3, 1. We'll set λ1 = −3 and λ2 = 1.

Finding an eigenvector for λ1:

We solve (A − (−3)I)⃗x = ⃗0 for ⃗x by row reducing the appropriate matrix:

$$\left[\begin{array}{cc|c} 0 & 0 & 0 \\ 5 & 4 & 0 \end{array}\right] \;\xrightarrow{\text{rref}}\; \left[\begin{array}{cc|c} 1 & 4/5 & 0 \\ 0 & 0 & 0 \end{array}\right].$$

Our solution, in vector form, is

$$\vec x = x_2\begin{bmatrix} -4/5 \\ 1 \end{bmatrix}.$$

Again, we can pick any nonzero value for x2; a nice choice would eliminate the fraction. Therefore we pick x2 = 5, and find

$$\vec{x}_1 = \begin{bmatrix} -4 \\ 5 \end{bmatrix}.$$

Finding an eigenvector for λ2:

We solve (A − (1)I)⃗x = ⃗0 for ⃗x by row reducing the appropriate matrix:

$$\left[\begin{array}{cc|c} -4 & 0 & 0 \\ 5 & 0 & 0 \end{array}\right] \;\xrightarrow{\text{rref}}\; \left[\begin{array}{cc|c} 1 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right].$$

We've seen a matrix like this before,9 but we may need a bit of a refresher. Our first row tells us that x1 = 0, and we see that no rows/equations involve x2. We conclude that x2 is free. Therefore, our solution, in vector form, is

$$\vec x = x_2\begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$

We pick x2 = 1, and find

$$\vec{x}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$

To summarize, we have:

eigenvalue λ1 = −3 with eigenvector x⃗1 = [−4; 5]

and

eigenvalue λ2 = 1 with eigenvector x⃗2 = [0; 1].

9 See page 31. Our future need of knowing how to handle this situation is foretold in footnote 5.

So far, our examples have involved 2×2 matrices. Let’s do an example with a 3×3 matrix.

Example 88 Find the eigenvalues of A, and for each eigenvalue, give one eigenvector, where

$$A = \begin{bmatrix} -7 & -2 & 10 \\ -3 & 2 & 3 \\ -6 & -2 & 9 \end{bmatrix}.$$

Solution We first compute the characteristic polynomial, set it equal to 0, then solve for λ. A warning: this process is rather long. We'll use cofactor expansion along the first row; don't get bogged down with the arithmetic that comes from each step; just try to get the basic idea of what was done from step to step.


$$\begin{aligned}
\det(A-\lambda I) &= \begin{vmatrix} -7-\lambda & -2 & 10 \\ -3 & 2-\lambda & 3 \\ -6 & -2 & 9-\lambda \end{vmatrix} \\
&= (-7-\lambda)\begin{vmatrix} 2-\lambda & 3 \\ -2 & 9-\lambda \end{vmatrix} - (-2)\begin{vmatrix} -3 & 3 \\ -6 & 9-\lambda \end{vmatrix} + 10\begin{vmatrix} -3 & 2-\lambda \\ -6 & -2 \end{vmatrix} \\
&= (-7-\lambda)(\lambda^2 - 11\lambda + 24) + 2(3\lambda - 9) + 10(-6\lambda + 18) \\
&= -\lambda^3 + 4\lambda^2 - \lambda - 6 \\
&= -(\lambda+1)(\lambda-2)(\lambda-3)
\end{aligned}$$

In the last step we factored the characteristic polynomial −λ³ + 4λ² − λ − 6. Factoring polynomials of degree > 2 is not trivial; we'll assume the reader has access to methods for doing this accurately.10

10 You probably learned how to do this in an algebra course. As a reminder, possible roots can be found by factoring the constant term (in this case, −6) of the polynomial. That is, the roots of this equation could be ±1, ±2, ±3 and ±6. That's 12 things to check. One could also graph this polynomial to find the roots. Graphing will show us that λ = 3 looks like a root, and a simple calculation will confirm that it is.

Our eigenvalues are λ1 = −1, λ2 = 2 and λ3 = 3. We now find corresponding eigenvectors.

For λ1 = −1:

We need to solve the equation (A − (−1)I)⃗x = ⃗0. To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.

$$\left[\begin{array}{ccc|c} -6 & -2 & 10 & 0 \\ -3 & 3 & 3 & 0 \\ -6 & -2 & 10 & 0 \end{array}\right] \;\xrightarrow{\text{rref}}\; \left[\begin{array}{ccc|c} 1 & 0 & -1.5 & 0 \\ 0 & 1 & -.5 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

Our solution, in vector form, is

$$\vec x = x_3\begin{bmatrix} 3/2 \\ 1/2 \\ 1 \end{bmatrix}.$$

We can pick any nonzero value for x3; a nice choice would get rid of the fractions. So we'll set x3 = 2 and choose x⃗1 = [3; 1; 2] as our eigenvector.

For λ2 = 2:

We need to solve the equation (A − 2I)⃗x = ⃗0. To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.


$$\left[\begin{array}{ccc|c} -9 & -2 & 10 & 0 \\ -3 & 0 & 3 & 0 \\ -6 & -2 & 7 & 0 \end{array}\right] \;\xrightarrow{\text{rref}}\; \left[\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 1 & -.5 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

Our solution, in vector form, is

$$\vec x = x_3\begin{bmatrix} 1 \\ 1/2 \\ 1 \end{bmatrix}.$$

We can pick any nonzero value for x3; again, a nice choice would get rid of the fractions. So we'll set x3 = 2 and choose x⃗2 = [2; 1; 2] as our eigenvector.

For λ3 = 3:

We need to solve the equation (A − 3I)⃗x = ⃗0. To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.

$$\left[\begin{array}{ccc|c} -10 & -2 & 10 & 0 \\ -3 & -1 & 3 & 0 \\ -6 & -2 & 6 & 0 \end{array}\right] \;\xrightarrow{\text{rref}}\; \left[\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

Our solution, in vector form, is (note that x2 = 0):

$$\vec x = x_3\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}.$$

We can pick any nonzero value for x3; an easy choice is x3 = 1, so we choose x⃗3 = [1; 0; 1] as our eigenvector.

To summarize, we have the following eigenvalue/eigenvector pairs:

eigenvalue λ1 = −1 with eigenvector x⃗1 = [3; 1; 2]

eigenvalue λ2 = 2 with eigenvector x⃗2 = [2; 1; 2]

eigenvalue λ3 = 3 with eigenvector x⃗3 = [1; 0; 1]

Let's practice once more.


Example 89 Find the eigenvalues of A, and for each eigenvalue, give one eigenvector, where

$$A = \begin{bmatrix} 2 & 1 & -1 \\ 0 & 1 & 6 \\ 0 & 3 & 4 \end{bmatrix}.$$

Solution We first compute the characteristic polynomial, set it equal to 0, then solve for λ. We'll use cofactor expansion down the first column (since it has lots of zeros).

$$\det(A-\lambda I) = \begin{vmatrix} 2-\lambda & 1 & -1 \\ 0 & 1-\lambda & 6 \\ 0 & 3 & 4-\lambda \end{vmatrix} = (2-\lambda)\begin{vmatrix} 1-\lambda & 6 \\ 3 & 4-\lambda \end{vmatrix} = (2-\lambda)(\lambda^2 - 5\lambda - 14) = (2-\lambda)(\lambda-7)(\lambda+2)$$

Notice that while the characteristic polynomial is cubic, we never actually saw a cubic; we never distributed the (2−λ) across the quadratic. Instead, we realized that this was a factor of the cubic, and just factored the remaining quadratic. (This makes this example quite a bit simpler than the previous example.)

Our eigenvalues are λ1 = −2, λ2 = 2 and λ3 = 7. We now find corresponding eigenvectors.

For λ1 = −2:

We need to solve the equation (A − (−2)I)⃗x = ⃗0. To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.

$$\left[\begin{array}{ccc|c} 4 & 1 & -1 & 0 \\ 0 & 3 & 6 & 0 \\ 0 & 3 & 6 & 0 \end{array}\right] \;\xrightarrow{\text{rref}}\; \left[\begin{array}{ccc|c} 1 & 0 & -3/4 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

Our solution, in vector form, is

$$\vec x = x_3\begin{bmatrix} 3/4 \\ -2 \\ 1 \end{bmatrix}.$$

We can pick any nonzero value for x3; a nice choice would get rid of the fractions. So we'll set x3 = 4 and choose x⃗1 = [3; −8; 4] as our eigenvector.

For λ2 = 2:

We need to solve the equation (A − 2I)⃗x = ⃗0. To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.

$$\left[\begin{array}{ccc|c} 0 & 1 & -1 & 0 \\ 0 & -1 & 6 & 0 \\ 0 & 3 & 2 & 0 \end{array}\right] \;\xrightarrow{\text{rref}}\; \left[\begin{array}{ccc|c} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

This looks funny, so we'll remind ourselves how to solve it. The first two rows tell us that x2 = 0 and x3 = 0, respectively. Notice that no row/equation uses x1; we conclude that it is free. Therefore, our solution in vector form is

$$\vec x = x_1\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.$$

We can pick any nonzero value for x1; an easy choice is x1 = 1, so we choose x⃗2 = [1; 0; 0] as our eigenvector.

For λ3 = 7:

We need to solve the equation (A − 7I)⃗x = ⃗0. To do this, we form the appropriate augmented matrix and put it into reduced row echelon form.

$$\left[\begin{array}{ccc|c} -5 & 1 & -1 & 0 \\ 0 & -6 & 6 & 0 \\ 0 & 3 & -3 & 0 \end{array}\right] \;\xrightarrow{\text{rref}}\; \left[\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

Our solution, in vector form, is (note that x1 = 0):

$$\vec x = x_3\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}.$$

We can pick any nonzero value for x3; an easy choice is x3 = 1, so x⃗3 = [0; 1; 1] is our eigenvector.

To summarize, we have the following eigenvalue/eigenvector pairs:

eigenvalue λ1 = −2 with eigenvector x⃗1 = [3; −8; 4]

eigenvalue λ2 = 2 with eigenvector x⃗2 = [1; 0; 0]

eigenvalue λ3 = 7 with eigenvector x⃗3 = [0; 1; 1]
