

25. [ 15 45 3 4 ; 55 164 15 15 ; 215 640 62 59 ; 4 12 0 1 ]

26. [ 1 0 2 8 ; 0 1 0 0 ; 0 4 29 110 ; 0 3 5 19 ]

27. [ 0 0 1 0 ; 0 0 0 1 ; 1 0 0 0 ; 0 1 0 0 ]

28. [ 1 0 0 0 ; 0 2 0 0 ; 0 0 3 0 ; 0 0 0 4 ]



In Exercises 29 – 36, a matrix A and a vector ⃗b are given. Solve the equation A⃗x = ⃗b using Theorem 8.

29. A = [ 3 5 ; 2 3 ],  ⃗b = [ 21 ; 13 ]

30. A = [ 1 4 ; 4 15 ],  ⃗b = [ 21 ; 77 ]

31. A = [ 9 70 ; 4 31 ],  ⃗b = [ 2 ; 1 ]

32. A = [ 10 57 ; 3 17 ],  ⃗b = [ 14 ; 4 ]

33. A = [ 1 2 12 ; 0 1 6 ; 3 0 1 ],  ⃗b = [ 17 ; 5 ; 20 ]

34. A = [ 1 0 3 ; 8 2 13 ; 12 3 20 ],  ⃗b = [ 34 ; 159 ; 243 ]

35. A = [ 5 0 2 ; 8 1 5 ; 2 0 1 ],  ⃗b = [ 33 ; 70 ; 15 ]

36. A = [ 1 6 0 ; 0 1 0 ; 2 8 1 ],  ⃗b = [ 69 ; 10 ; 102 ]

2.7 Properties of the Matrix Inverse

In this section we study invertible matrices in two ways: first, we look at ways to tell whether or not a matrix is invertible, and second, we study properties of invertible matrices (that is, how they interact with other matrix operations).

We start with collecting ways in which we know that a matrix is invertible. We actually already know the truth of this theorem from our work in the previous section, but it is good to list the following statements in one place. As we move through other sections, we'll add on to this theorem.

Theorem 9    Invertible Matrix Theorem

Let A be an n×n matrix. The following statements are equivalent.

(a) A is invertible.

(b) There exists a matrix B such that BA = I.

(c) There exists a matrix C such that AC = I.

(d) The reduced row echelon form of A is I.

(e) The equation A⃗x = ⃗b has exactly one solution for every vector ⃗b.

(f) The equation A⃗x = ⃗0 has exactly one solution (namely, ⃗x = ⃗0).

Let's make note of a few things about the Invertible Matrix Theorem.

1. First, note that the theorem uses the phrase "the following statements are equivalent." When two or more statements are equivalent, it means that the truth of any one of them implies that the rest are also true; if any one of the statements is false, then they are all false. So, for example, if we determined that the equation A⃗x = ⃗0 had exactly one solution (and A was an n×n matrix) then we would know that A was invertible, that A⃗x = ⃗b had only one solution, that the reduced row echelon form of A was I, etc.

2. Let's go through each of the statements and see why we already knew they all said essentially the same thing.

(a) This simply states that A is invertible – that is, that there exists a matrix A⁻¹ such that A⁻¹A = AA⁻¹ = I. We'll go on to show why all the other statements basically tell us "A is invertible."

(b) If we know that A is invertible, then we already know that there is a matrix B where BA = I. That is part of the definition of invertible. However, we can also "go the other way." Recall from Theorem 5 that even if all we know is that there is a matrix B where BA = I, then we also know that AB = I. That is, we know that B is the inverse of A (and hence A is invertible).

(c) We use the same logic as in the previous statement to show why this is the same as "A is invertible."

(d) If A is invertible, we can find the inverse by using Key Idea 10 (which in turn depends on Theorem 5). The crux of Key Idea 10 is that the reduced row echelon form of A is I; if it is something else, we can't find A⁻¹ (it doesn't exist). Knowing that A is invertible means that the reduced row echelon form of A is I. We can go the other way; if we know that the reduced row echelon form of A is I, then we can employ Key Idea 10 to find A⁻¹, so A is invertible.

(e) We know from Theorem 8 that if A is invertible, then given any vector ⃗b, A⃗x = ⃗b always has exactly one solution, namely ⃗x = A⁻¹⃗b. However, we can go the other way; let's say we know that A⃗x = ⃗b always has exactly one solution. How can we conclude that A is invertible?

Think about how we, up to this point, determined the solution to A⃗x = ⃗b. We set up the augmented matrix [ A ⃗b ] and put it into reduced row echelon form. We know that getting the identity matrix on the left means that we had a unique solution (and not getting the identity means we either have no solution or infinite solutions). So getting I on the left means having a unique solution; having I on the left means that the reduced row echelon form of A is I, which we know from above is the same as A being invertible.

(f) This is the same as the above; simply replace the vector ⃗b with the vector ⃗0.

So we came up with a list of statements that are all equivalent to the statement "A is invertible." Again, if any one of them is true (or false), then they are all true (or all false).
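Statement (d) also gives a concrete way to test invertibility with a computer algebra system: row reduce A and see whether the identity appears. The following is a minimal sketch, assuming Python with SymPy is available; the matrix used is just a sample for illustration, not one from the text.

    from sympy import Matrix, eye

    A = Matrix([[3, 2],
                [0, 1]])              # a sample 2x2 matrix

    rref_A, pivot_cols = A.rref()     # rref() returns (rref of A, pivot column indices)
    print(rref_A == eye(2))           # True exactly when the rref of A is I, i.e. A is invertible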

Theorem 9 states formally that if A is invertible, then A⃗x = ⃗b has exactly one solution, namely A⁻¹⃗b. What if A is not invertible? What are the possibilities for solutions to A⃗x = ⃗b?

We know that A⃗x = ⃗b cannot have exactly one solution; if it did, then by our theorem A would be invertible. Recalling that linear equations have either one solution, infinite solutions, or no solution, we are left with the latter options when A is not invertible. This idea is important and so we'll state it again as a Key Idea.


Key Idea 11    Solutions to A⃗x = ⃗b and the Invertibility of A

Consider the system of linear equations A⃗x = ⃗b.

1. If A is invertible, then A⃗x = ⃗b has exactly one solution, namely A⁻¹⃗b.

2. If A is not invertible, then A⃗x = ⃗b has either infinite solutions or no solution.
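A small numerical illustration of Key Idea 11, assuming NumPy is available (the matrix and right-hand sides below are made up for this sketch, not taken from the text): with a non-invertible A, changing ⃗b switches the system between "no solution" and "infinitely many solutions," but never gives exactly one.

    import numpy as np

    A = np.array([[1., 2.],
                  [2., 4.]])                 # second row is twice the first, so A is not invertible

    for b in (np.array([3., 6.]),            # consistent right-hand side
              np.array([3., 7.])):           # inconsistent right-hand side
        aug = np.column_stack([A, b])        # the augmented matrix [A | b]
        if np.linalg.matrix_rank(aug) > np.linalg.matrix_rank(A):
            print(b, "-> no solution")
        else:
            print(b, "-> infinitely many solutions")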

In Theorem 9 we've come up with a list of ways in which we can tell whether or not a matrix is invertible. At the same time, we have come up with a list of properties of invertible matrices – things we know that are true about them. (For instance, if we know that A is invertible, then we know that A⃗x = ⃗b has only one solution.)

We now go on to discover other properties of invertible matrices. Specifically, we want to find out how invertibility interacts with other matrix operations. For instance, if we know that A and B are invertible, what is the inverse of A + B? What is the inverse of AB? What is "the inverse of the inverse?" We'll explore these questions through an example.

Example 58    Let

    A = [ 3 2 ; 0 1 ]   and   B = [ −2 0 ; 1 1 ].

Find:

1. A⁻¹
2. B⁻¹
3. (AB)⁻¹
4. (A⁻¹)⁻¹
5. (A+B)⁻¹
6. (5A)⁻¹

In addition, try to find connections between each of the above.

Solution

1. Computing A⁻¹ is straightforward; we'll use Theorem 7.

    A⁻¹ = (1/3) [ 1 −2 ; 0 3 ] = [ 1/3 −2/3 ; 0 1 ]

2. We compute B⁻¹ in the same way as above.

    B⁻¹ = (1/(−2)) [ 1 0 ; −1 −2 ] = [ −1/2 0 ; 1/2 1 ]

3. To compute (AB)⁻¹, we first compute AB:

    AB = [ 3 2 ; 0 1 ] [ −2 0 ; 1 1 ] = [ −4 2 ; 1 1 ]

We now apply Theorem 7 to find (AB)⁻¹.

    (AB)⁻¹ = (1/(−6)) [ 1 −2 ; −1 −4 ] = [ −1/6 1/3 ; 1/6 2/3 ]

4. To compute (A⁻¹)⁻¹, we simply apply Theorem 7 to A⁻¹:

    (A⁻¹)⁻¹ = (1/(1/3)) [ 1 2/3 ; 0 1/3 ] = [ 3 2 ; 0 1 ].

5. To compute (A+B)⁻¹, we first compute A + B then apply Theorem 7:

    A + B = [ 3 2 ; 0 1 ] + [ −2 0 ; 1 1 ] = [ 1 2 ; 1 2 ].

Hence

    (A+B)⁻¹ = (1/0) [ 2 −2 ; −1 1 ] = !

Our last expression is really nonsense; we know that if ad − bc = 0, then the given matrix is not invertible. That is the case with A + B, so we conclude that A + B is not invertible.

6. To compute (5A)⁻¹, we compute 5A and then apply Theorem 7.

    (5A)⁻¹ = ( [ 15 10 ; 0 5 ] )⁻¹ = (1/75) [ 5 −10 ; 0 15 ] = [ 1/15 −2/15 ; 0 1/5 ]

We now look for connections between A⁻¹, B⁻¹, (AB)⁻¹, (A⁻¹)⁻¹ and (A+B)⁻¹.

3. Is there some sort of relationship between (AB)⁻¹ and A⁻¹ and B⁻¹? A first guess that seems plausible is (AB)⁻¹ = A⁻¹B⁻¹. Is this true? Using our work from above, we have

    A⁻¹B⁻¹ = [ 1/3 −2/3 ; 0 1 ] [ −1/2 0 ; 1/2 1 ] = [ −1/2 −2/3 ; 1/2 1 ].

Obviously, this is not equal to (AB)⁻¹. Before we do some further guessing, let's think about what the inverse of AB is supposed to do. The inverse – let's call it C – is supposed to be a matrix such that

    (AB)C = C(AB) = I.

In examining the expression (AB)C, we see that we want B to somehow "cancel" with C. What "cancels" B? An obvious answer is B⁻¹. This gives us a thought:

perhaps we got the order of A⁻¹ and B⁻¹ wrong before. After all, we were hoping to find that

    ABA⁻¹B⁻¹ ?= I,

but algebraically speaking, it is hard to cancel out these terms.[21] However, switching the order of A⁻¹ and B⁻¹ gives us some hope. Is (AB)⁻¹ = B⁻¹A⁻¹? Let's see.

    (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹    (regrouping by the associative property)
                 = AIA⁻¹          (BB⁻¹ = I)
                 = AA⁻¹           (AI = A)
                 = I              (AA⁻¹ = I)

Thus it seems that (AB)⁻¹ = B⁻¹A⁻¹. Let's confirm this with our example matrices.

    B⁻¹A⁻¹ = [ −1/2 0 ; 1/2 1 ] [ 1/3 −2/3 ; 0 1 ] = [ −1/6 1/3 ; 1/6 2/3 ] = (AB)⁻¹.

It worked!

4. Is there some sort of connection between (A⁻¹)⁻¹ and A? The answer is pretty obvious: they are equal. The "inverse of the inverse" returns one to the original matrix.

5. Is there some sort of relationship between (A+B)⁻¹, A⁻¹ and B⁻¹? Certainly, if we were forced to make a guess without working any examples, we would guess that

    (A+B)⁻¹ ?= A⁻¹ + B⁻¹.

However, we saw that in our example, the matrix (A+B) isn't even invertible. This pretty much kills any hope of a connection.

6. Is there a connection between (5A)⁻¹ and A⁻¹? Consider:

    (5A)⁻¹ = [ 1/15 −2/15 ; 0 1/5 ] = (1/5) [ 1/3 −2/3 ; 0 1 ] = (1/5) A⁻¹

Yes, there is a connection!
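The whole example is easy to replay numerically. The sketch below assumes NumPy; the small helper inv2x2 is our own name, and it simply encodes the usual 2×2 inverse formula (1/(ad − bc)) [ d −b ; −c a ], which is what Theorem 7 appears to state.

    import numpy as np

    def inv2x2(M):
        a, b = M[0]
        c, d = M[1]
        det = a * d - b * c                    # if det is 0, M is not invertible
        return np.array([[d, -b], [-c, a]]) / det

    A = np.array([[3., 2.], [0., 1.]])
    B = np.array([[-2., 0.], [1., 1.]])

    print(np.allclose(inv2x2(A @ B), inv2x2(B) @ inv2x2(A)))   # (AB)^-1 = B^-1 A^-1
    print(np.allclose(inv2x2(inv2x2(A)), A))                   # (A^-1)^-1 = A
    print(np.allclose(inv2x2(5 * A), inv2x2(A) / 5))           # (5A)^-1 = (1/5) A^-1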

Let's summarize the results of this example. If A and B are both invertible matrices, then so is their product, AB. We demonstrated this with our example, and there is more to be said. Let's suppose that A and B are n×n matrices, but we don't yet know if they are invertible. If AB is invertible, then each of A and B are; if AB is not invertible, then A or B is also not invertible.

[21] Recall that matrix multiplication is not commutative.

In short, invertibility "works well" with matrix multiplication. However, we saw that it doesn't work well with matrix addition. Knowing that A and B are invertible does not help us find the inverse of (A+B); in fact, the latter matrix may not even be invertible.[22]
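A quick instance of this failure, not taken from the text: the identity matrix and its negative are both invertible, yet their sum is the zero matrix. A minimal sketch, assuming NumPy:

    import numpy as np

    A = np.eye(2)                          # invertible (it is its own inverse)
    B = -np.eye(2)                         # also invertible
    print(np.linalg.matrix_rank(A + B))    # 0: A + B is the zero matrix, which is not invertible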

Let's do one more example, then we'll summarize the results of this section in a theorem.

Example 59    Find the inverse of A = [ 2 0 0 ; 0 3 0 ; 0 0 7 ].

Solution    We'll find A⁻¹ using Key Idea 10.

    [ 2 0 0 | 1 0 0 ; 0 3 0 | 0 1 0 ; 0 0 7 | 0 0 1 ]   −→ rref   [ 1 0 0 | 1/2 0 0 ; 0 1 0 | 0 1/3 0 ; 0 0 1 | 0 0 1/7 ]

Therefore

    A⁻¹ = [ 1/2 0 0 ; 0 1/3 0 ; 0 0 1/7 ].

The matrix A in the previous example is a diagonal matrix: the only nonzero entries of A lie on the diagonal.[23] The relationship between A and A⁻¹ in the above example seems pretty strong, and it holds true in general. We'll state this and summarize the results of this section with the following theorem.

[22] The fact that invertibility works well with matrix multiplication should not come as a surprise. After all, saying that A is invertible makes a statement about the multiplicative properties of A. It says that I can multiply A with a special matrix to get I. Invertibility, in and of itself, says nothing about matrix addition, therefore we should not be too surprised that it doesn't work well with it.

[23] We still haven't formally defined diagonal, but the definition is rather visual so we risk it. See Definition 20 on page 123 for more details.


Theorem 10    Properties of Invertible Matrices

Let A and B be n×n invertible matrices. Then:

1. AB is invertible; (AB)⁻¹ = B⁻¹A⁻¹.

2. A⁻¹ is invertible; (A⁻¹)⁻¹ = A.

3. nA is invertible for any nonzero scalar n; (nA)⁻¹ = (1/n) A⁻¹.

4. If A is a diagonal matrix, with diagonal entries d₁, d₂, . . . , dₙ, where none of the diagonal entries are 0, then A⁻¹ exists and is a diagonal matrix. Furthermore, the diagonal entries of A⁻¹ are 1/d₁, 1/d₂, . . . , 1/dₙ.

Furthermore,

1. If a product AB is not invertible, then A or B is not invertible.

2. If A or B are not invertible, then AB is not invertible.
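Item 4 is easy to check numerically. A minimal sketch, assuming NumPy, using the diagonal entries from Example 59:

    import numpy as np

    d = np.array([2., 3., 7.])                               # nonzero diagonal entries
    A = np.diag(d)                                           # the diagonal matrix of Example 59
    print(np.allclose(np.linalg.inv(A), np.diag(1.0 / d)))   # True: the inverse has diagonal entries 1/d_i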

We end this section with a comment about solving systems of equations "in real life."[24] Solving a system A⃗x = ⃗b by computing A⁻¹⃗b seems pretty slick, so it would make sense that this is the way it is normally done. However, in practice, this is rarely done. There are two main reasons why this is the case.

First, computing A⁻¹ and A⁻¹⃗b is "expensive" in the sense that it takes up a lot of computing time. Certainly, our calculators have no trouble dealing with the 3×3 cases we often consider in this textbook, but in real life the matrices being considered are very large (as in, hundreds of thousands of rows and columns). Computing A⁻¹ alone is rather impractical, and we waste a lot of time if we come to find out that A⁻¹ does not exist. Even if we already know what A⁻¹ is, computing A⁻¹⃗b is computationally expensive – Gaussian elimination is faster.

Secondly, computing A⁻¹ using the method we've described often gives rise to numerical roundoff errors. Even though computers often do computations with an accuracy to more than 8 decimal places, after thousands of computations, roundoffs can cause big errors. (A "small" 1,000×1,000 matrix has 1,000,000 entries! That's a lot of places to have roundoff errors accumulate!) It is not unheard of to have a computer compute A⁻¹ for a large matrix, and then immediately have it compute AA⁻¹ and not get the identity matrix.[25]

[24] Yes, real people do solve linear equations in real life. Not just mathematicians, but economists, engineers, and scientists of all flavors regularly need to solve linear equations, and the matrices they use are often huge.

Most people see matrices at work without thinking about it. Digital pictures are simply "rectangular arrays" of numbers representing colors – they are matrices of colors. Many of the standard image processing operations involve matrix operations. The author's wife has a "7 megapixel" camera which creates pictures that are 3072×2304 in size, giving over 7 million pixels, and that isn't even considered a "large" picture these days.

Therefore, in real life, solutions to A⃗x = ⃗b are usually found using the methods we learned in Section 2.4. It turns out that even with all of our advances in mathematics, it is hard to beat the basic method that Gauss introduced a long time ago.
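The point can be seen in a short experiment. A minimal sketch, assuming NumPy; the size and random seed are arbitrary choices for illustration, not from the text. It solves the same system once with the library's elimination-based solver and once by forming A⁻¹ explicitly, then compares the residuals ∥A⃗x − ⃗b∥.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    x_solve = np.linalg.solve(A, b)            # factor-and-eliminate, the standard route
    x_inv   = np.linalg.inv(A) @ b             # form the inverse explicitly, then multiply

    print(np.linalg.norm(A @ x_solve - b))     # residual of the solver route
    print(np.linalg.norm(A @ x_inv - b))       # residual of the explicit-inverse route

Typically the solver's residual is at least as small, and it avoids the wasted work of forming A⁻¹ when it isn't needed.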

Exercises 2.7

In Exercises 1 – 4, matrices A and B are given. Compute (AB)⁻¹ and B⁻¹A⁻¹.

1. A = [ 1 2 ; 1 1 ],  B = [ 3 5 ; 2 5 ]

2. A = [ 1 2 ; 3 4 ],  B = [ 7 1 ; 2 1 ]

3. A = [ 2 5 ; 3 8 ],  B = [ 1 1 ; 1 4 ]

4. A = [ 2 4 ; 2 5 ],  B = [ 2 2 ; 6 5 ]

In Exercises 5 – 8, a 2×2 matrix A is given. Compute A⁻¹ and (A⁻¹)⁻¹ using Theorem 7.

5. A = [ 3 5 ; 1 2 ]

6. A = [ 3 5 ; 2 4 ]

7. A = [ 2 7 ; 1 3 ]

8. A = [ 9 0 ; 7 9 ]

9. Find 2×2 matrices A and B that are each invertible, but A + B is not.

10. Create a random 6×6 matrix A, then have a calculator or computer compute AA⁻¹. Was the identity matrix returned exactly? Comment on your results.

11. Use a calculator or computer to compute AA⁻¹, where

    A = [ 1 2 3 4 ; 1 4 9 16 ; 1 8 27 64 ; 1 16 81 256 ].

Was the identity matrix returned exactly? Comment on your results.

[25] The result is usually very close, with the numbers on the diagonal close to 1 and the other entries near 0. But it isn't exactly the identity matrix.

3 Operations on Matrices

In the previous chapter we learned about matrix arithmetic: adding, subtracting, and multiplying matrices, finding inverses, and multiplying by scalars. In this chapter we learn about some operations that we perform on matrices. We can think of them as functions: you input a matrix, and you get something back. One of these operations, the transpose, will return another matrix. With the other operations, the trace and the determinant, we input matrices and get numbers in return, an idea that is different than what we have seen before.

3.1 The Matrix Transpose

AS YOU READ . . .

1. T/F: If A is a 3×5 matrix, then Aᵀ will be a 5×3 matrix.

2. Where are there zeros in an upper triangular matrix?

3. T/F: A matrix is symmetric if it doesn't change when you take its transpose.

4. What is the transpose of the transpose of A?

5. Give 2 other terms to describe symmetric matrices besides "interesting."

We jump right in with a definition.

Definition 19    Transpose

Let A be an m×n matrix. The transpose of A, denoted Aᵀ, is the n×m matrix whose columns are the respective rows of A.

Examples will make this definition clear.

Example 60    Find the transpose of A = [ 1 2 3 ; 4 5 6 ].

Solution    Note that A is a 2×3 matrix, so Aᵀ will be a 3×2 matrix. By the definition, the first column of Aᵀ is the first row of A; the second column of Aᵀ is the second row of A. Therefore,

    Aᵀ = [ 1 4 ; 2 5 ; 3 6 ].
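On a computer the transpose is a built-in operation. A minimal sketch, assuming NumPy, using the matrix of Example 60:

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])

    print(A.T)           # the 3x2 matrix whose columns are the rows of A
    print(A.T.shape)     # (3, 2)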

Example 61    Find the transpose of the following matrices.

    A = [ 7 2 9 1 ; 2 1 3 0 ; 5 3 0 11 ]
    B = [ 1 10 2 ; 3 −5 7 ; 4 2 −3 ]
    C = [ 1 1 7 8 3 ]

Solution    We find each transpose using the definition without explanation. Make note of the dimensions of the original matrix and the dimensions of its transpose.

    Aᵀ = [ 7 2 5 ; 2 1 3 ; 9 3 0 ; 1 0 11 ]
    Bᵀ = [ 1 3 4 ; 10 −5 2 ; 2 7 −3 ]
    Cᵀ = [ 1 ; 1 ; 7 ; 8 ; 3 ]

Notice that with matrix B, when we took the transpose, the diagonal did not change. We can see what the diagonal is below, where we rewrite B and Bᵀ; the diagonal consists of the entries 1, −5 and −3. We'll follow this by a definition of what we mean by "the diagonal of a matrix," along with a few other related definitions.

    B  = [ 1 10 2 ; 3 −5 7 ; 4 2 −3 ]
    Bᵀ = [ 1 3 4 ; 10 −5 2 ; 2 7 −3 ]

It is probably pretty clear why we call those entries "the diagonal." Here is the formal definition.


Definition 20    The Diagonal, a Diagonal Matrix, Triangular Matrices

Let A be an m×n matrix. The diagonal of A consists of the entries a₁₁, a₂₂, . . . of A.

A diagonal matrix is an n×n matrix in which the only nonzero entries lie on the diagonal.

An upper (lower) triangular matrix is a matrix in which any nonzero entries lie on or above (below) the diagonal.

Example 62    Consider the matrices A, B, C and I₄, as well as their transposes, where

    A = [ 1 2 3 ; 0 4 5 ; 0 0 6 ]
    B = [ 3 0 0 ; 0 7 0 ; 0 0 1 ]
    C = [ 1 2 3 ; 0 4 5 ; 0 0 6 ; 0 0 0 ]

Identify the diagonal of each matrix, and state whether each matrix is diagonal, upper triangular, lower triangular, or none of the above.

Solution    We first compute the transpose of each matrix.

    Aᵀ = [ 1 0 0 ; 2 4 0 ; 3 5 6 ]
    Bᵀ = [ 3 0 0 ; 0 7 0 ; 0 0 1 ]
    Cᵀ = [ 1 0 0 0 ; 2 4 0 0 ; 3 5 6 0 ]

Note that I₄ᵀ = I₄.

The diagonals of A and Aᵀ are the same, consisting of the entries 1, 4 and 6. The diagonals of B and Bᵀ are also the same, consisting of the entries 3, 7 and 1. Finally, the diagonals of C and Cᵀ are the same, consisting of the entries 1, 4 and 6.

The matrix A is upper triangular; the only nonzero entries lie on or above the diagonal. Likewise, Aᵀ is lower triangular.

The matrix B is diagonal. By their definitions, we can also see that B is both upper and lower triangular. Likewise, I₄ is diagonal, as well as upper and lower triangular.

Finally, C is upper triangular, with Cᵀ being lower triangular.

Make note of the definitions of diagonal and triangular matrices. We specify that a diagonal matrix must be square, but triangular matrices don't have to be. ("Most" of the time, however, the ones we study are.) Also, as we mentioned before in the example, by definition a diagonal matrix is also both upper and lower triangular. Finally, notice that by definition, the transpose of an upper triangular matrix is a lower triangular matrix, and vice-versa.

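These definitions are straightforward to test on a computer. A minimal sketch, assuming NumPy; the helper names are ours, chosen for this illustration:

    import numpy as np

    def is_upper_triangular(M):
        return np.array_equal(M, np.triu(M))   # no nonzero entries strictly below the diagonal

    def is_lower_triangular(M):
        return np.array_equal(M, np.tril(M))   # no nonzero entries strictly above the diagonal

    A = np.array([[1, 2, 3],
                  [0, 4, 5],
                  [0, 0, 6]])                  # the matrix A of Example 62

    print(is_upper_triangular(A))              # True
    print(is_lower_triangular(A))              # False
    print(is_lower_triangular(A.T))            # True: the transpose of an upper triangular matrix is lower triangular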

There are many questions to probe concerning the transpose operation.[1] The first set of questions we'll investigate involve the matrix arithmetic we learned from last chapter. We do this investigation by way of examples, and then summarize what we have learned at the end.

Example 63    Let

    A = [ 1 2 3 ; 4 5 6 ]   and   B = [ 1 2 1 ; 3 −1 0 ].

Find Aᵀ + Bᵀ and (A+B)ᵀ.

Solution    We note that

    Aᵀ = [ 1 4 ; 2 5 ; 3 6 ]   and   Bᵀ = [ 1 3 ; 2 −1 ; 1 0 ].

Therefore

    Aᵀ + Bᵀ = [ 1 4 ; 2 5 ; 3 6 ] + [ 1 3 ; 2 −1 ; 1 0 ] = [ 2 7 ; 4 4 ; 4 6 ].

Also,

    (A+B)ᵀ = ( [ 1 2 3 ; 4 5 6 ] + [ 1 2 1 ; 3 −1 0 ] )ᵀ = ( [ 2 4 4 ; 7 4 6 ] )ᵀ = [ 2 7 ; 4 4 ; 4 6 ].

It looks like "the sum of the transposes is the transpose of the sum."[2] This should lead us to wonder how the transpose works with multiplication.
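A quick numerical confirmation of the sum property, assuming NumPy and using the matrices of Example 63:

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])
    B = np.array([[1, 2, 1],
                  [3, -1, 0]])

    print(np.array_equal(A.T + B.T, (A + B).T))   # True: the sum of the transposes is the transpose of the sum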

Example 64    Let

    A = [ 1 2 ; 3 4 ]   and   B = [ 1 2 −1 ; 1 0 1 ].

Find (AB)ᵀ, AᵀBᵀ and BᵀAᵀ.

[1] Remember, this is what mathematicians do. We learn something new, and then we ask lots of questions about it. Often the first questions we ask are along the lines of "How does this new thing relate to the old things I already know about?"

[2] This is kind of fun to say, especially when said fast. Regardless of how fast we say it, we should think about why it is true.

Solution    We first note that

    Aᵀ = [ 1 3 ; 2 4 ]   and   Bᵀ = [ 1 1 ; 2 0 ; −1 1 ].

Find (AB)ᵀ:

    (AB)ᵀ = ( [ 1 2 ; 3 4 ] [ 1 2 −1 ; 1 0 1 ] )ᵀ = ( [ 3 2 1 ; 7 6 1 ] )ᵀ = [ 3 7 ; 2 6 ; 1 1 ]

Now find AᵀBᵀ:

    AᵀBᵀ = [ 1 3 ; 2 4 ] [ 1 1 ; 2 0 ; −1 1 ] = Not defined!

So we can't compute AᵀBᵀ. Let's finish by computing BᵀAᵀ:

    BᵀAᵀ = [ 1 1 ; 2 0 ; −1 1 ] [ 1 3 ; 2 4 ] = [ 3 7 ; 2 6 ; 1 1 ]

We may have suspected that (AB)ᵀ = AᵀBᵀ. We saw that this wasn't the case, though – and not only was it not equal, the second product wasn't even defined! Oddly enough, though, we saw that (AB)ᵀ = BᵀAᵀ.[3] To help understand why this is true, look back at the work above and confirm the steps of each multiplication.
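The product property is just as easy to confirm numerically. A minimal sketch, assuming NumPy, with the matrices of Example 64 (note that AᵀBᵀ would be a 2×2 times 3×2 product, which is not defined):

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[1, 2, -1],
                  [1, 0, 1]])

    print(np.array_equal((A @ B).T, B.T @ A.T))   # True: (AB)^T = B^T A^T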

We have one more arithmetic operation to look at: the inverse.

Example 65    Let

    A = [ 2 7 ; 1 4 ].

[3] Then again, maybe this isn't all that "odd." It is reminiscent of the fact that, when invertible, (AB)⁻¹ = B⁻¹A⁻¹.

