

In the document R. Bartels, W. Gautschi, and C. Witzgall (pages 169-179)

Systems of Linear Equations

4.1 Gaussian Elimination. The Triangular Decomposition of a Matrix

In the method of Gaussian elimination for solving a system of linear equations

(4.1.1) Ax = b,


where A is an n × n matrix and b ∈ ℝⁿ, the given system (4.1.1) is transformed in steps by appropriate rearrangements and linear combinations of equations into a system of the form

$$Rx = c, \qquad R = \begin{bmatrix} r_{11} & \cdots & r_{1n} \\ & \ddots & \vdots \\ 0 & & r_{nn} \end{bmatrix},$$

which has the same solution as (4.1.1). R is an upper triangular matrix, so that Rx = c can easily be solved by "back substitution" (so long as $r_{ii} \ne 0$, $i = 1, \ldots, n$):

$$x_n := \frac{c_n}{r_{nn}}, \qquad x_i := \frac{c_i - \sum_{k=i+1}^{n} r_{ik} x_k}{r_{ii}}, \quad i = n-1, n-2, \ldots, 1.$$
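A minimal sketch of this back substitution in NumPy (the function name is ours, not from the text):

```python
import numpy as np

def back_substitute(R, c):
    """Solve Rx = c for upper triangular R with nonzero diagonal."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x_i = (c_i - sum_{k>i} r_ik x_k) / r_ii
        x[i] = (c[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
    return x
```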

In the first step of the algorithm an appropriate multiple of the first equation is subtracted from all of the other equations in such a way that the coefficients of $x_1$ vanish in these equations; hence $x_1$ remains only in the first equation. This is possible only if $a_{11} \ne 0$, of course, which can be achieved by rearranging the equations if necessary, as long as at least one $a_{i1} \ne 0$. Instead of working with the equations themselves, the operations are carried out on the matrix

$$(A, b) = \begin{bmatrix} a_{11} & \cdots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{n1} & \cdots & a_{nn} & b_n \end{bmatrix},$$

which corresponds to the full system given in (4.1.1). The first step of the Gaussian elimination process leads to a matrix (A', b') of the form

$$(4.1.2) \qquad (A', b') = \begin{bmatrix} a'_{11} & a'_{12} & \cdots & a'_{1n} & b'_1 \\ 0 & a'_{22} & \cdots & a'_{2n} & b'_2 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & a'_{n2} & \cdots & a'_{nn} & b'_n \end{bmatrix},$$

and this step can be described formally as follows:

(4.1.3)

(a) Determine an element $a_{r1} \ne 0$ and proceed with (b); if no such r exists, A is singular; set (A', b') := (A, b); stop.

(b) Interchange rows r and 1 of (A, b). The result is the matrix $(\bar A, \bar b)$.

(c) For $i = 2, 3, \ldots, n$, subtract the multiple $l_{i1} := \bar a_{i1}/\bar a_{11}$ of row 1 from row i of the matrix $(\bar A, \bar b)$. The desired matrix (A', b') is obtained as the result.
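This first step can be sketched directly in NumPy (a sketch only; the function name is ours, and we already select the largest pivot in step (a), i.e. the partial pivoting variant discussed below):

```python
import numpy as np

def eliminate_first_column(A, b):
    """One step of (4.1.3), choosing the largest |a_r1| as pivot."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    r = np.argmax(np.abs(A[:, 0]))                 # (a) pivot search in column 1
    if A[r, 0] == 0:
        raise ValueError("A is singular")
    A[[0, r]] = A[[r, 0]]                          # (b) interchange rows 1 and r
    b[[0, r]] = b[[r, 0]]
    for i in range(1, len(b)):                     # (c) subtract multiples of row 1
        l = A[i, 0] / A[0, 0]
        A[i] -= l * A[0]
        b[i] -= l * b[0]
    return A, b
```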

The transition $(A, b) \to (\bar A, \bar b) \to (A', b')$ can be described by using matrix multiplications:

$$(4.1.4) \qquad (A', b') = G_1 P_1 (A, b),$$

where $P_1$ is a permutation matrix and $G_1$ is a lower triangular matrix:

$$(4.1.5) \qquad P_1 := \begin{bmatrix} 0 & & 1 & & \\ & \ddots & & & \\ 1 & & 0 & & \\ & & & 1 & \\ & & & & \ddots \end{bmatrix} \leftarrow \text{row } r, \qquad G_1 := \begin{bmatrix} 1 & & & \\ -l_{21} & 1 & & \\ \vdots & & \ddots & \\ -l_{n1} & & & 1 \end{bmatrix};$$

that is, $P_1$ is the identity matrix with rows 1 and r interchanged. Matrices such as $G_1$, which differ in at most one column from an identity matrix, are called Frobenius matrices. Both matrices $P_1$ and $G_1$ are nonsingular; in fact

$$P_1^{-1} = P_1, \qquad G_1^{-1} = \begin{bmatrix} 1 & & & \\ l_{21} & 1 & & \\ \vdots & & \ddots & \\ l_{n1} & & & 1 \end{bmatrix}.$$

For this reason, the equation systems Ax = b and A'x = b' have the same solution: Ax = b implies $G_1 P_1 A x = A'x = b' = G_1 P_1 b$, and A'x = b' implies $P_1^{-1} G_1^{-1} A' x = Ax = b = P_1^{-1} G_1^{-1} b'$.

The element $a_{r1} = \bar a_{11}$ which is determined in (a) is called the pivot element (or simply the pivot), and step (a) itself is called pivot selection (or pivoting). In the pivot selection one can, in theory, choose any $a_{r1} \ne 0$ as the pivot element. For reasons of numerical stability (see Section 4.5) it is not recommended that an arbitrary $a_{r1} \ne 0$ be chosen. Usually the choice

$$|a_{r1}| = \max_{i} |a_{i1}|$$

is made; that is, among all candidate elements the one of largest absolute value is selected. (It is assumed in making this choice, however (see Section 4.5), that the matrix A is "equilibrated", that is, that the orders of magnitude of the elements of A are roughly equal.) This sort of pivot selection is called partial pivot selection (or partial pivoting), in contrast to complete pivot selection (or complete pivoting), in which the search for a pivot is not restricted to the first column; that is, (a) and (b) in (4.1.3) are replaced by (a') and (b'):

(a') Determine r and s so that

$$|a_{rs}| = \max_{i,j} |a_{ij}|$$

and continue with (b') if $a_{rs} \ne 0$. Otherwise A is singular; set (A', b') := (A, b); stop.

(b') Interchange rows 1 and r of (A, b), as well as columns 1 and s. Let the resulting matrix be $(\bar A, \bar b)$.

After the first elimination step, the resulting matrix has the form (4.1.2):

$$(A', b') = \begin{bmatrix} a'_{11} & a'^{T} & b'_1 \\ 0 & \tilde A & \tilde b \end{bmatrix}$$

with an (n − 1)-row matrix $\tilde A$. The next elimination step consists simply of applying the process described in (4.1.3) for (A, b) to the smaller matrix $(\tilde A, \tilde b)$. Carrying on in this fashion, a sequence of matrices

$$(A, b) =: (A^{(0)}, b^{(0)}) \to (A^{(1)}, b^{(1)}) \to \cdots \to (A^{(n-1)}, b^{(n-1)}) =: (R, c)$$

is obtained which begins with the given matrix (A, b) of (4.1.1) and ends with the desired matrix (R, c). In this sequence the jth intermediate matrix $(A^{(j)}, b^{(j)})$ has the form

$$(4.1.6) \qquad (A^{(j)}, b^{(j)}) = \begin{bmatrix} A_{11}^{(j)} & A_{12}^{(j)} & b_1^{(j)} \\ 0 & A_{22}^{(j)} & b_2^{(j)} \end{bmatrix}$$

with a j-row upper triangular matrix $A_{11}^{(j)}$. The transition $(A^{(j)}, b^{(j)}) \to (A^{(j+1)}, b^{(j+1)})$ consists of the application of (4.1.3) to the $(n-j) \times (n-j+1)$ matrix $(A_{22}^{(j)}, b_2^{(j)})$. The elements of $A_{11}^{(j)}$, $A_{12}^{(j)}$, $b_1^{(j)}$ do not change from this step on; hence they agree with the corresponding elements of (R, c). As in the first step, (4.1.4) and (4.1.5), the ensuing steps can be described using matrix multiplication. As can be readily seen,

(A (j) b(j»)

=

P .(A (j -1) b(j -1))

, J J ' ,

(R, c)

=

Gn - 1 Pn- 1 Gn-2Pn-2 ... G1 P1(A, b), (4.1.7)

with permutation matrices Pj and nonsingular Frobenius matrices Gj of the form

1 0

(4.1.8) Gj = 1

-lj+1•j 1

0 -I . n,) 0 1

In the jth elimination step $(A^{(j-1)}, b^{(j-1)}) \to (A^{(j)}, b^{(j)})$ the elements below the diagonal in the jth column are annihilated. In the implementation of this algorithm on a computer, the locations which were occupied by these elements can now be used for the storage of the important quantities $l_{ij}$, $i \ge j+1$, of $G_j$; that is, we work with a matrix of the form

$$T^{(j)} = \begin{bmatrix}
r_{11} & r_{12} & \cdots & r_{1j} & r_{1,j+1} & \cdots & r_{1n} & c_1 \\
\lambda_{21} & r_{22} & \cdots & r_{2j} & & & & \vdots \\
\lambda_{31} & \lambda_{32} & \ddots & \vdots & & & & \\
\vdots & & & r_{jj} & r_{j,j+1} & \cdots & r_{jn} & c_j \\
\lambda_{j+1,1} & & \cdots & \lambda_{j+1,j} & a_{j+1,j+1}^{(j)} & \cdots & a_{j+1,n}^{(j)} & b_{j+1}^{(j)} \\
\vdots & & & \vdots & \vdots & & \vdots & \vdots \\
\lambda_{n1} & & \cdots & \lambda_{nj} & a_{n,j+1}^{(j)} & \cdots & a_{nn}^{(j)} & b_{n}^{(j)}
\end{bmatrix}.$$

Here the subdiagonal elements $\lambda_{k+1,k}, \lambda_{k+2,k}, \ldots, \lambda_{nk}$ of the kth column are a certain permutation of the elements $l_{k+1,k}, \ldots, l_{n,k}$ of $G_k$ in (4.1.8).

Based on this arrangement, the jth step $T^{(j-1)} \to T^{(j)}$, $j = 1, 2, \ldots, n-1$, can be described as follows (for simplicity the elements of $T^{(j-1)}$ are denoted by $t_{ik}$, and those of $T^{(j)}$ by $t'_{ik}$, $1 \le i \le n$, $1 \le k \le n+1$):

(a) Partial pivot selection: Determine r so that

$$|t_{rj}| = \max_{i \ge j} |t_{ij}|.$$

If $t_{rj} = 0$, set $T^{(j)} := T^{(j-1)}$; A is singular; stop. Otherwise carry on with (b).

(b) Interchange rows r and j of $T^{(j-1)}$, and denote the result by $\bar T = (\bar t_{ik})$.

(c) Replace

$$t'_{ij} := l_{ij} := \bar t_{ij}/\bar t_{jj} \quad \text{for } i = j+1, j+2, \ldots, n,$$

$$t'_{ik} := \bar t_{ik} - l_{ij} \bar t_{jk} \quad \text{for } i = j+1, \ldots, n \text{ and } k = j+1, \ldots, n+1,$$

$$t'_{ik} := \bar t_{ik} \quad \text{otherwise.}$$
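The complete loop over j, with the multipliers stored in place as described, can be sketched as follows (a sketch only; the function name is ours):

```python
import numpy as np

def gauss_elimination(A, b):
    """Steps (a)-(c) applied for j = 1, ..., n-1 on T = (A, b).

    Returns T^(n-1), whose strict lower triangle holds the (permuted)
    multipliers and whose upper triangle holds (R, c), together with
    the row permutation that was performed.
    """
    n = len(b)
    T = np.column_stack([A, b]).astype(float)
    perm = np.arange(n)
    for j in range(n - 1):
        r = j + np.argmax(np.abs(T[j:, j]))          # (a) partial pivot selection
        if T[r, j] == 0:
            raise ValueError("A is singular")
        T[[j, r]] = T[[r, j]]                        # (b) interchange rows r and j
        perm[[j, r]] = perm[[r, j]]
        l = T[j+1:, j] / T[j, j]                     # (c) multipliers l_ij
        T[j+1:, j+1:] -= np.outer(l, T[j, j+1:])     #     eliminate column j
        T[j+1:, j] = l                               #     store l_ij in place
    return T, perm
```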

We note that in (c) the important elements $l_{j+1,j}, \ldots, l_{nj}$ of $G_j$ are stored in their natural order as $t'_{j+1,j}, \ldots, t'_{nj}$. This order may, however, be changed in the subsequent elimination steps $T^{(k)} \to T^{(k+1)}$, $k \ge j$, because in (b) the rows of the entire matrix $T^{(k)}$ are rearranged. This has the following effect:

The lower triangular matrix L and the upper triangular matrix R,

$$L = \begin{bmatrix} 1 & & & 0 \\ t_{21} & 1 & & \\ \vdots & & \ddots & \\ t_{n1} & \cdots & t_{n,n-1} & 1 \end{bmatrix}, \qquad R = \begin{bmatrix} t_{11} & \cdots & t_{1n} \\ & \ddots & \vdots \\ 0 & & t_{nn} \end{bmatrix},$$

which are contained in the final matrix $T^{(n-1)} = (t_{ik})$, provide a triangular decomposition of the matrix PA:

$$(4.1.9) \qquad LR = PA.$$

In this decomposition P is the product of all of the permutations appearing in (4.1.7):

$$P := P_{n-1} P_{n-2} \cdots P_1.$$

We will only show here that a triangular decomposition is produced if no row interchanges are necessary during the course of the elimination process, i.e., if $P_1 = \cdots = P_{n-1} = P = I$. In this case the stored multipliers keep their natural order, since in all of the minor steps (b) nothing is interchanged. Now, because of (4.1.7),

$$R = G_{n-1} G_{n-2} \cdots G_1 A;$$

therefore

$$(4.1.10) \qquad A = G_1^{-1} G_2^{-1} \cdots G_{n-1}^{-1} R.$$

Since

$$G_j^{-1} = \begin{bmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 1 & & & \\ & & l_{j+1,j} & 1 & & \\ & & \vdots & & \ddots & \\ & & l_{n,j} & & & 1 \end{bmatrix},$$

it is easily verified that

$$G_1^{-1} \cdots G_{n-1}^{-1} = \begin{bmatrix} 1 & & & 0 \\ l_{21} & 1 & & \\ \vdots & & \ddots & \\ l_{n1} & \cdots & l_{n,n-1} & 1 \end{bmatrix} = L.$$

Then the assertion follows from (4.1.10).

EXAMPLE.

$$\begin{bmatrix} 3 & 1 & 6 \\ 2 & 1 & 3 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 7 \\ 4 \end{bmatrix}.$$

The elimination proceeds as follows, with the stored multipliers written below the diagonal:

$$\left[\begin{array}{ccc|c} \boxed{3} & 1 & 6 & 2 \\ 2 & 1 & 3 & 7 \\ 1 & 1 & 1 & 4 \end{array}\right] \to \left[\begin{array}{ccc|c} 3 & 1 & 6 & 2 \\ \tfrac{2}{3} & \tfrac{1}{3} & -1 & \tfrac{17}{3} \\ \tfrac{1}{3} & \boxed{\tfrac{2}{3}} & -1 & \tfrac{10}{3} \end{array}\right] \to \left[\begin{array}{ccc|c} 3 & 1 & 6 & 2 \\ \tfrac{1}{3} & \tfrac{2}{3} & -1 & \tfrac{10}{3} \\ \tfrac{2}{3} & \tfrac{1}{2} & -\tfrac{1}{2} & 4 \end{array}\right].$$

The pivot elements are marked. The triangular equation system is

$$\begin{bmatrix} 3 & 1 & 6 \\ 0 & \tfrac{2}{3} & -1 \\ 0 & 0 & -\tfrac{1}{2} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ \tfrac{10}{3} \\ 4 \end{bmatrix}.$$

Its solution is

$$x_3 = -8, \qquad x_2 = \tfrac{3}{2}\left(\tfrac{10}{3} + x_3\right) = -7, \qquad x_1 = \tfrac{1}{3}(2 - x_2 - 6x_3) = 19.$$

Further,

$$P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}, \qquad PA = \begin{bmatrix} 3 & 1 & 6 \\ 1 & 1 & 1 \\ 2 & 1 & 3 \end{bmatrix},$$

and the matrix PA has the triangular decomposition PA = LR with

$$L = \begin{bmatrix} 1 & 0 & 0 \\ \tfrac{1}{3} & 1 & 0 \\ \tfrac{2}{3} & \tfrac{1}{2} & 1 \end{bmatrix}, \qquad R = \begin{bmatrix} 3 & 1 & 6 \\ 0 & \tfrac{2}{3} & -1 \\ 0 & 0 & -\tfrac{1}{2} \end{bmatrix}.$$
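This factorization is easy to check numerically (a quick sketch using NumPy, with the matrices of the example):

```python
import numpy as np

# Matrices from the example.
A = np.array([[3., 1., 6.], [2., 1., 3.], [1., 1., 1.]])
P = np.array([[1., 0., 0.], [0., 0., 1.], [0., 1., 0.]])
L = np.array([[1., 0., 0.], [1/3, 1., 0.], [2/3, 1/2, 1.]])
R = np.array([[3., 1., 6.], [0., 2/3, -1.], [0., 0., -1/2]])

print(np.allclose(L @ R, P @ A))   # LR = PA, as in (4.1.9)
```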

Triangular decompositions (4.1.9) are of great practical importance in solving systems of linear equations. If the decomposition (4.1.9) is known for a matrix A (that is, the matrices L, R, P are known), then the equation system

$$Ax = b$$

can be solved immediately with any right-hand side b; for it follows that

$$PAx = LRx = Pb,$$

from which x can be found by solving both of the triangular systems

$$Lu = Pb, \qquad Rx = u$$

(provided all $r_{ii} \ne 0$).
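The practical point is that the factorization is computed once, after which each new right-hand side costs only two cheap triangular solves. A sketch (the helper name is ours; the matrices are those of the example above):

```python
import numpy as np

def solve_with_lu(L, R, P, b):
    """Solve Ax = b given PA = LR, via Lu = Pb and then Rx = u."""
    u = np.linalg.solve(L, P @ b)   # forward substitution with L
    return np.linalg.solve(R, u)    # back substitution with R

P = np.array([[1., 0., 0.], [0., 0., 1.], [0., 1., 0.]])
L = np.array([[1., 0., 0.], [1/3, 1., 0.], [2/3, 1/2, 1.]])
R = np.array([[3., 1., 6.], [0., 2/3, -1.], [0., 0., -1/2]])
A = P.T @ (L @ R)                   # recover A from PA = LR

# Factor once, then solve for several right-hand sides.
for b in (np.array([2., 7., 4.]), np.array([1., 0., 0.])):
    x = solve_with_lu(L, R, P, b)
    assert np.allclose(A @ x, b)
```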

Thus, with the help of the Gaussian elimination algorithm, it can be shown constructively that each square nonsingular matrix A has a triangular decomposition of the form (4.1.9). However, not every such matrix A has a triangular decomposition in the more narrow sense A = LR, as the example

$$A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

shows. In general, the rows of A must be permuted appropriately at the outset.

The triangular decomposition (4.1.9) can be obtained directly without forming the intermediate matrices $T^{(j)}$. For simplicity, we will show this under the assumption that the rows of A do not have to be permuted in order for a triangular decomposition A = LR to exist. The equations A = LR are regarded as $n^2$ defining equations for the $n^2$ unknown quantities

$$r_{ik}, \ k \ge i, \qquad l_{ik}, \ k < i \quad (l_{ii} = 1);$$

that is,

$$(4.1.11) \qquad a_{ik} = \sum_{j=1}^{\min(i,k)} l_{ij} r_{jk}.$$

The order in which the $l_{ij}$, $r_{jk}$ are to be computed remains open. The following versions are common:

In the Crout method the n × n matrix A = LR is partitioned as follows (the numbered regions indicate the order of computation: first row of R, first column of L, second row of R, second column of L, and so on):

$$\begin{bmatrix} 1 & 1 & 1 & 1 \\ 2 & 3 & 3 & 3 \\ 2 & 4 & 5 & 5 \\ 2 & 4 & 6 & 7 \end{bmatrix},$$

and the equations A = LR are solved for L and R in the order indicated by this partitioning:

(1) $a_{1i} = \sum_{j=1}^{1} l_{1j} r_{ji}$:  $r_{1i} := a_{1i}$,  $i = 1, 2, \ldots, n$,

(2) $a_{i1} = \sum_{j=1}^{1} l_{ij} r_{j1}$:  $l_{i1} := a_{i1}/r_{11}$,  $i = 2, 3, \ldots, n$,

(3) $a_{2i} = \sum_{j=1}^{2} l_{2j} r_{ji}$:  $r_{2i} := a_{2i} - l_{21} r_{1i}$,  $i = 2, 3, \ldots, n$, etc.

j= 1

And in general, for i = 1, 2, ..., n:

$$(4.1.12) \qquad r_{ik} := a_{ik} - \sum_{j=1}^{i-1} l_{ij} r_{jk}, \quad k = i, i+1, \ldots, n,$$

$$l_{ki} := \left( a_{ki} - \sum_{j=1}^{i-1} l_{kj} r_{ji} \right) \bigg/ r_{ii}, \quad k = i+1, i+2, \ldots, n.$$

In all of the steps above $l_{ii} = 1$ for $i = 1, 2, \ldots, n$.
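The recurrences (4.1.12) translate almost line for line into code (a sketch without pivoting, valid only when A = LR exists; the function name is ours):

```python
import numpy as np

def crout(A):
    """Direct triangular decomposition A = LR via (4.1.12)."""
    n = A.shape[0]
    L = np.eye(n)
    R = np.zeros((n, n))
    for i in range(n):
        for k in range(i, n):            # ith row of R
            R[i, k] = A[i, k] - L[i, :i] @ R[:i, k]
        for k in range(i + 1, n):        # ith column of L
            L[k, i] = (A[k, i] - L[k, :i] @ R[:i, i]) / R[i, i]
    return L, R
```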

In the Banachiewicz method the partitioning

$$\begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 2 & 3 & 3 & 3 & 3 \\ 4 & 4 & 5 & 5 & 5 \\ 6 & 6 & 6 & 7 & 7 \\ 8 & 8 & 8 & 8 & 9 \end{bmatrix}$$

is used; that is, L and R are computed by rows.

The formulas above are valid only if no pivot selection is carried out.

Triangular decomposition by the methods of Crout or Banachiewicz with pivot selection leads to more complicated algorithms; see Wilkinson (1965).

Gaussian elimination and direct triangular decomposition differ only in the ordering of operations. Both algorithms are, theoretically and numerically, entirely equivalent. Indeed, the jth partial sums

$$(4.1.13) \qquad a_{ik}^{(j)} := a_{ik} - \sum_{s=1}^{j} l_{is} r_{sk}$$

of (4.1.12) produce precisely the elements of the matrix $A^{(j)}$ in (4.1.6), as can easily be verified. In Gaussian elimination, therefore, the scalar products (4.1.12) are formed only in pieces, with temporary storing of the intermediate results; direct triangular decomposition, on the other hand, forms each scalar product as a whole. For these organizational reasons, direct triangular decomposition is to be preferred if one chooses to accumulate the scalar products in double-precision arithmetic in order to reduce roundoff errors (without storing double-precision intermediate results). Further, these methods of triangular decomposition require about $n^3/3$ operations (1 operation = 1 multiplication + 1 addition). Thus, they also offer a simple way of evaluating the determinant of a matrix A: from (4.1.9) it follows, since det(P) = ±1 and det(L) = 1, that

$$\det(PA) = \pm\det(A) = \det(R) = r_{11} r_{22} \cdots r_{nn}.$$

Up to its sign, det(A) is exactly the product of the pivot elements. (It should be noted that the direct evaluation of the formula

$$\det(A) = \sum_{\substack{\mu_1, \ldots, \mu_n \\ \mu_i \ne \mu_k \text{ for } i \ne k}} \operatorname{sign}(\mu_1, \ldots, \mu_n)\, a_{1\mu_1} a_{2\mu_2} \cdots a_{n\mu_n}$$

requires $n! \gg n^3/3$ operations.)
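Computing det(A) as the signed product of the pivots can be sketched as follows (the function name is ours; the sign flips once for every row interchange, since each interchange multiplies the determinant by −1):

```python
import numpy as np

def det_via_lu(A):
    """det(A) as the signed product of the pivot elements."""
    n = A.shape[0]
    U = A.astype(float).copy()
    sign = 1.0
    for j in range(n - 1):
        r = j + np.argmax(np.abs(U[j:, j]))   # partial pivot selection
        if U[r, j] == 0:
            return 0.0                        # A is singular
        if r != j:
            U[[j, r]] = U[[r, j]]
            sign = -sign                      # row interchange flips the sign
        U[j+1:, j:] -= np.outer(U[j+1:, j] / U[j, j], U[j, j:])
    return sign * np.prod(np.diag(U))
```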

In the case that P = I, the pivot elements $r_{ii}$ are representable as quotients of the determinants of the principal minors of A. If, in the representation LR = A, the matrices are partitioned as follows:

$$\begin{bmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{bmatrix} \begin{bmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix},$$

it is found that $L_{11} R_{11} = A_{11}$; hence $\det(R_{11}) = \det(A_{11})$, or

$$r_{11} \cdots r_{ii} = \det(A_{11}),$$

where $A_{11}$ is an i × i matrix. In general, if $A_i$ denotes the ith principal minor of A, then

$$r_{ii} = \det(A_i)/\det(A_{i-1}), \quad i \ge 2, \qquad r_{11} = \det(A_1).$$
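The quotient formula is easy to check numerically on the matrix PA from the example, for which no further interchanges occur (a quick sketch):

```python
import numpy as np

# PA from the example, with pivots r_11 = 3, r_22 = 2/3, r_33 = -1/2.
PA = np.array([[3., 1., 6.], [1., 1., 1.], [2., 1., 3.]])
pivots = [3., 2/3, -1/2]

# det(A_i) for the leading i x i principal minors, i = 1, 2, 3.
minors = [np.linalg.det(PA[:i, :i]) for i in range(1, 4)]

assert np.isclose(pivots[0], minors[0])            # r_11 = det(A_1)
for i in (1, 2):
    assert np.isclose(pivots[i], minors[i] / minors[i - 1])
```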

A further practical and important property of the method of triangular decomposition is that, for band matrices with bandwidth m,

$$a_{ij} = 0 \quad \text{for } |i - j| \ge m,$$

the matrices L and R of the decomposition LR = PA of A are not full: R is a band matrix with bandwidth 2m − 1,

$$r_{ik} = 0 \quad \text{for } k < i \text{ and for } k \ge i + 2m - 1,$$

and in each column of L there are at most m elements different from zero. In contrast, the inverses A⁻¹ of band matrices are usually filled with nonzero entries.

Thus, if m ≪ n, using the triangular decomposition of A to solve Ax = b results in a considerable saving in computation and storage over using A⁻¹.
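For a tridiagonal matrix (m = 2) this contrast between the banded factors and the full inverse is easy to observe (a sketch using a no-pivoting factorization, which exists for this matrix; the helper name is ours):

```python
import numpy as np

def lu_no_pivot(A):
    """Triangular decomposition A = LR without pivoting (assumed to exist)."""
    n = A.shape[0]
    L, R = np.eye(n), A.astype(float).copy()
    for j in range(n - 1):
        L[j+1:, j] = R[j+1:, j] / R[j, j]
        R[j+1:, j:] -= np.outer(L[j+1:, j], R[j, j:])
    return L, R

# Tridiagonal test matrix: bandwidth m = 2, i.e. a_ij = 0 for |i - j| >= 2.
n = 6
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L, R = lu_no_pivot(A)
print(np.count_nonzero(np.triu(R, 2)))      # R stays banded: no fill-in
print(np.count_nonzero(np.linalg.inv(A)))   # A^{-1}, in contrast, is full
```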

Additional savings are possible by making use of the symmetry of A if A is a positive definite matrix (see Section 4.3).
