
CHAPTER 4

Systems of ODEs. Phase Plane. Qualitative Methods

Tying in with Chap. 3, we present another method of solving higher order ODEs in Sec. 4.1. This converts any nth-order ODE into a system of n first-order ODEs. We also show some applications. Moreover, in the same section we solve systems of first-order ODEs that occur directly in applications, that is, not derived from an nth-order ODE but dictated by the application, such as two tanks in mixing problems and two circuits in electrical networks. (The elementary aspects of vectors and matrices needed in this chapter are reviewed in Sec. 4.0 and are probably familiar to most students.)

In Sec. 4.3 we introduce a totally different way of looking at systems of ODEs. The method consists of examining the general behavior of whole families of solutions of ODEs in the phase plane, and aptly is called the phase plane method. It gives information on the stability of solutions. (Stability of a physical system is desirable and means roughly that a small change at some instant causes only a small change in the behavior of the system at later times.) This approach to systems of ODEs is a qualitative method because it depends only on the nature of the ODEs and does not require the actual solutions. This can be very useful because it is often difficult or even impossible to solve systems of ODEs. In contrast, the approach of actually solving a system is known as a quantitative method.

The phase plane method has many applications in control theory, circuit theory, population dynamics, and so on. Its use in linear systems is discussed in Secs. 4.3, 4.4, and 4.6, and its even more important use in nonlinear systems is discussed in Sec. 4.5 with applications to the pendulum equation and the Lotka–Volterra population model. The chapter closes with a discussion of nonhomogeneous linear systems of ODEs.

NOTATION. We continue to denote unknown functions by y; thus $y_1(t)$, $y_2(t)$, analogous to Chaps. 1–3. (Note that some authors use x for functions when dealing with systems of ODEs.)

Prerequisite: Chap. 2.

References and Answers to Problems: App. 1 Part A, and App. 2.

4.0 For Reference: Basics of Matrices and Vectors

Most students will likely already be familiar with these facts. Thus this section is for reference only. Begin with Sec. 4.1 and consult 4.0 as needed.

Most of our linear systems will consist of two linear ODEs in two unknown functions $y_1(t)$, $y_2(t)$,

(1) $y_1' = a_{11}y_1 + a_{12}y_2, \quad y_2' = a_{21}y_1 + a_{22}y_2$, for example, $y_1' = -5y_1 + 2y_2, \quad y_2' = 13y_1 + \tfrac{1}{2}y_2$

(perhaps with additional given functions $g_1(t)$, $g_2(t)$ on the right in the two ODEs).

Similarly, a linear system of $n$ first-order ODEs in $n$ unknown functions $y_1(t), \dots, y_n(t)$ is of the form

(2)
$$y_1' = a_{11}y_1 + a_{12}y_2 + \cdots + a_{1n}y_n$$
$$y_2' = a_{21}y_1 + a_{22}y_2 + \cdots + a_{2n}y_n$$
$$\cdots$$
$$y_n' = a_{n1}y_1 + a_{n2}y_2 + \cdots + a_{nn}y_n$$

(perhaps with an additional given function on the right in each ODE).

Some Definitions and Terms

Matrices. In (1) the (constant or variable) coefficients form a $2 \times 2$ matrix $\mathbf{A}$, that is, an array

(3) $\mathbf{A} = [a_{jk}] = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, for example, $\mathbf{A} = \begin{bmatrix} -5 & 2 \\ 13 & \tfrac{1}{2} \end{bmatrix}$.

Similarly, the coefficients in (2) form an $n \times n$ matrix

(4) $\mathbf{A} = [a_{jk}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}.$

The $a_{jk}$ are called entries, the horizontal lines rows, and the vertical lines columns. Thus, in (3) the first row is $[a_{11} \;\; a_{12}]$, the second row is $[a_{21} \;\; a_{22}]$, and the first and second columns are

$\begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix}$ and $\begin{bmatrix} a_{12} \\ a_{22} \end{bmatrix}$.

In the "double subscript notation" for entries, the first subscript denotes the row and the second the column in which the entry stands. Similarly in (4). The main diagonal is the diagonal $a_{11} \; a_{22} \; \cdots \; a_{nn}$ in (4), hence $a_{11} \; a_{22}$ in (3).
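As an aside for readers following along in code, the double-subscript convention maps directly onto nested lists; the following is a minimal plain-Python sketch (illustrative only; note that code indexing is 0-based, so entry $a_{jk}$ sits at `A[j-1][k-1]`), using the example matrix from (3).

```python
# Example matrix from (3): a11 = -5, a12 = 2, a21 = 13, a22 = 1/2.
# Row j, column k of the math notation is A[j-1][k-1] in 0-based code.
A = [[-5.0, 2.0],
     [13.0, 0.5]]

a12 = A[0][1]                           # first row, second column
first_row = A[0]                        # [-5.0, 2.0]
second_column = [row[1] for row in A]   # [2.0, 0.5]
main_diagonal = [A[i][i] for i in range(len(A))]  # [-5.0, 0.5]
```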


We shall need only square matrices, that is, matrices with the same number of rows and columns, as in (3) and (4).

Vectors. A column vector $\mathbf{x}$ with $n$ components $x_1, \dots, x_n$ is of the form

$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$, thus if $n = 2$, $\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$.

Similarly, a row vector $\mathbf{v}$ is of the form

$\mathbf{v} = [v_1 \; \cdots \; v_n]$, thus if $n = 2$, then $\mathbf{v} = [v_1 \;\; v_2]$.

Calculations with Matrices and Vectors

Equality. Two $n \times n$ matrices are equal if and only if corresponding entries are equal. Thus for $n = 2$, let

$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ and $\mathbf{B} = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$.

Then $\mathbf{A} = \mathbf{B}$ if and only if

$a_{11} = b_{11}, \quad a_{12} = b_{12}, \quad a_{21} = b_{21}, \quad a_{22} = b_{22}$.

Two column vectors (or two row vectors) are equal if and only if they both have $n$ components and corresponding components are equal. Thus, let

$\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$ and $\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$. Then $\mathbf{v} = \mathbf{x}$ if and only if $v_1 = x_1$, $v_2 = x_2$.

Addition is performed by adding corresponding entries (or components); here, matrices must both be $n \times n$, and vectors must both have the same number of components. Thus for $n = 2$,

(5) $\mathbf{A} + \mathbf{B} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix}, \quad \mathbf{v} + \mathbf{x} = \begin{bmatrix} v_1+x_1 \\ v_2+x_2 \end{bmatrix}$.

Scalar multiplication (multiplication by a number $c$) is performed by multiplying each entry (or component) by $c$. For example, if

$\mathbf{A} = \begin{bmatrix} 9 & 3 \\ -2 & 0 \end{bmatrix}$, then $7\mathbf{A} = \begin{bmatrix} 63 & 21 \\ -14 & 0 \end{bmatrix}$.

If $\mathbf{v} = \begin{bmatrix} 0.4 \\ -13 \end{bmatrix}$, then $10\mathbf{v} = \begin{bmatrix} 4 \\ -130 \end{bmatrix}$.
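The entrywise rules above are easy to mirror in code; here is a minimal plain-Python sketch (list-of-lists matrices, illustrative only) reproducing the $7\mathbf{A}$ and $10\mathbf{v}$ examples.

```python
# Matrices as lists of rows; vectors as flat lists.
A = [[9.0, 3.0],
     [-2.0, 0.0]]
v = [0.4, -13.0]

def mat_add(A, B):
    """Entrywise sum as in (5); both matrices must have the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal_mult(c, A):
    """Multiply every entry of A by the number c."""
    return [[c * a for a in row] for row in A]

sevenA = scal_mult(7, A)      # [[63, 21], [-14, 0]]
tenv = [10 * x for x in v]    # approximately [4, -130]
S = mat_add(A, A)             # same as scalar multiplication by 2
```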

Matrix Multiplication. The product $\mathbf{C} = \mathbf{AB}$ (in this order) of two $n \times n$ matrices $\mathbf{A} = [a_{jk}]$ and $\mathbf{B} = [b_{jk}]$ is the $n \times n$ matrix $\mathbf{C} = [c_{jk}]$ with entries

(6) $c_{jk} = \displaystyle\sum_{m=1}^{n} a_{jm}b_{mk}, \qquad j = 1, \dots, n; \quad k = 1, \dots, n,$

that is, multiply each entry in the $j$th row of $\mathbf{A}$ by the corresponding entry in the $k$th column of $\mathbf{B}$ and then add these $n$ products. One says briefly that this is a "multiplication of rows into columns." For example,

$$\begin{bmatrix} 9 & 3 \\ -2 & 0 \end{bmatrix} \begin{bmatrix} 1 & -4 \\ 2 & 5 \end{bmatrix} = \begin{bmatrix} 9 \cdot 1 + 3 \cdot 2 & 9 \cdot (-4) + 3 \cdot 5 \\ -2 \cdot 1 + 0 \cdot 2 & (-2) \cdot (-4) + 0 \cdot 5 \end{bmatrix} = \begin{bmatrix} 15 & -21 \\ -2 & 8 \end{bmatrix}.$$

CAUTION! Matrix multiplication is not commutative, $\mathbf{AB} \neq \mathbf{BA}$ in general. In our example,

$$\begin{bmatrix} 1 & -4 \\ 2 & 5 \end{bmatrix} \begin{bmatrix} 9 & 3 \\ -2 & 0 \end{bmatrix} = \begin{bmatrix} 1 \cdot 9 + (-4) \cdot (-2) & 1 \cdot 3 + (-4) \cdot 0 \\ 2 \cdot 9 + 5 \cdot (-2) & 2 \cdot 3 + 5 \cdot 0 \end{bmatrix} = \begin{bmatrix} 17 & 3 \\ 8 & 6 \end{bmatrix}.$$

Multiplication of an $n \times n$ matrix $\mathbf{A}$ by a vector $\mathbf{x}$ with $n$ components is defined by the same rule: $\mathbf{v} = \mathbf{Ax}$ is the vector with the $n$ components

$v_j = \displaystyle\sum_{m=1}^{n} a_{jm}x_m, \qquad j = 1, \dots, n.$

For example,

$$\begin{bmatrix} 12 & 7 \\ -8 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 12x_1 + 7x_2 \\ -8x_1 + 3x_2 \end{bmatrix}.$$

Systems of ODEs as Vector Equations

Differentiation. The derivative of a matrix (or vector) with variable entries (or components) is obtained by differentiating each entry (or component). Thus, if

$\mathbf{y}(t) = \begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix} = \begin{bmatrix} e^{-2t} \\ \sin t \end{bmatrix}$, then $\mathbf{y}'(t) = \begin{bmatrix} y_1'(t) \\ y_2'(t) \end{bmatrix} = \begin{bmatrix} -2e^{-2t} \\ \cos t \end{bmatrix}.$


Using matrix multiplication and differentiation, we can now write (1) as

(7) $\mathbf{y}' = \begin{bmatrix} y_1' \\ y_2' \end{bmatrix} = \mathbf{Ay} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$, e.g., $\mathbf{y}' = \begin{bmatrix} -5 & 2 \\ 13 & \tfrac{1}{2} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}.$

Similarly for (2) by means of an $n \times n$ matrix $\mathbf{A}$ and a column vector $\mathbf{y}$ with $n$ components, namely, $\mathbf{y}' = \mathbf{Ay}$. The vector equation (7) is equivalent to two equations for the components, and these are precisely the two ODEs in (1).

Some Further Operations and Terms

Transposition is the operation of writing columns as rows and conversely and is indicated by T. Thus the transpose $\mathbf{A}^{\mathsf{T}}$ of the $2 \times 2$ matrix

$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} -5 & 2 \\ 13 & \tfrac{1}{2} \end{bmatrix}$ is $\mathbf{A}^{\mathsf{T}} = \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{bmatrix} = \begin{bmatrix} -5 & 13 \\ 2 & \tfrac{1}{2} \end{bmatrix}.$

The transpose of a column vector, say, $\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$, is a row vector, $\mathbf{v}^{\mathsf{T}} = [v_1 \;\; v_2]$, and conversely.

Inverse of a Matrix. The $n \times n$ unit matrix $\mathbf{I}$ is the $n \times n$ matrix with main diagonal $1, 1, \dots, 1$ and all other entries zero. If, for a given $n \times n$ matrix $\mathbf{A}$, there is an $n \times n$ matrix $\mathbf{B}$ such that $\mathbf{AB} = \mathbf{BA} = \mathbf{I}$, then $\mathbf{A}$ is called nonsingular and $\mathbf{B}$ is called the inverse of $\mathbf{A}$ and is denoted by $\mathbf{A}^{-1}$; thus

(8) $\mathbf{AA}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}.$

The inverse exists if the determinant $\det \mathbf{A}$ of $\mathbf{A}$ is not zero.

If $\mathbf{A}$ has no inverse, it is called singular. For $n = 2$,

(9) $\mathbf{A}^{-1} = \dfrac{1}{\det \mathbf{A}} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix},$

where the determinant of $\mathbf{A}$ is

(10) $\det \mathbf{A} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}.$

(For general $n$, see Sec. 7.7, but this will not be needed in this chapter.)

Linear Independence. $r$ given vectors $\mathbf{v}^{(1)}, \dots, \mathbf{v}^{(r)}$ with $n$ components are called a linearly independent set or, more briefly, linearly independent, if

(11) $c_1\mathbf{v}^{(1)} + \cdots + c_r\mathbf{v}^{(r)} = \mathbf{0}$

implies that all scalars $c_1, \dots, c_r$ must be zero; here, $\mathbf{0}$ denotes the zero vector, whose $n$ components are all zero. If (11) also holds for scalars not all zero (so that at least one of these scalars is not zero), then these vectors are called a linearly dependent set or, briefly, linearly dependent, because then at least one of them can be expressed as a linear combination of the others; that is, if, for instance, $c_1 \neq 0$ in (11), then we can obtain

$$\mathbf{v}^{(1)} = -\frac{1}{c_1}\left(c_2\mathbf{v}^{(2)} + \cdots + c_r\mathbf{v}^{(r)}\right).$$
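Formulas (9)-(10) and criterion (11) can be sketched in plain Python for $n = 2$; note that the independence check below uses the equivalent nonzero-determinant test for two vectors in the plane, not definition (11) itself.

```python
def det2(A):
    """Determinant (10) of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    """Inverse (9); fails if A is singular (det A = 0)."""
    d = det2(A)
    if d == 0:
        raise ValueError("matrix is singular")
    return [[A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d, A[0][0] / d]]

# Coefficient matrix of the example system (1).
A = [[-5.0, 2.0], [13.0, 0.5]]
d = det2(A)          # -5*(1/2) - 2*13 = -28.5, nonzero, so A is nonsingular
Ainv = inv2(A)

# Check (8): A * A^{-1} should be the unit matrix I (up to rounding).
ident = [[sum(A[j][m] * Ainv[m][k] for m in range(2)) for k in range(2)]
         for j in range(2)]

# Two vectors in the plane (stored as matrix columns) are linearly
# independent iff that matrix has nonzero determinant.
independent = det2([[1.0, 0.0], [0.0, 1.0]]) != 0   # columns (1,0), (0,1)
dependent = det2([[1.0, 2.0], [2.0, 4.0]]) == 0     # second column = 2 * first
```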

Eigenvalues, Eigenvectors

Eigenvalues and eigenvectors will be very important in this chapter (and, as a matter of fact, throughout mathematics).

Let $\mathbf{A} = [a_{jk}]$ be an $n \times n$ matrix. Consider the equation

(12) $\mathbf{Ax} = \lambda\mathbf{x}$

where $\lambda$ is a scalar (a real or complex number) to be determined and $\mathbf{x}$ is a vector to be determined. Now, for every $\lambda$, a solution is $\mathbf{x} = \mathbf{0}$. A scalar $\lambda$ such that (12) holds for some vector $\mathbf{x} \neq \mathbf{0}$ is called an eigenvalue of $\mathbf{A}$, and this vector is called an eigenvector of $\mathbf{A}$ corresponding to this eigenvalue $\lambda$.

We can write (12) as $\mathbf{Ax} - \lambda\mathbf{x} = \mathbf{0}$ or

(13) $(\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0}.$

These are $n$ linear algebraic equations in the $n$ unknowns $x_1, \dots, x_n$ (the components of $\mathbf{x}$). For these equations to have a solution $\mathbf{x} \neq \mathbf{0}$, the determinant of the coefficient matrix $\mathbf{A} - \lambda\mathbf{I}$ must be zero. This is proved as a basic fact in linear algebra (Theorem 4 in Sec. 7.7). In this chapter we need this only for $n = 2$. Then (13) is

(14) $\begin{bmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$;

in components,

(14*)
$$(a_{11} - \lambda)x_1 + a_{12}x_2 = 0$$
$$a_{21}x_1 + (a_{22} - \lambda)x_2 = 0.$$

Now $\mathbf{A} - \lambda\mathbf{I}$ is singular if and only if its determinant $\det(\mathbf{A} - \lambda\mathbf{I})$, called the characteristic determinant of $\mathbf{A}$ (also for general $n$), is zero. This gives

(15) $\det(\mathbf{A} - \lambda\mathbf{I}) = \begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{vmatrix} = (a_{11} - \lambda)(a_{22} - \lambda) - a_{12}a_{21} = \lambda^2 - (a_{11} + a_{22})\lambda + a_{11}a_{22} - a_{12}a_{21} = 0.$


This quadratic equation in $\lambda$ is called the characteristic equation of $\mathbf{A}$. Its solutions $\lambda_1$, $\lambda_2$ are the eigenvalues of $\mathbf{A}$. First determine these. Then use (14*) with $\lambda = \lambda_1$ to determine an eigenvector $\mathbf{x}^{(1)}$ of $\mathbf{A}$ corresponding to $\lambda_1$. Finally use (14*) with $\lambda = \lambda_2$ to find an eigenvector $\mathbf{x}^{(2)}$ of $\mathbf{A}$ corresponding to $\lambda_2$. Note that if $\mathbf{x}$ is an eigenvector of $\mathbf{A}$, so is $k\mathbf{x}$ with any $k \neq 0$.

EXAMPLE 1  Eigenvalue Problem

Find the eigenvalues and eigenvectors of the matrix

(16) $\mathbf{A} = \begin{bmatrix} -4.0 & 4.0 \\ -1.6 & 1.2 \end{bmatrix}.$

Solution. The characteristic equation is the quadratic equation

$$\det(\mathbf{A} - \lambda\mathbf{I}) = \begin{vmatrix} -4 - \lambda & 4 \\ -1.6 & 1.2 - \lambda \end{vmatrix} = \lambda^2 + 2.8\lambda + 1.6 = 0.$$

It has the solutions $\lambda_1 = -2$ and $\lambda_2 = -0.8$. These are the eigenvalues of $\mathbf{A}$.

Eigenvectors are obtained from (14*). For $\lambda = \lambda_1 = -2$ we have from (14*)

$$(-4.0 + 2.0)x_1 + 4.0x_2 = 0$$
$$-1.6x_1 + (1.2 + 2.0)x_2 = 0.$$

A solution of the first equation is $x_1 = 2$, $x_2 = 1$. This also satisfies the second equation. (Why?) Hence an eigenvector of $\mathbf{A}$ corresponding to $\lambda_1 = -2$ is

(17) $\mathbf{x}^{(1)} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$. Similarly, $\mathbf{x}^{(2)} = \begin{bmatrix} 1 \\ 0.8 \end{bmatrix}$

is an eigenvector of $\mathbf{A}$ corresponding to $\lambda_2 = -0.8$, as obtained from (14*) with $\lambda = \lambda_2$. Verify this.
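For $n = 2$ the characteristic equation (15) is just a quadratic, so the whole eigenvalue computation fits in a few lines of plain Python. The matrix values below are chosen purely as an illustration for this sketch.

```python
import cmath

def eigen2(A):
    """Eigenvalues of a 2x2 matrix from (15):
    lambda^2 - (a11 + a22) lambda + (a11 a22 - a12 a21) = 0."""
    trace = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = cmath.sqrt(trace * trace - 4 * det)   # complex-safe square root
    return (trace + disc) / 2, (trace - disc) / 2

# Illustrative 2x2 matrix for this sketch.
A = [[-4.0, 4.0],
     [-1.6, 1.2]]
lam1, lam2 = eigen2(A)   # roots of lambda^2 + 2.8 lambda + 1.6 = 0

# From the first row of (14*), x = (a12, lambda - a11) is an eigenvector
# whenever a12 != 0; the second row of (14*) must then vanish as well.
x1 = (A[0][1], lam1 - A[0][0])
residual = A[1][0] * x1[0] + (A[1][1] - lam1) * x1[1]
```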

4.1 Systems of ODEs as Models
