CHAPTER 4
Systems of ODEs. Phase Plane. Qualitative Methods
Tying in with Chap. 3, we present another method of solving higher order ODEs in Sec. 4.1. This converts any nth-order ODE into a system of n first-order ODEs. We also show some applications. Moreover, in the same section we solve systems of first-order ODEs that occur directly in applications, that is, not derived from an nth-order ODE but dictated by the application, such as two tanks in mixing problems and two circuits in electrical networks. (The elementary aspects of vectors and matrices needed in this chapter are reviewed in Sec. 4.0 and are probably familiar to most students.)
In Sec. 4.3 we introduce a totally different way of looking at systems of ODEs. The method consists of examining the general behavior of whole families of solutions of ODEs in the phase plane, and is aptly called the phase plane method. It gives information on the stability of solutions. (Stability of a physical system is desirable and means roughly that a small change at some instant causes only a small change in the behavior of the system at later times.) This approach to systems of ODEs is a qualitative method because it depends only on the nature of the ODEs and does not require the actual solutions. This can be very useful because it is often difficult or even impossible to solve systems of ODEs. In contrast, the approach of actually solving a system is known as a quantitative method.
The phase plane method has many applications in control theory, circuit theory, population dynamics, and so on. Its use in linear systems is discussed in Secs. 4.3, 4.4, and 4.6, and its even more important use in nonlinear systems is discussed in Sec. 4.5 with applications to the pendulum equation and the Lotka–Volterra population model. The chapter closes with a discussion of nonhomogeneous linear systems of ODEs.
NOTATION. We continue to denote unknown functions by y; thus, $y_1(t)$, $y_2(t)$ — analogous to Chaps. 1–3. (Note that some authors use x for functions when dealing with systems of ODEs.)
Prerequisite: Chap. 2.
References and Answers to Problems: App. 1 Part A, and App. 2.
4.0 For Reference: Basics of Matrices and Vectors

Most students will already be familiar with these facts. Thus this section is for reference only. Begin with Sec. 4.1 and consult 4.0 as needed.
Most of our linear systems will consist of two linear ODEs in two unknown functions $y_1(t)$, $y_2(t)$,

(1)
$$
\begin{aligned}
y_1' &= a_{11}y_1 + a_{12}y_2,\\
y_2' &= a_{21}y_1 + a_{22}y_2,
\end{aligned}
\qquad\text{for example,}\qquad
\begin{aligned}
y_1' &= -5y_1 + 2y_2\\
y_2' &= 13y_1 + \tfrac{1}{2}y_2
\end{aligned}
$$

(perhaps with additional given functions $g_1(t)$, $g_2(t)$ on the right in the two ODEs).

Similarly, a linear system of $n$ first-order ODEs in $n$ unknown functions $y_1(t), \dots, y_n(t)$ is of the form

(2)
$$
\begin{aligned}
y_1' &= a_{11}y_1 + a_{12}y_2 + \cdots + a_{1n}y_n\\
y_2' &= a_{21}y_1 + a_{22}y_2 + \cdots + a_{2n}y_n\\
&\;\;\vdots\\
y_n' &= a_{n1}y_1 + a_{n2}y_2 + \cdots + a_{nn}y_n
\end{aligned}
$$

(perhaps with an additional given function on the right in each ODE).

Some Definitions and Terms

Matrices. In (1) the (constant or variable) coefficients form a $2 \times 2$ matrix $\mathbf{A}$, that is, an array

(3) $\quad \mathbf{A} = [a_{jk}] = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix}$, for example, $\mathbf{A} = \begin{bmatrix} -5 & 2\\ 13 & \tfrac{1}{2} \end{bmatrix}$.

Similarly, the coefficients in (2) form an $n \times n$ matrix

(4) $\quad \mathbf{A} = [a_{jk}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}.$

The $a_{11}, a_{12}, \dots$ are called entries, the horizontal lines rows, and the vertical lines columns. Thus, in (3) the first row is $[a_{11}\;\;a_{12}]$, the second row is $[a_{21}\;\;a_{22}]$, and the first and second columns are

$$\begin{bmatrix} a_{11}\\ a_{21} \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} a_{12}\\ a_{22} \end{bmatrix}.$$

In the "double subscript notation" for entries, the first subscript denotes the row and the second the column in which the entry stands. Similarly in (4). The main diagonal is the diagonal $a_{11}\;a_{22}\;\cdots\;a_{nn}$ in (4), hence $a_{11}\;a_{22}$ in (3).
We shall need only square matrices, that is, matrices with the same number of rows and columns, as in (3) and (4).
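To experiment with these objects numerically, here is a minimal NumPy sketch (our illustration, not part of the text; NumPy and its zero-based indexing are the only conventions added) relating the double subscript notation to array indexing, using the example matrix in (3).

```python
# Double-subscript entries a_jk vs. zero-based NumPy indices (j-1, k-1);
# the matrix is the example from (3).
import numpy as np

A = np.array([[-5.0, 2.0],
              [13.0, 0.5]])    # a11 = -5, a12 = 2, a21 = 13, a22 = 1/2

print(A[0, 1])      # entry a12 -> 2.0
print(A[1, :])      # second row [a21  a22] -> [13.   0.5]
print(A[:, 0])      # first column (a11, a21) -> [-5.  13.]
print(np.diag(A))   # main diagonal a11, a22 -> [-5.   0.5]
```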
Vectors. A column vector $\mathbf{x}$ with $n$ components $x_1, \dots, x_n$ is of the form

$$\mathbf{x} = \begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}, \quad\text{thus if } n = 2,\quad \mathbf{x} = \begin{bmatrix} x_1\\ x_2 \end{bmatrix}.$$

Similarly, a row vector $\mathbf{v}$ is of the form

$$\mathbf{v} = [v_1\;\;\cdots\;\;v_n], \quad\text{thus if } n = 2, \text{ then } \mathbf{v} = [v_1\;\;v_2].$$

Calculations with Matrices and Vectors

Equality. Two $n \times n$ matrices are equal if and only if corresponding entries are equal. Thus for $n = 2$, let

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix} \quad\text{and}\quad \mathbf{B} = \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{bmatrix}.$$

Then $\mathbf{A} = \mathbf{B}$ if and only if

$$a_{11} = b_{11}, \quad a_{12} = b_{12}, \quad a_{21} = b_{21}, \quad a_{22} = b_{22}.$$

Two column vectors (or two row vectors) are equal if and only if they both have $n$ components and corresponding components are equal. Thus, let

$$\mathbf{v} = \begin{bmatrix} v_1\\ v_2 \end{bmatrix} \quad\text{and}\quad \mathbf{x} = \begin{bmatrix} x_1\\ x_2 \end{bmatrix}. \quad\text{Then } \mathbf{v} = \mathbf{x} \text{ if and only if } v_1 = x_1,\; v_2 = x_2.$$

Addition is performed by adding corresponding entries (or components); here, matrices must both be $n \times n$, and vectors must both have the same number of components. Thus for $n = 2$,

(5) $\quad \mathbf{A} + \mathbf{B} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12}\\ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix}, \qquad \mathbf{v} + \mathbf{x} = \begin{bmatrix} v_1+x_1\\ v_2+x_2 \end{bmatrix}.$

Scalar multiplication (multiplication by a number $c$) is performed by multiplying each entry (or component) by $c$. For example, if

$$\mathbf{A} = \begin{bmatrix} 9 & 3\\ -2 & 0 \end{bmatrix}, \quad\text{then}\quad -7\mathbf{A} = \begin{bmatrix} -63 & -21\\ 14 & 0 \end{bmatrix}.$$

If

$$\mathbf{v} = \begin{bmatrix} 0.4\\ -13.0 \end{bmatrix}, \quad\text{then}\quad 10\mathbf{v} = \begin{bmatrix} 4\\ -130 \end{bmatrix}.$$
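These calculations are easy to cross-check on a computer; the short NumPy sketch below (our addition for illustration) repeats the addition and scalar multiplication examples entrywise.

```python
# Entrywise equality, addition as in (5), and scalar multiplication,
# using the numbers from the examples above.
import numpy as np

A = np.array([[9.0, 3.0], [-2.0, 0.0]])
B = np.array([[1.0, -4.0], [2.0, 5.0]])
v = np.array([0.4, -13.0])

print(np.array_equal(A, A))   # True: equal iff corresponding entries agree
print(A + B)                  # entrywise sum, as in (5)
print(-7 * A)                 # [[-63. -21.], [ 14.  -0.]]
print(10 * v)                 # [   4. -130.]
```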
Matrix Multiplication. The product $\mathbf{C} = \mathbf{AB}$ (in this order) of two $n \times n$ matrices $\mathbf{A} = [a_{jk}]$ and $\mathbf{B} = [b_{jk}]$ is the $n \times n$ matrix $\mathbf{C} = [c_{jk}]$ with entries

(6) $\quad c_{jk} = \displaystyle\sum_{m=1}^{n} a_{jm}b_{mk}, \qquad j = 1, \dots, n, \quad k = 1, \dots, n,$

that is, multiply each entry in the $j$th row of $\mathbf{A}$ by the corresponding entry in the $k$th column of $\mathbf{B}$ and then add these $n$ products. One says briefly that this is a "multiplication of rows into columns." For example,

$$\begin{bmatrix} 9 & 3\\ -2 & 0 \end{bmatrix}\begin{bmatrix} 1 & -4\\ 2 & 5 \end{bmatrix} = \begin{bmatrix} 9\cdot1 + 3\cdot2 & 9\cdot(-4) + 3\cdot5\\ -2\cdot1 + 0\cdot2 & (-2)\cdot(-4) + 0\cdot5 \end{bmatrix} = \begin{bmatrix} 15 & -21\\ -2 & 8 \end{bmatrix}.$$

CAUTION! Matrix multiplication is not commutative, $\mathbf{AB} \neq \mathbf{BA}$ in general. In our example,

$$\begin{bmatrix} 1 & -4\\ 2 & 5 \end{bmatrix}\begin{bmatrix} 9 & 3\\ -2 & 0 \end{bmatrix} = \begin{bmatrix} 1\cdot9 + (-4)\cdot(-2) & 1\cdot3 + (-4)\cdot0\\ 2\cdot9 + 5\cdot(-2) & 2\cdot3 + 5\cdot0 \end{bmatrix} = \begin{bmatrix} 17 & 3\\ 8 & 6 \end{bmatrix}.$$

Multiplication of an $n \times n$ matrix $\mathbf{A}$ by a vector $\mathbf{x}$ with $n$ components is defined by the same rule: $\mathbf{v} = \mathbf{Ax}$ is the vector with the $n$ components

$$v_j = \sum_{m=1}^{n} a_{jm}x_m, \qquad j = 1, \dots, n.$$

For example,

$$\begin{bmatrix} 12 & 7\\ -8 & 3 \end{bmatrix}\begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} 12x_1 + 7x_2\\ -8x_1 + 3x_2 \end{bmatrix}.$$
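A quick numerical check of (6) and of the CAUTION (again our addition; `@` denotes NumPy's matrix product):

```python
# "Rows into columns" per (6), and the failure of commutativity,
# with the two matrices from the examples above.
import numpy as np

A = np.array([[9.0, 3.0], [-2.0, 0.0]])
B = np.array([[1.0, -4.0], [2.0, 5.0]])

print(A @ B)                          # [[15. -21.], [-2.   8.]]
print(B @ A)                          # [[17.   3.], [ 8.   6.]]
print(np.array_equal(A @ B, B @ A))   # False: AB != BA in general

x = np.array([1.0, 1.0])              # an arbitrary test vector
print(np.array([[12.0, 7.0], [-8.0, 3.0]]) @ x)   # [12x1+7x2, -8x1+3x2] -> [19. -5.]
```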
Systems of ODEs as Vector Equations

Differentiation. The derivative of a matrix (or vector) with variable entries (or components) is obtained by differentiating each entry (or component). Thus, if

$$\mathbf{y}(t) = \begin{bmatrix} y_1(t)\\ y_2(t) \end{bmatrix} = \begin{bmatrix} e^{-2t}\\ \sin t \end{bmatrix}, \quad\text{then}\quad \mathbf{y}'(t) = \begin{bmatrix} y_1'(t)\\ y_2'(t) \end{bmatrix} = \begin{bmatrix} -2e^{-2t}\\ \cos t \end{bmatrix}.$$
Using matrix multiplication and differentiation, we can now write (1) as

(7) $\quad \mathbf{y}' = \begin{bmatrix} y_1'\\ y_2' \end{bmatrix} = \mathbf{Ay} = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} y_1\\ y_2 \end{bmatrix}$, e.g., $\mathbf{y}' = \begin{bmatrix} -5 & 2\\ 13 & \tfrac{1}{2} \end{bmatrix}\begin{bmatrix} y_1\\ y_2 \end{bmatrix}$.

Similarly for (2) by means of an $n \times n$ matrix $\mathbf{A}$ and a column vector $\mathbf{y}$ with $n$ components, namely, $\mathbf{y}' = \mathbf{Ay}$. The vector equation (7) is equivalent to two equations for the components, and these are precisely the two ODEs in (1).
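Form (7) is also what numerical ODE solvers expect. As an illustrative sketch (our addition; SciPy's `solve_ivp`, the time interval, and the initial value $\mathbf{y}(0) = [1, 0]^{\mathsf{T}}$ are all choices made here, not data from the text), one can integrate the example system in (7):

```python
# Integrate y' = A y numerically for the example matrix in (1)/(7).
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-5.0, 2.0],
              [13.0, 0.5]])

sol = solve_ivp(lambda t, y: A @ y,   # right-hand side A y of (7)
                t_span=(0.0, 1.0),    # integrate from t = 0 to t = 1
                y0=[1.0, 0.0])        # arbitrary initial components y1(0), y2(0)
print(sol.y[:, -1])                   # approximate y1(1), y2(1)
```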
Some Further Operations and Terms
Transposition is the operation of writing columns as rows and conversely and is indicated by $^{\mathsf{T}}$. Thus the transpose $\mathbf{A}^{\mathsf{T}}$ of the $2 \times 2$ matrix

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} -5 & 2\\ 13 & \tfrac{1}{2} \end{bmatrix} \quad\text{is}\quad \mathbf{A}^{\mathsf{T}} = \begin{bmatrix} a_{11} & a_{21}\\ a_{12} & a_{22} \end{bmatrix} = \begin{bmatrix} -5 & 13\\ 2 & \tfrac{1}{2} \end{bmatrix}.$$

The transpose of a column vector, say, $\mathbf{v} = \begin{bmatrix} v_1\\ v_2 \end{bmatrix}$, is a row vector, $\mathbf{v}^{\mathsf{T}} = [v_1\;\;v_2]$, and conversely.
Inverse of a Matrix. The $n \times n$ unit matrix $\mathbf{I}$ is the $n \times n$ matrix with main diagonal $1, 1, \dots, 1$ and all other entries zero. If, for a given $n \times n$ matrix $\mathbf{A}$, there is an $n \times n$ matrix $\mathbf{B}$ such that $\mathbf{AB} = \mathbf{BA} = \mathbf{I}$, then $\mathbf{A}$ is called nonsingular and $\mathbf{B}$ is called the inverse of $\mathbf{A}$ and is denoted by $\mathbf{A}^{-1}$; thus

(8) $\quad \mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$.

The inverse $\mathbf{A}^{-1}$ exists if the determinant $\det \mathbf{A}$ of $\mathbf{A}$ is not zero. If $\mathbf{A}$ has no inverse, it is called singular. For $n = 2$,

(9) $\quad \mathbf{A}^{-1} = \dfrac{1}{\det \mathbf{A}} \begin{bmatrix} a_{22} & -a_{12}\\ -a_{21} & a_{11} \end{bmatrix},$

where the determinant of $\mathbf{A}$ is

(10) $\quad \det \mathbf{A} = \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}.$

(For general $n$, see Sec. 7.7, but this will not be needed in this chapter.)
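For $n = 2$, formulas (8)-(10) are easy to verify numerically; the following NumPy sketch (our addition) does so for the matrix from (3).

```python
# Check (9) and (10) for the 2x2 example matrix from (3):
# det A = a11*a22 - a12*a21 = -2.5 - 26 = -28.5 != 0, so A is nonsingular.
import numpy as np

A = np.array([[-5.0, 2.0],
              [13.0, 0.5]])

d = np.linalg.det(A)
Ainv = np.array([[0.5, -2.0],
                 [-13.0, -5.0]]) / d       # formula (9): [[a22, -a12], [-a21, a11]] / det A

print(np.isclose(d, -28.5))                # True, agreeing with (10)
print(np.allclose(Ainv, np.linalg.inv(A))) # True
print(np.allclose(A @ Ainv, np.eye(2)))    # True, i.e., (8) holds
```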
Linear Independence. $r$ given vectors $\mathbf{v}^{(1)}, \dots, \mathbf{v}^{(r)}$ with $n$ components are called a linearly independent set or, more briefly, linearly independent, if

(11) $\quad c_1\mathbf{v}^{(1)} + \cdots + c_r\mathbf{v}^{(r)} = \mathbf{0}$
implies that all scalars $c_1, \dots, c_r$ must be zero; here, $\mathbf{0}$ denotes the zero vector, whose $n$ components are all zero. If (11) also holds for scalars not all zero (so that at least one of these scalars is not zero), then these vectors are called a linearly dependent set or, briefly, linearly dependent, because then at least one of them can be expressed as a linear combination of the others; that is, if, for instance, $c_1 \neq 0$ in (11), then we can obtain

$$\mathbf{v}^{(1)} = -\frac{1}{c_1}\left(c_2\mathbf{v}^{(2)} + \cdots + c_r\mathbf{v}^{(r)}\right).$$
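In practice one can test (11) by the rank of the matrix whose columns are the given vectors: the set is linearly independent exactly when that rank equals $r$. A small NumPy sketch (our addition; the two vectors are arbitrary examples, not from the text):

```python
# Rank test for linear independence of column vectors.
import numpy as np

v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 6.0])          # v2 = 3*v1, so {v1, v2} is linearly dependent

M = np.column_stack([v1, v2])      # matrix with v1, v2 as columns
print(np.linalg.matrix_rank(M))    # 1 < r = 2 -> dependent
```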
Eigenvalues, Eigenvectors
Eigenvalues and eigenvectors will be very important in this chapter (and, as a matter of fact, throughout mathematics).
Let $\mathbf{A} = [a_{jk}]$ be an $n \times n$ matrix. Consider the equation

(12) $\quad \mathbf{Ax} = \lambda\mathbf{x}$

where $\lambda$ is a scalar (a real or complex number) to be determined and $\mathbf{x}$ is a vector to be determined. Now, for every $\lambda$, a solution is $\mathbf{x} = \mathbf{0}$. A scalar $\lambda$ such that (12) holds for some vector $\mathbf{x} \neq \mathbf{0}$ is called an eigenvalue of $\mathbf{A}$, and this vector is called an eigenvector of $\mathbf{A}$ corresponding to this eigenvalue $\lambda$.

We can write (12) as $\mathbf{Ax} - \lambda\mathbf{x} = \mathbf{0}$ or

(13) $\quad (\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0}$.

These are $n$ linear algebraic equations in the $n$ unknowns $x_1, \dots, x_n$ (the components of $\mathbf{x}$). For these equations to have a solution $\mathbf{x} \neq \mathbf{0}$, the determinant of the coefficient matrix $\mathbf{A} - \lambda\mathbf{I}$ must be zero. This is proved as a basic fact in linear algebra (Theorem 4 in Sec. 7.7). In this chapter we need this only for $n = 2$. Then (13) is

(14) $\quad \begin{bmatrix} a_{11} - \lambda & a_{12}\\ a_{21} & a_{22} - \lambda \end{bmatrix}\begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} 0\\ 0 \end{bmatrix};$

in components,

(14*) $\quad \begin{aligned} (a_{11} - \lambda)x_1 + a_{12}x_2 &= 0\\ a_{21}x_1 + (a_{22} - \lambda)x_2 &= 0. \end{aligned}$

Now $\mathbf{A} - \lambda\mathbf{I}$ is singular if and only if its determinant $\det(\mathbf{A} - \lambda\mathbf{I})$, called the characteristic determinant of $\mathbf{A}$ (also for general $n$), is zero. This gives

(15) $\quad \begin{aligned} \det(\mathbf{A} - \lambda\mathbf{I}) &= \begin{vmatrix} a_{11} - \lambda & a_{12}\\ a_{21} & a_{22} - \lambda \end{vmatrix} = (a_{11} - \lambda)(a_{22} - \lambda) - a_{12}a_{21}\\ &= \lambda^2 - (a_{11} + a_{22})\lambda + a_{11}a_{22} - a_{12}a_{21} = 0. \end{aligned}$
This quadratic equation in $\lambda$ is called the characteristic equation of $\mathbf{A}$. Its solutions are the eigenvalues $\lambda_1$ and $\lambda_2$ of $\mathbf{A}$. First determine these. Then use (14*) with $\lambda = \lambda_1$ to determine an eigenvector $\mathbf{x}^{(1)}$ of $\mathbf{A}$ corresponding to $\lambda_1$. Finally use (14*) with $\lambda = \lambda_2$ to find an eigenvector $\mathbf{x}^{(2)}$ of $\mathbf{A}$ corresponding to $\lambda_2$. Note that if $\mathbf{x}$ is an eigenvector of $\mathbf{A}$, so is $k\mathbf{x}$ with any $k \neq 0$.
EXAMPLE 1  Eigenvalue Problem
Find the eigenvalues and eigenvectors of the matrix

(16) $\quad \mathbf{A} = \begin{bmatrix} -4.0 & 4.0\\ -1.6 & 1.2 \end{bmatrix}.$

Solution. The characteristic equation is the quadratic equation

$$\det(\mathbf{A} - \lambda\mathbf{I}) = \begin{vmatrix} -4 - \lambda & 4\\ -1.6 & 1.2 - \lambda \end{vmatrix} = \lambda^2 + 2.8\lambda + 1.6 = 0.$$

It has the solutions $\lambda_1 = -2$ and $\lambda_2 = -0.8$. These are the eigenvalues of $\mathbf{A}$.

Eigenvectors are obtained from (14*). For $\lambda = \lambda_1 = -2$ we have from (14*)

$$\begin{aligned} (-4.0 + 2.0)x_1 + 4.0x_2 &= 0\\ -1.6x_1 + (1.2 + 2.0)x_2 &= 0. \end{aligned}$$

A solution of the first equation is $x_1 = 2$, $x_2 = 1$. This also satisfies the second equation. (Why?) Hence an eigenvector of $\mathbf{A}$ corresponding to $\lambda_1 = -2$ is

(17) $\quad \mathbf{x}^{(1)} = \begin{bmatrix} 2\\ 1 \end{bmatrix}$. Similarly, $\mathbf{x}^{(2)} = \begin{bmatrix} 1\\ 0.8 \end{bmatrix}$

is an eigenvector of $\mathbf{A}$ corresponding to $\lambda_2 = -0.8$, as obtained from (14*) with $\lambda = \lambda_2$. Verify this.
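As a cross-check of Example 1 (our addition; it assumes the matrix as given in (16)), the roots of the characteristic polynomial (15) and the eigenpairs returned by NumPy agree with the hand computation.

```python
# Verify Example 1: characteristic roots from (15) and eigenpairs of A.
import numpy as np

A = np.array([[-4.0, 4.0],
              [-1.6, 1.2]])

# lambda^2 - (a11 + a22)*lambda + (a11*a22 - a12*a21) = 0, as in (15)
print(np.roots([1.0, -np.trace(A), np.linalg.det(A)]))   # -2.0 and -0.8

lam, X = np.linalg.eig(A)
for l, x in zip(lam, X.T):
    # Rescale each eigenvector so its first component is 1; since any k*x
    # (k != 0) is also an eigenvector, [1, 0.5] is x(1)/2 and [1, 0.8] is x(2).
    print(l, x / x[0])
```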