§ 16. Inhomogeneous Systems
As earlier, A(t), b(t) are defined and continuous in an interval J (real- or complex-valued).
The following theorem gives the relationship between solutions of the inhomogeneous differential equation

y' = A(t)y + b(t)   (1)

and solutions of the corresponding homogeneous differential equation.
I. Theorem. Let ȳ(t) be a fixed solution of the inhomogeneous equation (1). If x(t) is an arbitrary solution of the homogeneous equation, then y(t) = ȳ(t) + x(t) is a solution of the inhomogeneous equation, and all solutions of the inhomogeneous equation are obtained in this way.

As in the case n = 1, the proof rests on the simple fact that the difference between two solutions of the inhomogeneous differential equation is a solution of the homogeneous equation. ∎
Thus our task is to find just one solution of the inhomogeneous equation. We make use of a procedure that originated with Lagrange, the method of variation of constants.
II. Method of Variation of Constants. If Y(t) is a fundamental matrix of the homogeneous differential equation, then by 15.II.(e), every solution of the homogeneous equation can be represented in the form Y(t)v, where v runs through all (constant) vectors. In the method of variation of constants the constants (v₁, …, vₙ) are "varied," i.e., replaced by functions of t,

z(t) = Y(t)v(t).

The function v(t) is to be determined such that z(t) is a solution of the inhomogeneous differential equation (1). Substituting z(t) into (1) gives

z' = Y'v + Yv' = AYv + Yv' = AYv + b,

which leads to the condition

Y(t)v' = b(t).   (2)

Since Y is a fundamental matrix, the Wronskian det Y is ≠ 0. Therefore, the inverse matrix Y⁻¹(t) exists and is continuous in J. Multiplying equation (2) on the left by this matrix and integrating gives

v(t) = v(τ) + ∫_τ^t Y⁻¹(s) b(s) ds.

For instance, the solution z(t) with initial value z(τ) = 0 is given by

z(t) = Y(t) ∫_τ^t Y⁻¹(s) b(s) ds.   (3)
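Formula (3) can be carried out in closed form for a simple illustrative system (the matrix and right-hand side below are convenient choices, not the example from the text), and a short numerical check confirms that the resulting z indeed satisfies z' = Az + b:

```python
# Check of the variation-of-constants formula (3) for the illustrative
# constant system A = [[0, 1], [0, 0]], b(t) = (0, 1)^T, tau = 0.
# A fundamental matrix with Y(0) = I is Y(t) = [[1, t], [0, 1]], so
# Y^{-1}(s) b = (-s, 1)^T, and (3) gives z(t) = Y(t)(-t^2/2, t)^T = (t^2/2, t)^T.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([0.0, 1.0])

def Y(t):
    return np.array([[1.0, t], [0.0, 1.0]])

def z(t):
    # the integral of Y^{-1}(s) b = (-s, 1)^T from 0 to t, done exactly
    integral = np.array([-t**2 / 2.0, t])
    return Y(t) @ integral

# z should satisfy z' = A z + b with z(0) = 0; check the derivative
# at t = 2 by a central difference
t, h = 2.0, 1e-6
dz = (z(t + h) - z(t - h)) / (2 * h)
residual = np.linalg.norm(dz - (A @ z(t) + b))
```

Since z(t) = (t²/2, t)ᵀ here, one can also confirm z' = (t, 1)ᵀ = Az + b by hand.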
III. Theorem. The initial value problem (A(t), b(t) ∈ C(J), τ ∈ J)

y' = A(t)y + b(t),   y(τ) = η

has the (uniquely determined) solution

y(t) = X(t)η + X(t) ∫_τ^t X⁻¹(s) b(s) ds,   (4)

where X(t) is the fundamental matrix of the homogeneous differential equation with X(τ) = I.

Proof. The first summand on the right side is a solution of the homogeneous equation with the initial value η, and the second summand is a solution of the inhomogeneous equation with initial value 0 (see (3)). ∎
Remark. If Y(t) is a fundamental matrix, then by (15.5) Y(t) = X(t)Y(τ), whence Y⁻¹(t) = Y⁻¹(τ)X⁻¹(t), and it follows that X(t)X⁻¹(s) = Y(t)Y⁻¹(s). Thus a representation of the solution y to the initial value problem in terms of Y(t) is given by

y(t) = Y(t)Y⁻¹(τ)η + Y(t) ∫_τ^t Y⁻¹(s) b(s) ds.   (4')

Example.
Consider, for t > 0, the system

y' = A(t)y + b(t),

whose homogeneous part is the system treated in 15.V; the general solution of the corresponding homogeneous equation was found there. Using the well-known formula for the inverse of a 2 × 2-matrix,

B = ( a  b )   ⟹   B⁻¹ = 1/(ad − bc) (  d  −b )
    ( c  d )                          ( −c   a ),

and (15.13), one can easily calculate Y⁻¹(t). Forming Y⁻¹(t)b(t) and integrating from 1 to t gives ∫₁ᵗ Y⁻¹(s)b(s) ds, and therefore, from (3),

z(t) = Y(t) ∫₁ᵗ Y⁻¹(s) b(s) ds.

Thus we have found a particular solution of the inhomogeneous differential equation with initial value z(1) = 0.
IV. Exercise. Show that the real linear system

x' = a(t)x − b(t)y,
y' = b(t)x + a(t)y

can be reduced to a single complex linear differential equation

z' = c(t)z

for z(t) = x(t) + iy(t). Derive a linear differential equation for v(t) = z(t)z̄(t) = x²(t) + y²(t).

Use this method to solve the system

x' = x cos t − y sin t,
y' = x sin t + y cos t.
In particular, determine a fundamental system X(t) with X(0) = I and compute its Wronskian det X(t). Show that every solution is periodic. What is the period? Sketch the orbit z(t) = (x(t), y(t)) of the solution with initial values (x(0), y(0)) = (1, 0) in the xy-plane. Determine v(t) = |z(t)|² and find two bounds 0 < α ≤ v(t) ≤ β for this solution.

V. Exercise. Determine the general solution of the system
x' = (3t − 1)x − (1 − t)y + t e^{t²},
y' = −(t + 2)x + (t − 2)y − e^{t²}.
Hint. The homogeneous system has a solution of the form (x(t), y(t)) =
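The reduction asked for in Exercise IV is easy to test numerically. The sketch below uses illustrative constant coefficients a ≡ 1, b ≡ 2 (not the cos/sin system of the exercise), for which the complex equation z' = (1 + 2i)z has the explicit solution z(t) = e^{(1+2i)t}:

```python
# Compare the real system x' = a x - b y, y' = b x + a y (integrated
# with classical RK4) against the explicit solution of z' = (a + ib) z.
# The constant coefficients a = 1, b = 2 are an illustrative choice.
import cmath

a, b = 1.0, 2.0

def f(x, y):
    # right-hand side of the real system
    return (a * x - b * y, b * x + a * y)

def rk4(x, y, t_end, n):
    h = t_end / n
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

x1, y1 = rk4(1.0, 0.0, 1.0, 2000)      # real system, (x(0), y(0)) = (1, 0)
zc = cmath.exp(complex(a, b) * 1.0)    # z(t) = e^{(a+ib)t} with z(0) = 1
err = abs(complex(x1, y1) - zc)
```

The agreement of (x, y) with (Re z, Im z) reflects exactly the identity (ax − by) + i(bx + ay) = (a + ib)(x + iy) behind the reduction.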
Supplement: L¹-Estimation of C-Solutions

We consider solutions in the sense of Carathéodory of the problem

y' = A(t)y + b(t) in J = [τ, τ + a],   y(τ) = η   (5)

under the assumption that (all components of) A(t) and b(t) belong to L(J).
According to Theorem 10.XII, there exists a unique C-solution in J, and it is not difficult to show that the earlier results, in particular Theorems 15.I and 15.III for the homogeneous system and the representation formula (4) for the solution of problem (5), are also valid under these assumptions.
Our aim now is to establish pointwise estimates on y(t) and on the difference y(t) − z(t), where z(t) satisfies

z' = B(t)z + c(t) in J,   (6)

in terms of integral estimates of the given functions. Note that in the corresponding Theorem 14.VI pointwise bounds (and not L¹ bounds) on these functions are required.
VI. Estimation Theorem. Assume that A, B, b, c belong to L(J) and that |A(t)|, |B(t)| ≤ h(t) ∈ L(J). Then the solutions y(t) of (5) and z(t) of (6) satisfy

|y(t)| ≤ e^{H(t)} |η| + ∫_τ^t e^{H(t)−H(s)} |b(s)| ds,   (7)

where H(t) = ∫_τ^t h(s) ds, and

|y(t) − z(t)| ≤ e^{H(t)} |η − ζ| + ∫_τ^t e^{H(t)−H(s)} {|b(s) − c(s)| + |A(s) − B(s)| |y(s)|} ds,   (8)

where ζ = z(τ). For the maximum norm ‖f‖ = max_J |f(t)| and the L¹-norm ‖f‖_{L¹} = ∫_τ^{τ+a} |f(t)| dt, the estimates

‖y‖ ≤ C(|η| + ‖b‖_{L¹}),   C = exp(‖h‖_{L¹}),   (9)

‖y − z‖ ≤ C₁(|η − ζ| + ‖b − c‖_{L¹} + ‖A − B‖_{L¹})   (10)

hold, where C₁ depends only on ‖h‖_{L¹}, |η|, and ‖b‖_{L¹}.
Proof. By 10.XVI,

|y(t)|' ≤ |y'(t)| = |A(t)y + b(t)| ≤ h(t)|y(t)| + |b(t)|.

Hence φ(t) = |y(t)| e^{−H(t)} satisfies φ'(t) ≤ e^{−H(t)} |b(t)| and φ(τ) = |η|, which leads to (7) after integration.

The difference u = z − y satisfies

u' = Bz + c − Ay − b = Bu + (c − b) + (B − A)y.

The estimate (7), applied to u (with ζ − η instead of η, B instead of A, and (c − b) + (B − A)y instead of b), gives (8). ∎
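Estimate (7) can be observed numerically in the scalar case n = 1, where |·| is the absolute value. The example below is an illustrative choice: A(t) = sin t, b ≡ 1, η = 1, majorized by h ≡ 1, so that H(t) = t and the right side of (7) becomes eᵗ + (eᵗ − 1) = 2eᵗ − 1:

```python
# Scalar instance of (5): y' = sin(t) y + 1, y(0) = 1 on [0, 3].
# With the majorant h == 1 (so H(t) = t), estimate (7) reads
# |y(t)| <= e^t * 1 + int_0^t e^{t-s} ds = 2 e^t - 1.
import math

def f(t, y):
    return math.sin(t) * y + 1.0

n, T = 3000, 3.0
step = T / n
y, ok = 1.0, True
for i in range(n + 1):
    t = i * step
    if abs(y) > 2 * math.exp(t) - 1:   # the bound (7) at this grid point
        ok = False
    if i < n:
        # classical RK4 step for y' = f(t, y)
        k1 = f(t, y)
        k2 = f(t + step / 2, y + step / 2 * k1)
        k3 = f(t + step / 2, y + step / 2 * k2)
        k4 = f(t + step, y + step * k3)
        y += step / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

The bound holds with room to spare because sin t < 1 almost everywhere; it is exactly the solution of the majorizing problem φ' = hφ + |b|, φ(0) = |η|.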
Our next theorem deals with the linear case of the comparison theorem 10.XII in the context of C-solutions.
VII. Positivity Theorem. Assume that the real matrix A(t) ∈ L(J), J = [τ, τ + a], is essentially positive; i.e., a_{ij}(t) ≥ 0 for i ≠ j. Then, for u ∈ AC(J),

u' ≥ A(t)u a.e. in J,  u(τ) ≥ 0  implies  u(t) ≥ 0 in J.

Moreover, if u_i(t₁) > 0, then u_i(t) > 0 for t > t₁.
Proof. Let |A(t)| ≤ h(t) ∈ L(J), where |·| is the maximum norm, and H(t) = ∫_τ^t h(s) ds. Then B(t) = A(t) + h(t)I ≥ 0, i.e., b_{ij} ≥ 0 for all i, j, and |B(t)| ≤ 2h(t). The function w(t) = u(t)e^{H(t)} satisfies w' ≥ B(t)w, and the function σ = (ρ, ρ, …, ρ) with ρ(t) = εe^{2H(t)} also satisfies σ' ≥ B(t)σ; both inequalities are easily established. Hence w_ε = w + σ satisfies w_ε' ≥ Bw_ε and w_ε(τ) > 0. As long as w_ε(t) ≥ 0, we have w_ε' ≥ 0, and this shows that w_ε(t) is increasing and positive in J. Since ε > 0 is arbitrary, w(t) is increasing, and both propositions about u(t) are obtained as a result. ∎
This theorem can be used to give an alternative proof for the comparison theorem 10.XII that is valid for C-solutions. As before, Pu = u' —f(t,u) is the
defect of u.
VIII. Comparison Theorem. Suppose f(t, y): J × ℝⁿ → ℝⁿ is quasimonotone increasing in y and satisfies a Lipschitz condition in the maximum norm with h(t) ∈ L(J),

|f(t, y) − f(t, z)| ≤ h(t)|y − z| for y, z ∈ ℝⁿ.

Then, for v, w ∈ AC(J),

v(τ) ≤ w(τ) and Pv ≤ Pw a.e. in J implies v ≤ w in J.

If v_i(t₁) < w_i(t₁) for an index i and t₁ ∈ J, then v_i < w_i for t > t₁.

Proof as an Exercise. Hint: Show that for y, z ∈ ℝⁿ the difference f(t, y) − f(t, z) can be written in the form A(y − z), where A is essentially positive and bounded in norm by h(t), and apply VII (use a decomposition of the f-difference as given for n = 2 by g(y₁, y₂) − g(z₁, z₂) = [g(y₁, y₂) − g(z₁, y₂)] + [g(z₁, y₂) − g(z₁, z₂)]).
§ 17. Systems with Constant Coefficients
I. The Exponential Ansatz. Eigenvalues and Eigenvectors. In this section suppose A = (a_{ij}) in the homogeneous linear system

y' = Ay   (1)

is a constant complex matrix. Solutions can be obtained using the ansatz

y(t) = c e^{λt},   c = (c₁, …, cₙ)ᵀ,   (2)

where λ, c_j are complex constants. Substituting y = c e^{λt} into equation (1) leads to

y' = λc e^{λt} = Ac e^{λt},

i.e., y(t) is a solution of (1) if and only if

Ac = λc.   (3)

A vector c ≠ 0 that satisfies equation (3) is called an eigenvector of the matrix A; the number λ is called the eigenvalue of A corresponding to c.
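The ansatz is easy to verify numerically on an illustrative matrix: for any eigenpair (λ, c) of A, the function y(t) = c e^{λt} must satisfy y' = Ay, i.e., λy = Ay at every t:

```python
# Verify the exponential ansatz (2) for an illustrative 2x2 matrix:
# for an eigenpair (lam, c), y(t) = c e^{lam t} satisfies y' = A y,
# since y' = lam y and A y = lam y.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
lam = eigenvalues[0]
c = eigenvectors[:, 0]

t = 0.7
y = c * np.exp(lam * t)
residual = np.linalg.norm(lam * y - A @ y)   # should vanish
```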
We recall a couple of facts from linear algebra. Equation (3), or what amounts to the same thing,

(A − λI)c = 0,   (3')

is a linear homogeneous system of equations for c. This system has a nontrivial solution if and only if

det(A − λI) = | a₁₁−λ   a₁₂   ⋯   a₁ₙ |
              | a₂₁   a₂₂−λ   ⋯   a₂ₙ |
              |  ⋮       ⋮          ⋮  |
              | aₙ₁   aₙ₂   ⋯   aₙₙ−λ | = 0;   (4)

in other words, the eigenvalues of A are the zeros of the polynomial

P(λ) = det(A − λI),   (5)

called the characteristic polynomial. This polynomial is of degree n, as one can see, for instance, from the definition (14.1) of a determinant. Thus it has n (real or complex) zeros, where each zero is counted according to its multiplicity.

An eigenvector c ≠ 0 corresponding to a zero λ (an eigenvector is ≠ 0 by definition) is obtained by solving the system (3'). It is determined only up to a multiplicative constant. The set σ(A) of eigenvalues is called the spectrum of A.
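The equivalence between (4) and the eigenvalue problem can be checked numerically on an illustrative matrix: the zeros of the characteristic polynomial coincide with the eigenvalues of A.

```python
# Roots of the characteristic polynomial vs. eigenvalues for an
# illustrative 3x3 matrix.  np.poly(A) returns the coefficients of
# det(lam*I - A), which has the same zeros as det(A - lam*I)
# (the two differ only by the factor (-1)^n).
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

coeffs = np.poly(A)
roots = np.sort_complex(np.roots(coeffs))
eigenvalues = np.sort_complex(np.linalg.eigvals(A))
max_diff = np.max(np.abs(roots - eigenvalues))
```

This matrix has one real zero and a complex-conjugate pair, consistent with the statement that the n zeros may be real or complex.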
II. Theorem (Complex Case). The function (λ, c complex, c ≠ 0)

y(t) = c e^{λt}

is a solution of equation (1) if and only if λ is an eigenvalue of the matrix A and c is a corresponding eigenvector.

The solutions

y_i(t) = c_i e^{λ_i t}   (i = 1, …, p)

are linearly independent if and only if the vectors c_i are linearly independent. In particular, they are linearly independent if all eigenvalues λ₁, …, λ_p are distinct. If A has n linearly independent eigenvectors (this is the case, for example, if A has n distinct eigenvalues), then the system obtained in this manner is a fundamental system of solutions.
Proof. By the isomorphism statement proved in Theorem 15.I, the solutions are linearly independent if and only if their initial values y_i(0) = c_i are linearly independent. The statement that p eigenvectors corresponding to distinct eigenvalues are linearly independent is certainly true for p = 1. It is proved in general by establishing the inductive step from p to p + 1. If the eigenvectors c₁, …, c_p are linearly independent and c is an additional eigenvector corresponding to the eigenvalue λ with λ ≠ λ_i (here and in the following equations i runs through the numbers 1 to p), then, as we will now show, a representation of the form

c = Σ_i a_i c_i

is not possible. By applying A to both sides, one would obtain λc = Σ_i a_i λ_i c_i, hence Σ_i a_i λ c_i = Σ_i a_i λ_i c_i, and because such representations are unique, λa_i = λ_i a_i, i.e., a_i = 0 for all i. But then c = 0, contradicting the fact that c is an eigenvector. ∎
III. Real Systems. Obviously, the theorem also holds for real systems. In this case, however, one is interested in real solutions. Here one runs into the difficulty that a real matrix may have complex eigenvalues, which lead in turn to complex solutions y(t). Now, it is immediately obvious that for real A both the real part and the imaginary part of a complex solution are real solutions of (1). Thus from a complex eigenvalue one obtains two real solutions. Note, however, that if the complex quantities λ and c satisfy equation (3), then their complex conjugates λ̄ and c̄ do also. Therefore, λ̄ and c̄ are also an eigenvalue and eigenvector of A and lead to a solution ȳ = c̄ e^{λ̄t}, which is the complex conjugate of y = c e^{λt}. The decomposition of the complex conjugate solution into real and imaginary parts leads to exactly the same two real solutions.
IV. Theorem (Real Case). If λ = μ + iν (ν ≠ 0) is a complex eigenvalue of the real matrix A and c = a + ib is a corresponding eigenvector, then the complex solution y = c e^{λt} produces two real solutions:

u(t) = Re y = e^{μt}{a cos νt − b sin νt},
v(t) = Im y = e^{μt}{a sin νt + b cos νt}.

Suppose there are 2p distinct, nonreal eigenvalues

λ₁, …, λ_p,  λ_{p+1} = λ̄₁, …, λ_{2p} = λ̄_p,

and q distinct real eigenvalues λ_i (i = 2p + 1, …, 2p + q). If for the 2p distinct, nonreal eigenvalues one constructs 2p real solutions

u_i(t), v_i(t)   (i = 1, …, p)

in the manner described above and q real solutions y_i corresponding to the q distinct real eigenvalues using (2), then the resulting 2p + q solutions are linearly independent.
A corresponding result also holds if some of the λ_i are equal, i.e., if there are multiple eigenvalues. If the 2p + q corresponding eigenvectors are linearly independent (over ℂ), then the same is true for the corresponding 2p + q solutions of the form (2), and it remains true for the 2p + q real solutions obtained after splitting into real and imaginary parts. In particular, if A has n linearly independent eigenvectors, then one obtains a real fundamental system.
The independence of these real solutions follows from the fact that the original solutions
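Theorem IV can be checked on the illustrative matrix A = [[0, −1], [1, 0]], which has the eigenvalue λ = i (so μ = 0, ν = 1) with eigenvector c = (1, −i), i.e., a = (1, 0) and b = (0, −1):

```python
# For A = [[0, -1], [1, 0]], lam = i, c = (1, -i) = a + i b, Theorem IV
# gives the two real solutions
#   u(t) = a cos t - b sin t = (cos t,  sin t),
#   v(t) = a sin t + b cos t = (sin t, -cos t),
# both of which must solve y' = A y.
import math

def A_apply(w):
    return (-w[1], w[0])           # multiplication by A = [[0, -1], [1, 0]]

def u(t):
    return (math.cos(t), math.sin(t))

def v(t):
    return (math.sin(t), -math.cos(t))

t, eps = 0.9, 1e-6
ok = True
for sol in (u, v):
    # derivative by central difference, compared with A * sol(t)
    d = [(sol(t + eps)[i] - sol(t - eps)[i]) / (2 * eps) for i in range(2)]
    Aw = A_apply(sol(t))
    if max(abs(d[i] - Aw[i]) for i in range(2)) > 1e-8:
        ok = False
```

Here μ = 0, so the factor e^{μt} is 1 and the two solutions trace out circles, as expected for this rotation matrix.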