Z' = Y'C = AYC = AZ.
V. Example.

These real solutions, combined with the solution

    y_3(t) = ⎛ 1 ⎞
             ⎜ 0 ⎟ e^t,
             ⎝ 2 ⎠

constitute a real fundamental system. The independence of these real solutions follows from the fact that the original solutions y_i (i = 1, ..., 2p + q) are linearly independent by Theorem II and can be represented as linear combinations of the above real solutions; cf. 15.II.(c).
VI. Linear Transformations. We consider the results obtained above from a somewhat different point of view. If C is a nonsingular constant matrix, then the mapping

    y = Cz,     z = C⁻¹y     (det C ≠ 0)                                  (6)

transforms a solution y(t) of (1) into a solution z(t) of the system

    z' = Bz   with   B = C⁻¹AC,                                           (7)

and conversely.
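To make the change of variables concrete, here is a small numerical check; it is an added sketch, not part of the original text, and the matrices A, C and the initial value are arbitrary choices. It verifies that y = Cz carries solutions of (7) into solutions of (1), representing both solutions by matrix exponentials.

```python
# Added illustration: y = Cz maps solutions of z' = Bz, B = C^{-1} A C,
# to solutions of y' = Ay.  A and C are arbitrary example matrices.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 1.0],
              [0.0, 2.0]])             # nonsingular (det C = 2)
B = np.linalg.solve(C, A @ C)          # B = C^{-1} A C, cf. (7)

y0 = np.array([1.0, -1.0])
z0 = np.linalg.solve(C, y0)            # z(0) = C^{-1} y(0), cf. (6)

for t in (0.5, 1.0, 2.0):
    y_t = expm(A * t) @ y0             # solution of y' = Ay
    z_t = expm(B * t) @ z0             # solution of z' = Bz
    assert np.allclose(C @ z_t, y_t)   # y(t) = C z(t)
```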
Suppose now that A has n linearly independent eigenvectors c_1, ..., c_n, Ac_i = λ_i c_i. If one sets C = (c_1, ..., c_n), then

    AC = (Ac_1, ..., Ac_n) = (λ_1 c_1, ..., λ_n c_n) = CD,

where D = diag(λ_1, ..., λ_n) (i.e., d_ii = λ_i, d_ij = 0 otherwise) is a diagonal matrix. Thus for this choice of C,

    B = C⁻¹AC = D,

and then (7) reads simply

    z_1' = λ_1 z_1,  ...,  z_n' = λ_n z_n.

It is easy to find a fundamental system of solutions for this system, namely

    Z(t) = (z_1, ..., z_n) = diag(e^{λ_1 t}, ..., e^{λ_n t}),             (8)

the diagonal matrix with the functions e^{λ_i t} on the diagonal and zeros elsewhere. Going back to y = Cz, we obtain the fundamental system of Theorem II,

    Y = CZ = (Cz_1, ..., Cz_n) = (c_1 e^{λ_1 t}, ..., c_n e^{λ_n t}).     (9)
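The following sketch is an added illustration (the 2×2 matrix is an arbitrary example with distinct eigenvalues, not taken from the text): it builds the fundamental matrix of (8), (9) from a numerical eigendecomposition and checks that Y' = AY.

```python
# Added illustration: fundamental system Y(t) = (c_1 e^{l_1 t}, ..., c_n e^{l_n t})
# for a matrix with n linearly independent eigenvectors.
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])                   # eigenvalues 3 and 2
lam, C = np.linalg.eig(A)                    # columns of C are eigenvectors c_i

def Y(t):
    """Fundamental matrix Y(t) = C * diag(e^{lam_i t}), cf. (8) and (9)."""
    return C @ np.diag(np.exp(lam * t))

t, h = 0.7, 1e-6
dY = (Y(t + h) - Y(t - h)) / (2 * h)         # numerical derivative Y'(t)
assert np.allclose(dY, A @ Y(t), atol=1e-5)  # Y' = AY
```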
Summary. In the case where there are n distinct eigenvalues and, more generally, in the case of n linearly independent eigenvectors, there is a fundamental system of solutions of the form (2) (the simplest example, A = I with eigenvectors e_1, ..., e_n, shows that it is also possible to have n linearly independent eigenvectors in the case of multiple zeros of the characteristic polynomial).

VII. Jordan Normal Form of a Matrix. In order to handle the general case, we make use of a result from matrix theory without proof. It says that for every real or complex matrix A there exists a nonsingular matrix C (in general, C will be complex) such that B = C⁻¹AC has the so-called Jordan normal form

        ⎛ J_1               ⎞
    B = ⎜      J_2          ⎟                                             (10)
        ⎜           ⋱       ⎟
        ⎝               J_k ⎠

where the Jordan block J_i is a square matrix of the form

          ⎛ λ_i  1    0   ...   0  ⎞
          ⎜  0  λ_i   1   ...   0  ⎟
    J_i = ⎜  ⋮             ⋱       ⎟                                      (11)
          ⎜  0   0   ...  λ_i   1  ⎟
          ⎝  0   0   ...   0   λ_i ⎠

with r_i rows and columns; outside of the Jordan blocks, B consists entirely of zeros. Here r_1 + ··· + r_k = n, and

    det(A − λI) = (λ_1 − λ)^{r_1} ··· (λ_k − λ)^{r_k}.

Note that the main diagonal of B consists of eigenvalues of A and that each block is made up of one and the same eigenvalue. However, the same eigenvalue can appear in more than one block; for example, the matrix I is in Jordan normal form (k = n, r_i = 1, λ_i = 1).
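A computer algebra system can produce such a decomposition explicitly. The sketch below is an added illustration (SymPy, with an arbitrarily chosen matrix, not an example from the text); it computes a transformation matrix C together with B = C⁻¹AC in Jordan normal form.

```python
# Added illustration (SymPy): Jordan normal form B = C^{-1} A C, cf. (10), (11).
from sympy import Matrix

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])                # arbitrary example matrix

C, B = A.jordan_form()                 # SymPy returns (C, B) with A = C*B*C^{-1}
assert C.inv() * A * C == B            # hence B = C^{-1} A C
print(B)                               # a 2x2 block for eigenvalue 2, a 1x1 block for 3
```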
The system corresponding to a Jordan block J with r rows and diagonal element λ is given by x' = Jx, or, written out,

    x_1' = λx_1 + x_2
    x_2' = λx_2 + x_3
        ⋮                                                                 (12)
    x_{r-1}' = λx_{r-1} + x_r
    x_r' = λx_r,

and can be easily solved (one begins with the last equation). The matrix

           ⎛ e^{λt}  te^{λt}  (t²/2!)e^{λt}  ...  (t^{r-1}/(r-1)!)e^{λt} ⎞
           ⎜   0      e^{λt}     te^{λt}     ...  (t^{r-2}/(r-2)!)e^{λt} ⎟
    X(t) = ⎜   0        0        e^{λt}      ...  (t^{r-3}/(r-3)!)e^{λt} ⎟     (13)
           ⎜   ⋮                                          ⋮              ⎟
           ⎝   0        0           0        ...          e^{λt}         ⎠

is a fundamental matrix for equation (12) with X(0) = I.

Proceeding in this way, a fundamental matrix Z(t) for equation (7) can be constructed if B is a Jordan matrix; one has simply to insert the corresponding solution (13) into each Jordan block. For example, if

        ⎛ λ  1  0             ⎞
        ⎜ 0  λ  1             ⎟
    B = ⎜ 0  0  λ             ⎟
        ⎜          μ  1       ⎟
        ⎜          0  μ       ⎟
        ⎝                  ν  ⎠

(all entries outside the blocks are zero), then the corresponding fundamental matrix Z(t) with Z(0) = I reads

           ⎛ e^{λt}  te^{λt}  (t²/2)e^{λt}                              ⎞
           ⎜   0      e^{λt}    te^{λt}                                 ⎟
    Z(t) = ⎜   0        0       e^{λt}                                  ⎟
           ⎜                               e^{μt}  te^{μt}              ⎟
           ⎜                                 0      e^{μt}              ⎟
           ⎝                                                   e^{νt}   ⎠

with zeros outside the blocks.
Thus, if B = C⁻¹AC has Jordan normal form, then each column of Z(t) is a solution of (7) of the form

    z(t) = p(t)e^{λt},

where λ is an eigenvalue of A (note that σ(A) = σ(B)). Consequently, y = Cz is a solution of equation (1) of the form

    y(t) = p_m(t)e^{λt}   with   p_m(t) = a_0 + a_1 t + ··· + a_m t^m,

where p_m is a (vector-valued) polynomial of degree ≤ m.
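For a single Jordan block the fundamental matrix (13) is just the matrix exponential e^{Jt}. The following sketch is an added illustration (the values of λ, r and t are arbitrary); it compares the closed formula with a numerically computed exponential.

```python
# Added illustration: for a Jordan block J (size r, eigenvalue lam), the
# fundamental matrix (13) with X(0) = I has entries
# X[i, j] = t^(j-i)/(j-i)! * e^(lam*t) for j >= i, and X(t) = exp(J*t).
import numpy as np
from math import factorial
from scipy.linalg import expm

lam, r, t = -0.5, 4, 1.3                           # arbitrary example data
J = lam * np.eye(r) + np.diag(np.ones(r - 1), 1)   # Jordan block, cf. (11)

X = np.zeros((r, r))
for i in range(r):
    for j in range(i, r):
        X[i, j] = t**(j - i) / factorial(j - i) * np.exp(lam * t)

assert np.allclose(X, expm(J * t))                 # formula (13) equals e^{Jt}
```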
VIII. Summary. For every k-fold zero λ of the characteristic polynomial there exist k linearly independent solutions

    y_m(t) = p_m(t)e^{λt},   m = 0, 1, ..., k−1,                          (14)

in which every component of p_m(t) is a polynomial of degree ≤ m. If carried out for every eigenvalue, this construction leads to n solutions, which form a fundamental system.
If A is real, then a real fundamental system is obtained by taking, in case λ is nonreal, two real solutions

    v_1 = Re y_m,    v_2 = Im y_m

from each of the k solutions of (14) and ignoring the corresponding k solutions for the complex conjugate eigenvalue λ̄.

The degree of the polynomials that arise can be determined from the Jordan normal form. In the previous example, where B is a Jordan matrix with n = 6, there is a solution y = p(t)e^{λt} with degree p = 2, but no solution with higher degree, and this is true even if λ = μ = ν. If λ ≠ μ, there is a solution y = p(t)e^{μt} with degree p = 1, but no such solution with degree p = 2, etc.
The following terminology is useful here.
Algebraic and Geometric Multiplicity.
If λ is a k-fold zero of the characteristic polynomial of A, then the number m(λ) := k is called the algebraic multiplicity of the eigenvalue, and the dimension m'(λ) of the corresponding eigenspace, that is, the maximal number of its linearly independent eigenvectors, is called the geometric multiplicity. Here 1 ≤ m'(λ) ≤ m(λ) ≤ n.

If m(λ) = m'(λ), the eigenvalue is called semisimple. In this case the number λ appears m(λ) times in the main diagonal of the Jordan normal form, but there is no 1 in the superdiagonal, and in the corresponding m(λ) solutions (14) the p_m(t) are constant polynomials (namely the eigenvectors). If this is true of all eigenvalues, then the Jordan matrix corresponding to A is a diagonal matrix, and the matrix A is said to be diagonalizable.

The calculation of the solutions is easily accomplished once the Jordan normal form B = C⁻¹AC and the transformation matrix C have been determined. However, the k = m(λ) solutions belonging to the eigenvalue λ can also be obtained, a step at a time, by first determining the corresponding eigenvectors c that lead to the solutions y = ce^{λt}. Then, one after another, the ansätze

    y = (a + bt)e^{λt},    y = (a + bt + ct²)e^{λt},    ...

(a, b, c, ... constant vectors) are applied until m(λ) solutions have been found. By equating coefficients of like terms, one sees that the coefficient of the highest power of t is always an eigenvector.
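Both multiplicities are easy to compute with a computer algebra system. The sketch below is an added illustration (SymPy); it uses the matrix of Example IX below, for which the two notions differ.

```python
# Added illustration (SymPy): algebraic multiplicity m(lam) from the
# characteristic polynomial, geometric multiplicity m'(lam) as the dimension
# of the eigenspace, i.e. of the null space of A - lam*I.
from sympy import Matrix, eye

A = Matrix([[1, -1],
            [4, -3]])                  # the matrix of Example IX

for lam, alg_mult in A.eigenvals().items():
    geo_mult = len((A - lam * eye(2)).nullspace())
    print(f"lambda = {lam}: m = {alg_mult}, m' = {geo_mult}")
# prints: lambda = -1: m = 2, m' = 1   (not semisimple: one Jordan block of size 2)
```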
IX. Example. n = 2, y(t) = (x(t), y(t))ᵀ,

    x' = x − y                         ⎛ 1  −1 ⎞
    y' = 4x − 3y,                 A =  ⎝ 4  −3 ⎠ .

From

    det(A − λI) = λ² + 2λ + 1

it follows that λ = −1 with algebraic multiplicity m(λ) = 2 and

    A − λI = A + I = ⎛ 2  −1 ⎞ .
                     ⎝ 4  −2 ⎠

The corresponding homogeneous system (3') has only one linearly independent solution,

    c = ⎛ 1 ⎞ .
        ⎝ 2 ⎠

Thus we have m'(λ) = 1. The corresponding solution is

    ⎛ 1 ⎞ e^{−t}.
    ⎝ 2 ⎠
A second, linearly independent, solution can be obtained using the ansatz

    ⎛ x ⎞   ⎛ a + bt ⎞
    ⎝ y ⎠ = ⎝ c + dt ⎠ e^{−t}.

We have that

    ⎛ x' ⎞   ⎛ b − a − bt ⎞            ⎛ a + bt ⎞
    ⎝ y' ⎠ = ⎝ d − c − dt ⎠ e^{−t} = A ⎝ c + dt ⎠ e^{−t}

holds if and only if

    A ⎛ b ⎞ = − ⎛ b ⎞     and     A ⎛ a ⎞ = ⎛ b − a ⎞ .
      ⎝ d ⎠     ⎝ d ⎠               ⎝ c ⎠   ⎝ d − c ⎠

The first equation has the eigenvector c as a solution, i.e., b = 1, d = 2. The second equation is satisfied, for example, if a = 0, c = −1. The corresponding solution

    ⎛ x ⎞   ⎛    t    ⎞
    ⎝ y ⎠ = ⎝ −1 + 2t ⎠ e^{−t}

is linearly independent from the first solution.
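The example can also be checked symbolically. The following sketch is an added verification (SymPy, not part of the text); it confirms that both solutions satisfy y' = Ay and that they are linearly independent.

```python
# Added illustration (SymPy): check the two solutions of Example IX.
from sympy import Matrix, symbols, exp, diff, simplify

t = symbols('t')
A = Matrix([[1, -1],
            [4, -3]])

y1 = Matrix([1, 2]) * exp(-t)              # eigenvector solution
y2 = Matrix([t, -1 + 2*t]) * exp(-t)       # solution obtained from the ansatz

for y in (y1, y2):
    residual = (diff(y, t) - A * y).applyfunc(simplify)
    assert residual == Matrix([0, 0])      # y' = Ay

W0 = Matrix.hstack(y1, y2).subs(t, 0)      # Wronskian matrix at t = 0
assert W0.det() != 0                       # nonzero, so the solutions are independent
```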
X. Real Systems for n = 2. We consider the real system for y = (x, y)ᵀ

    ⎛ x ⎞'     ⎛ x ⎞              ⎛ a_11  a_12 ⎞
    ⎝ y ⎠  = A ⎝ y ⎠ ,       A =  ⎝ a_21  a_22 ⎠ ,                        (15)

under the assumption D = det A ≠ 0. This implies that λ = 0 is not an eigenvalue. The corresponding characteristic polynomial

    P(λ) = det(A − λI) = λ² − Sλ + D     with   S = tr A = a_11 + a_22

has zeros

    λ_{1,2} = S/2 ± √(S²/4 − D).
Real Normal Forms. Our first goal is to show that every real system (15) can be reduced by means of a real affine transformation (6), (7) to one of the following normal forms:
    R(λ, μ) = ⎛ λ  0 ⎞ ,     R_a(λ) = ⎛ λ  1 ⎞ ,     K(α, ω) = ⎛  α  ω ⎞
              ⎝ 0  μ ⎠                ⎝ 0  λ ⎠                 ⎝ −ω  α ⎠

Here λ, μ, α, ω are real numbers with λμ ≠ 0 and ω > 0. If S² > 4D, we have the real case (R). If S² < 4D, the complex case (K) occurs, while if S² = 4D, the case (R) or (R_a) occurs, depending on whether or not A has two linearly independent eigenvectors. We construct now the affine transformation C.

Case (R). There are two (real) eigenvectors c, d with Ac = λc, Ad = μd. If C = (c, d), then C⁻¹AC = R(λ, μ); cf. VI.

Case (R_a). We have λ = μ and only one eigenvector c. However, as is shown in linear algebra, there is a vector d linearly independent of c such that (A − λI)d = c. The matrix C = (c, d) again satisfies C⁻¹AC = R_a(λ).

Case (K). We have Ā = A; hence Ac = λc and Ac̄ = λ̄c̄. The matrix (c, c̄) transforms the system to the normal form B = diag(λ, λ̄). However, we want to find a real normal form. This can be obtained as follows. Let c = a + ib, λ = α + iω with ω > 0. Separating the equation Ac = λc into real and imaginary parts leads to

    Aa = αa − ωb
    Ab = αb + ωa ,      i.e.,   (Aa, Ab) = (a, b) ⎛  α  ω ⎞ .
                                                  ⎝ −ω  α ⎠

Since c, c̄ are linearly independent and can be represented in terms of a, b, it follows that a and b are also linearly independent; i.e., the matrix C = (a, b) is regular and transforms the system to the real form K(α, ω).  ∎
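The construction in case (K) is easy to carry out numerically. The sketch below is an added illustration (the matrix A is an arbitrary example with S² < 4D, not taken from the text); it extracts a and b from a complex eigenvector and checks that C⁻¹AC = K(α, ω).

```python
# Added illustration: real normal form K(alpha, omega) of a 2x2 matrix with
# complex eigenvalues lam = alpha + i*omega.  With c = a + i*b an eigenvector,
# C = (a, b) satisfies C^{-1} A C = [[alpha, omega], [-omega, alpha]].
import numpy as np

A = np.array([[1.0, -2.0],
              [4.0, -3.0]])                    # S = -2, D = 5, so S^2 < 4D

lam_all, vecs = np.linalg.eig(A)
i = np.argmax(lam_all.imag)                    # pick the eigenvalue with omega > 0
lam, c = lam_all[i], vecs[:, i]
alpha, omega = lam.real, lam.imag

C = np.column_stack([c.real, c.imag])          # C = (a, b)
K = np.linalg.solve(C, A @ C)                  # C^{-1} A C

assert np.allclose(K, [[alpha, omega], [-omega, alpha]])
```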
We investigate now each of these cases and construct a phase portrait of the differential equation, from which the global behavior of the solutions can be seen. If equations (1) and (7) are coupled by the transformation (6), then their phase portraits are also coupled by the same affine mapping y = Cz of the plane, which transforms straight lines into straight lines, circles into ellipses, ..., but preserves the characteristic features such as the behavior as t → ∞. In this way we obtain an insight into the global properties of all systems with det A ≠ 0.
(a) A = R(λ, μ) with λ, μ < 0. The solutions of the system x' = λx, y' = μy are given by (x(t), y(t)) = (ae^{λt}, be^{μt}) (a, b real), their trajectories by

    y/b = (x/a)^{μ/λ}      (a, b ≠ 0, with x/a, y/b > 0).

The special cases a = 0 or b = 0 are simple. All solutions tend to 0 as t → ∞. In the case λ = μ, the trajectories are half-lines; in the general case, corresponding power curves. The origin is called a (stable) node.
(b) A = R_a(λ) with λ < 0. From x' = λx + y, y' = λy, one obtains

    x(t) = (a + bt)e^{λt},     y(t) = be^{λt}.
[Figure: Stable nodes. A = R(λ, λ) with λ < 0 (left) and A = R(λ, μ) with λ < 0, μ = 2λ (right).]

[Figure: Stable node for A = R_a(λ) with λ < 0.]
For a = 0 (this means that (x(0), y(0)) = (0, b)), we have x = ty and λt = log(y/b). Thus the trajectories are given by

    λx = y log(y/b)      for b ≠ 0 (with y/b > 0).

The positive and negative x-axis are also trajectories. Here, too, all solutions tend to the origin as t → ∞; the origin is again called a (stable) node.
(c) A = R(λ, μ) with λ < 0 < μ. The solutions and their trajectories are determined formally as in (a). However, the phase portrait has a completely different appearance. There are only two trajectories that point toward the origin (b = 0). All of the other solutions (with (a, b) ≠ (0, 0)) tend to infinity: x²(t) + y²(t) → ∞ as t → ∞. The origin is called a saddle point.
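The phase portraits described in (a)-(c) can be drawn directly from the explicit solutions. The following sketch is an added illustration (matplotlib); it plots a few trajectories of the normal-form system x' = λx, y' = μy for a stable node and for a saddle.

```python
# Added illustration: trajectories (x, y) = (a e^{lam t}, b e^{mu t}) of the
# normal-form system x' = lam*x, y' = mu*y (stable node and saddle).
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-2.0, 2.0, 400)
cases = {"stable node (lam = -1, mu = -2)": (-1.0, -2.0),
         "saddle (lam = -1, mu = 1)": (-1.0, 1.0)}

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, (title, (lam, mu)) in zip(axes, cases.items()):
    for a in (-1.0, -0.5, 0.5, 1.0):
        for b in (-1.0, -0.5, 0.5, 1.0):
            ax.plot(a * np.exp(lam * t), b * np.exp(mu * t), "b", lw=0.7)
    ax.set(title=title, xlim=(-2, 2), ylim=(-2, 2), xlabel="x", ylabel="y")
plt.tight_layout()
plt.show()
```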