

II. Differential Equations for Complex-Valued Functions of a Real Variable. We first extend the notion of an initial value problem by allowing

the functions that appear to be complex-valued. The independent variable x, however, remains a real variable as before. Since $\mathbb{C}$ and $\mathbb{R}^2$ (and likewise $\mathbb{C}^n$ and $\mathbb{R}^{2n}$) are equivalent as sets, as metric spaces, and with respect to the additive structure, we can represent the complex-valued function $y\colon J \to \mathbb{C}$ as a pair of real functions

$$y(x) = (u(x), v(x)) = u(x) + i\,v(x) \quad \text{with } u = \operatorname{Re} y,\ v = \operatorname{Im} y.$$

Likewise, we write $y = (u, v)$ for $y \in \mathbb{C}^n$ ($u, v \in \mathbb{R}^n$). Then $f(x, y)\colon J \times \mathbb{C}^n \to \mathbb{C}^n$ can be written in the form

$$f(x, y) = (g(x, u, v),\, h(x, u, v)),$$

and hence the "real-complex" system of n differential equations

$$y' = f(x, y) \tag{1}$$

is equivalent to a real system of 2n differential equations

$$u' = g(x, u, v), \qquad v' = h(x, u, v). \tag{2}$$

Further, f is continuous (respectively Lipschitz continuous) with respect to y if and only if g and h are continuous (respectively Lipschitz continuous) with respect to (u, v).

It follows that the earlier theorems for real systems remain valid for systems with complex-valued functions. This statement can also be verified directly, since the earlier proofs remain valid without changes if C(J) is understood to be a Banach space of complex-valued functions.
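As an aside (not part of the text), the passage from (1) to (2) is also how such systems are treated numerically. The following minimal Python sketch is ours; the helper names complex_to_real_system and rk4 are invented for the illustration. It integrates $y' = i y$, whose exact solution is $y(x) = y(0)\,e^{ix}$, once directly over $\mathbb{C}$ and once as the real system (2), and confirms that the two routes agree.

```python
import numpy as np

def complex_to_real_system(f):
    """Turn a complex right-hand side f(x, y), y in C^n, into the equivalent
    real right-hand side F(x, w) for w = (u, v) in R^(2n), as in (1) -> (2)."""
    def F(x, w):
        n = w.size // 2
        y = w[:n] + 1j * w[n:]                     # y = u + i v
        fy = np.asarray(f(x, y))
        return np.concatenate([fy.real, fy.imag])  # (g(x,u,v), h(x,u,v))
    return F

def rk4(F, x0, w0, x1, steps=1000):
    """Classical fourth-order Runge-Kutta for w' = F(x, w) on [x0, x1]."""
    w = np.array(w0)
    x, h = x0, (x1 - x0) / steps
    for _ in range(steps):
        k1 = F(x, w)
        k2 = F(x + h / 2, w + h / 2 * k1)
        k3 = F(x + h / 2, w + h / 2 * k2)
        k4 = F(x + h, w + h * k3)
        w = w + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return w

# y' = i y, y(0) = 1, exact solution y(x) = exp(i x).
f = lambda x, y: 1j * y
y_c = rk4(f, 0.0, np.array([1.0 + 0.0j]), 1.0)                      # over C
w = rk4(complex_to_real_system(f), 0.0, np.array([1.0, 0.0]), 1.0)  # system (2)
print(abs(y_c[0] - (w[0] + 1j * w[1])))   # agreement of the two routes (tiny)
print(abs(y_c[0] - np.exp(1j)))           # agreement with the exact solution (tiny)
```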


Example. In the differential equation

$$y' = \lambda y + g(x)$$

let $\lambda = \mu + i\nu$ and $g(x) = h(x) + i\,k(x) \in C(\mathbb{R})$. The equivalent real system is

$$u' = \mu u - \nu v + h(x), \qquad v' = \nu u + \mu v + k(x).$$

As in the real case, the general solution is

$$y(x; C) = C e^{\lambda x} + \int_0^x e^{\lambda(x - t)}\, g(t)\, dt \qquad (C \in \mathbb{C}).$$

The proof is left as an exercise for the reader.
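A sketch of that verification (with the lower limit 0 chosen above; any other fixed lower limit only changes the constant C): write the formula as $y(x;C) = e^{\lambda x}\bigl(C + \int_0^x e^{-\lambda t} g(t)\,dt\bigr)$ and differentiate,

$$y'(x; C) = \lambda e^{\lambda x}\Bigl(C + \int_0^x e^{-\lambda t} g(t)\,dt\Bigr) + e^{\lambda x}\, e^{-\lambda x}\, g(x) = \lambda\, y(x; C) + g(x).$$

Conversely, the difference of two solutions satisfies the homogeneous equation $y' = \lambda y$ and is therefore a constant multiple of $e^{\lambda x}$, so the family above contains every solution.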

It is frequently more convenient to work with (1) instead of (2) from a practical point of view (cf. the above example). There is also an important theoretical reason for preferring (1). In the example given above, the right-hand side of the differential equation is a holomorphic function of the parameter $\lambda \in \mathbb{C}$. This property arises frequently, and an important theorem says that if the right-hand side is holomorphic in $\lambda$, then the solutions are also holomorphic in $\lambda$ (this is evident in the example). This theorem is proved in the next section.

It is needed in a later chapter in the investigation of eigenvalue problems, among others.

Notice. The theorems in §12 are true for systems where the right-hand sides and solutions or approximate solutions are real-valued, as well as for those where they are complex-valued; in both cases, however, the independent variable x is always real.

The following estimation theorem, Theorem III, deals with the initial value problem

$$y' = f(x, y) \ \text{ in } J, \qquad y(\xi) = \eta. \tag{3}$$

It gives an estimate for the difference z(x) − y(x), where y(x) is a solution to (3) and z(x) is an "approximate solution." The two quantities

$$|z(\xi) - \eta| \qquad \text{and} \qquad Pz = z' - f(x, z) \quad \text{(defect)}$$

are used to measure how "good" z(x) is as an approximation to y(x). Theorem III establishes a bound $\rho(x)$ for the difference $|z(x) - y(x)|$ that depends on a bound on the initial deviation (a), a bound on the defect (b), and, most important, a condition (d) on f that includes the Lipschitz condition as a special case.

III. Estimation Theorem.

Let the vector functions y(x), z(x) and the real-valued function $\rho(x)$ be defined and differentiable in the interval $J\colon \xi \le x \le \xi + a$. Let the real-valued functions $\delta(x)$ and $\omega(x, z)$ be defined in $J$ and $J \times \mathbb{R}$, respectively, and suppose the following conditions are satisfied:

(a) $|z(\xi) - y(\xi)| < \rho(\xi)$;

(b) $y' = f(x, y)$, $\ |z' - f(x, z)| \le \delta(x)$ in J;

(c) $\rho' > \delta(x) + \omega(x, \rho(x))$ in J; and

(d) $|f(x, y) - f(x, z)| \le \omega(x, |y - z|)$ in J.

Then

$$|z(x) - y(x)| < \rho(x) \quad \text{in } J.$$

If $\omega(x, z)$ is continuous and locally Lipschitz continuous in z, then the theorem also holds with "$\le$" in place of "$<$" in all places. Naturally, we assume that f is defined in a set D containing graph y and graph z.

In the proof, we need the following Lemma.

IV. Lemma. If the vector function g(x) is differentiable at $x_0$, then the scalar function $\phi(x) = |g(x)|$ satisfies

$$|D^+\phi(x_0)| \le |g'(x_0)| \quad \text{and} \quad |D^-\phi(x_0)| \le |g'(x_0)|.$$

As a matter of fact, the one-sided derivatives $D^+\phi$ and $D^-\phi$ exist at $x_0$; cf. Appendix B.

The proof proceeds by passing to the limit as $h \to 0^+$ in the inequality (here $h > 0$)

$$\frac{\phi(x_0 + h) - \phi(x_0)}{h} = \frac{|g(x_0 + h)| - |g(x_0)|}{h} \le \frac{|g(x_0 + h) - g(x_0)|}{h},$$

and similarly for the second inequality. ∎
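A simple illustration (ours): take the scalar function g(x) = x and $x_0 = 0$. Then $\phi(x) = |x|$ is not differentiable at 0, but its one-sided derivatives are $D^+\phi(0) = 1$ and $D^-\phi(0) = -1$, and both are bounded in absolute value by $|g'(0)| = 1$, as the lemma asserts.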

Proof of the Estimation Theorem III. We apply Theorem 9.III with $\phi(x) = |z(x) - y(x)|$, with $\rho(x)$ as the comparison function, and with $\omega$ in place of f. By hypothesis (a), the requirement $\phi(\xi) < \rho(\xi)$ is satisfied. Since $P\rho = \rho' - \omega(x, \rho) > \delta(x)$ by (c), it remains to prove that $P\phi \le \delta(x)$. Indeed, it follows from the lemma and assumptions (b), (d) that

$$D\phi(x) \le |z'(x) - y'(x)| = |z' - f(x, z) + f(x, z) - f(x, y)| \le \delta(x) + \omega(x, |z - y|) = \delta(x) + \omega(x, \phi).$$

In the version of the theorem with "$\le$", we apply Theorem 9.VIII instead. ∎

The most important special case is the following


V. Lipschitz Condition.

Theorem. If f satisfies a Lipschitz condition in D,

$$|f(x, y_1) - f(x, y_2)| \le L\,|y_1 - y_2|, \tag{4}$$

and if y(x) is a solution and z(x) an approximate solution of the initial value problem (3) in J such that

$$|z(\xi) - \eta| \le \gamma, \qquad |z'(x) - f(x, z(x))| \le \delta \quad \text{in } J \tag{5}$$

($\gamma$, $\delta$ are constants), then the estimate

$$|y(x) - z(x)| \le \gamma\, e^{L|x - \xi|} + \frac{\delta}{L}\left(e^{L|x - \xi|} - 1\right) \tag{6}$$

holds in J. Here J is an arbitrary interval with $\xi \in J$.

In the estimation theorem, we set $\omega(x, z) = Lz$ and use the second version (with "$\le$"). All four assumptions are satisfied if $\rho(x)$ is the solution of

$$\rho' = \delta + L\rho \quad \text{in } J, \qquad \rho(\xi) = \gamma.$$

This leads to the bound in (6) for $x \ge \xi$. The case $x < \xi$ can be reduced to the estimation theorem in exactly the same way by a reflection about the point $\xi$. ∎
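For the record, the elementary computation behind (6): the comparison problem above is solved explicitly by

$$\rho(x) = \gamma\, e^{L(x - \xi)} + \frac{\delta}{L}\bigl(e^{L(x - \xi)} - 1\bigr), \qquad x \ge \xi,$$

since $\rho(\xi) = \gamma$ and $\rho' = L\gamma\, e^{L(x-\xi)} + \delta\, e^{L(x-\xi)} = \delta + L\rho$; this is exactly the right-hand side of (6) for $x \ge \xi$.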

If $\delta = \gamma = 0$ in (5), then the estimate (6) implies that $y(x) \equiv z(x)$. Thus (6) contains the uniqueness result proved earlier in the case where the right-hand side satisfies a Lipschitz condition. However, it also includes significantly more, namely a

VI. Theorem on Continuous Dependence.

Let J be a compact interval with $\xi \in J$ and let the function $y = y_0(x)$ be a solution of the initial value problem

$$y' = f(x, y) \quad \text{in } J, \qquad y(\xi) = \eta. \tag{3}$$

The $\alpha$-neighborhood ($\alpha > 0$) of graph $y_0$ (by definition, the set of all points (x, y) with $x \in J$, $|y - y_0(x)| \le \alpha$) will be denoted by $S_\alpha$. Suppose there exists an $\alpha > 0$ such that f(x, y) is continuous and satisfies the Lipschitz condition (4) in $S_\alpha$. Then the solution $y_0(x)$ depends continuously on the initial value and on the right-hand side f. In other words: for every $\varepsilon > 0$, there exists $\delta > 0$ such that if g is continuous in $S_\alpha$ and the inequalities

$$|g(x, y) - f(x, y)| \le \delta \ \text{ in } S_\alpha, \qquad |\zeta - \eta| \le \delta \tag{7}$$

are satisfied, then every solution z(x) of the "perturbed" initial value problem

$$z' = g(x, z), \qquad z(\xi) = \zeta \tag{8}$$

exists in all of J and satisfies the inequality

$$|z(x) - y_0(x)| < \varepsilon \quad \text{in } J. \tag{9}$$

Proof. Let z(x) satisfy (7) and (8). As long as the curve z(x) remains in $S_\alpha$, (5) is satisfied for $y_0(x)$ and z(x) with $\gamma = \delta$. Thus (6) holds with $\gamma = \delta$. If $\gamma = \delta$ is chosen sufficiently small in (6), then it is easy to see that the right side of (6) is $\le \alpha/2$. As long as z(x) remains in $S_\alpha$, i.e., as long as $|y_0(x) - z(x)| \le \alpha$, the estimate (6) holds and hence, as a matter of fact, $|y_0(x) - z(x)| \le \alpha/2$. From here one sees immediately that the curve z(x) cannot leave the neighborhood $S_\alpha$. Thus the estimate (6) with $\gamma = \delta$ holds in all of J. Therefore condition (9) is easily satisfied (for an arbitrarily given $\varepsilon > 0$); one has simply to take $\gamma = \delta$ so small that the right-hand side of (6) is $< \varepsilon$. ∎

Remarks. 1. The theorem applies, in particular, if D is open and f and $\partial f/\partial y$ are in $C^0(D)$. Indeed, if $y_0$ is a solution on a compact interval J, then there exists an $\alpha > 0$ such that $S_\alpha \subset D$, and furthermore, a Lipschitz condition holds because of the continuity of the derivatives of f.

2. The theorem applies to the case where $|Pz| = |z' - f(x, z)| \le \delta$, because z is a solution of (8) with $g(x, y) := f(x, y) + (Pz)(x)$.
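A small numerical illustration of Theorem VI (our example, not the book's; the problem $y' = -y + \sin x$, the perturbation size, and the helper rk4_solve are chosen for the demonstration). The right-hand side is Lipschitz in y with L = 1, so the right-hand side of (6) with $\gamma = \delta$ bounds the deviation of the perturbed solution:

```python
import numpy as np

def rk4_solve(F, x0, y0, x1, steps=2000):
    """Solve the scalar problem y' = F(x, y), y(x0) = y0, on a grid over [x0, x1]."""
    xs = np.linspace(x0, x1, steps + 1)
    ys = np.empty(steps + 1)
    ys[0] = y0
    h = (x1 - x0) / steps
    for i in range(steps):
        x, y = xs[i], ys[i]
        k1 = F(x, y)
        k2 = F(x + h / 2, y + h / 2 * k1)
        k3 = F(x + h / 2, y + h / 2 * k2)
        k4 = F(x + h, y + h * k3)
        ys[i + 1] = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return xs, ys

# Unperturbed problem: y' = f(x, y) = -y + sin x, y(0) = 1; Lipschitz constant L = 1.
f = lambda x, y: -y + np.sin(x)
delta = 1e-3
g = lambda x, y: f(x, y) + delta      # perturbed right-hand side: |g - f| <= delta
eta, zeta = 1.0, 1.0 + delta          # perturbed initial value:    |zeta - eta| <= delta

xs, y0s = rk4_solve(f, 0.0, eta, 2.0)
_, zs = rk4_solve(g, 0.0, zeta, 2.0)

L = 1.0
bound = delta * np.exp(L * xs) + (delta / L) * (np.exp(L * xs) - 1.0)  # (6), gamma = delta
dev = np.abs(zs - y0s)
print(dev.max())                   # observed deviation (about delta in this example)
print(bool(np.all(dev <= bound)))  # the bound (6) holds: True
```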

Supplement: General Uniqueness and Dependence Theorems

The Lipschitz condition in Theorem VI can be replaced with a significantly weaker uniqueness condition.

VII. Dependence and Uniqueness Theorem.

Let the real-valued function $\omega(x, z)$ be defined for $x \in J := [\xi, \xi + a]$ ($a > 0$), $z \ge 0$, and have the property:

(U) For every $\varepsilon > 0$, there exist $\delta > 0$ and a function $\rho(x)$ such that

$$\rho' > \delta + \omega(x, \rho) \quad \text{and} \quad 0 < \delta < \rho(x) < \varepsilon \quad \text{in } J.$$

If f satisfies the estimate

$$|f(x, y_1) - f(x, y_2)| \le \omega(x, |y_1 - y_2|) \tag{10}$$

in $D \subset J \times \mathbb{R}^n$, respectively $J \times \mathbb{C}^n$, then the initial value problem (3) has at most one solution. The solution depends continuously on the initial value $\eta$ and on f in the sense described in Theorem VI.

Proof. If y is a solution of (3) and $\varepsilon > 0$ is given, we determine $\rho(x)$ and $\delta$ according to (U). If z satisfies (7), (8), then it follows immediately from Theorem III that $|y(x) - z(x)| < \rho(x) < \varepsilon$. ∎

VIII. Examples of Well-Posedness. The theorem states in particular that if f is continuous and the function $\omega$ satisfies (U), then an estimate of the form (10) gives a condition for well-posedness of problem (3). Some examples of functions that satisfy (U) are:

(a) The Lipschitz Condition (R. Lipschitz 1876): $\omega(x, z) = Lz$ (a direct check that this satisfies (U) is sketched after this list).

(b) Osgood's Condition (1898): $\omega(x, z) = q(z)$, where $q \in C[0, \infty)$, $q(0) = 0$, $q(z) > 0$ for $z > 0$, and

$$\int_0^1 \frac{dz}{q(z)} = \infty.$$

(c) Bompiani's Condition (1925): Let the function $\omega(x, z)$ be continuous and $\ge 0$ for $x \in J$, $z \ge 0$. Let $\omega(x, 0) = 0$ and suppose the following condition is satisfied:

If $\phi(x) \ge 0$ is a solution of the initial value problem

$$\phi' = \omega(x, \phi) \quad \text{in } J_1 := [\xi, \xi + a), \qquad \phi(\xi) = 0,$$

then $\phi \equiv 0$ in $J_1$.

(d) Krasnosel'skii–Krein Condition (1956):

$$\omega(x, z) = \min\left\{ \frac{k\,z}{x - \xi},\ C z^{\alpha} \right\} \quad \text{for } x > \xi,\ z \ge 0,$$

with $0 < \alpha < 1$, $0 < k(1 - \alpha) < 1$, $C > 0$.
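For orientation, here is the direct check (ours) that the Lipschitz case (a) has property (U), as announced above. Given $\varepsilon > 0$, choose $\delta > 0$ so small that $2\delta\, e^{(L+1)a} < \varepsilon$ and set $\rho(x) = 2\delta\, e^{(L+1)(x - \xi)}$. Then in J

$$\rho' = (L + 1)\rho = L\rho + \rho \ge L\rho + 2\delta > \delta + L\rho = \delta + \omega(x, \rho),$$

and $0 < \delta < 2\delta \le \rho(x) < \varepsilon$, so (U) is satisfied.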

Example (a) is clearly a special case of (b). By Theorem 1.VIII, (b) is a special case of (c). To show that a function that satisfies (c) also satisfies (U), we modify $\omega$ for $z \ge 1$ by setting $\omega(x, z) = \omega(x, 1)$ for $z \ge 1$. Then $\omega$ is bounded.

Let $\rho_n(x)$ be a solution of the initial value problem

$$\rho' = \frac{1}{n} + \omega(x, \rho) \quad \text{in } J, \qquad \rho(\xi) = \frac{1}{n}.$$

Since $\omega$ is bounded, $\rho_n$ exists in all of J. By Theorem 9.III, the sequence $(\rho_n)$ is monotone decreasing. Therefore $\phi(x) = \lim \rho_n(x)$ exists. Furthermore, the sequence $(\rho_n)$ is equicontinuous, and hence the convergence is uniform (this follows from the boundedness of $\rho_n'$ and 7.III). Representing the initial value problem for $\rho_n$ as an integral equation and taking the limit as $n \to \infty$ gives

$$\phi(x) = \int_\xi^x \omega(t, \phi(t))\, dt.$$

From (c) it follows that $\phi \equiv 0$ in every interval $J_1 = [\xi, \xi + a_1] \subset J$ in which $\phi < 1$ (note that $\omega$ was modified, but only for $z \ge 1$). It follows easily that $\phi \equiv 0$ in J. Thus $(\rho_n)$ converges uniformly to 0 in J. Therefore, for every $\varepsilon > 0$, there exist $\delta = 1/(2n)$ and $\rho = \rho_n$ such that (U) is satisfied.

Example (d) is also a special case of (c). However, the verification is somewhat more difficult; cf. Walter (1970; p. 108).

Remark. More general uniqueness conditions of Nagumo, Kamke, and others can be found in the literature. Their importance, however, is limited by the fact, proved first by Olech (1960), that a continuous function f that satisfies such a general condition also satisfies the condition in Theorem VII. References to the literature and historical remarks are contained in Walter (1970; see, in particular, §14).

§13. Dependence of Solutions on Initial Values and Parameters

In this section the problem of the dependence of the solution of an initial value problem on the data is investigated further. At the same time, the structure of the problem is generalized in two directions. First, we consider the case where the right-hand side of the differential equation depends on a parameter $\lambda$; that is, $f = f(x, y; \lambda)$. Second, we consider
