
Electronic Journal of Probability

Vol. 14 (2009), Paper no. 69, pages 2011–2037. Journal URL

http://www.math.washington.edu/~ejpecp/

Wiener Process with Reflection in Non-Smooth Narrow Tubes

Konstantinos Spiliopoulos

Division of Applied Mathematics

Brown University, 182 George Street, Providence, RI 02912, USA

[email protected]

Abstract

Wiener process with instantaneous reflection in narrow tubes of width ε ≪ 1 around the x-axis is considered in this paper. The tube is assumed to be (asymptotically) non-smooth in the following sense. Let V^ε(x) be the volume of the cross-section of the tube. We assume that (1/ε)V^ε(x) converges in an appropriate sense to a non-smooth function as ε ↓ 0. This limiting function can be composed of smooth functions, step functions and also the Dirac delta distribution. Under this assumption we prove that the x-component of the Wiener process converges weakly to a Markov process that behaves like a standard diffusion process away from the points of discontinuity and has to satisfy certain gluing conditions at the points of discontinuity.

Key words: Narrow Tubes, Wiener Process, Reflection, Non-smooth Boundary, Gluing Conditions, Delay.

AMS 2000 Subject Classification: Primary 60J60, 60J99, 37A50.

1 Introduction

For each x ∈ R and 0 < ε ≪ 1, let D^ε_x be a bounded interval in R that contains 0. To be more specific, let D^ε_x = [−V^{l,ε}(x), V^{u,ε}(x)], where V^{l,ε}(x), V^{u,ε}(x) are sufficiently smooth, nonnegative functions, at least one of which is strictly positive. Consider the state space D^ε = {(x,y) : x ∈ R, y ∈ D^ε_x} ⊂ R². Assume that the boundary ∂D^ε of D^ε is smooth enough and denote by γ^ε(x,y) the inward unit normal to ∂D^ε. Assume that γ^ε(x,y) is not parallel to the x-axis.

Denote by V^ε(x) = V^{l,ε}(x) + V^{u,ε}(x) the length of the cross-section D^ε_x of the stripe. We assume that D^ε is a narrow stripe for 0 < ε ≪ 1, i.e. V^ε(x) ↓ 0 as ε ↓ 0. In addition, we assume that (1/ε)V^ε(x) converges in an appropriate sense to a non-smooth function, V(x), as ε ↓ 0. The limiting function can be composed, for example, of smooth functions, step functions and also the Dirac delta distribution. Next, we state the problem and we rigorously introduce the assumptions on V^ε(x) and V(x). At the end of this introduction we formulate the main result.

Consider the Wiener process (X^ε_t, Y^ε_t) in D^ε with instantaneous normal reflection on the boundary of D^ε. Its trajectories can be described by the stochastic differential equations:

X^ε_t = x + W^1_t + ∫_0^t γ^ε_1(X^ε_s, Y^ε_s) dL^ε_s
Y^ε_t = y + W^2_t + ∫_0^t γ^ε_2(X^ε_s, Y^ε_s) dL^ε_s.   (1)

Here W^1_t and W^2_t are independent Wiener processes in R and (x,y) is a point inside D^ε; γ^ε_1 and γ^ε_2 are the projections of the unit inward normal vector to ∂D^ε on the x and y axes respectively. Furthermore, L^ε_t is the local time for the process (X^ε_t, Y^ε_t) on ∂D^ε, i.e. it is a continuous, non-decreasing process that increases only when (X^ε_t, Y^ε_t) ∈ ∂D^ε, so that the Lebesgue measure Λ{t > 0 : (X^ε_t, Y^ε_t) ∈ ∂D^ε} = 0 (e.g. see [12]).
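The reflected dynamics (1) can be explored numerically. The following is a minimal sketch of an Euler scheme with mirror reflection; the boundary functions V^{l,ε}(x) = V^{u,ε}(x) = (ε/2)(1 + x²) are hypothetical choices for illustration, not taken from the paper. For a narrow tube, mirroring y back into [−V^{l,ε}(x), V^{u,ε}(x)] approximates the instantaneous normal reflection.

```python
import numpy as np

# Hypothetical boundary functions (illustration only, not from the paper).
def v_lower(x, eps):  # V^{l,eps}(x)
    return 0.5 * eps * (1.0 + x**2)

def v_upper(x, eps):  # V^{u,eps}(x)
    return 0.5 * eps * (1.0 + x**2)

def simulate(x0=0.0, y0=0.0, eps=0.1, T=1.0, n=20000, seed=0):
    """Euler step for the driving Wiener processes (W^1, W^2); the normal
    reflection is approximated by mirroring y back into the cross-section."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x, y = x0, y0
    xs = np.empty(n + 1)
    ys = np.empty(n + 1)
    xs[0], ys[0] = x, y
    for k in range(1, n + 1):
        dW = rng.normal(0.0, np.sqrt(dt), size=2)
        x += dW[0]
        y += dW[1]
        lo, hi = -v_lower(x, eps), v_upper(x, eps)
        # mirror reflection; for a narrow tube a few iterations suffice
        for _ in range(10):
            if y > hi:
                y = 2 * hi - y
            elif y < lo:
                y = 2 * lo - y
            else:
                break
        y = min(max(y, lo), hi)  # final clamp keeps the point in the tube
        xs[k], ys[k] = x, y
    return xs, ys

xs, ys = simulate()
inside = all(-v_lower(x, 0.1) - 1e-12 <= y <= v_upper(x, 0.1) + 1e-12
             for x, y in zip(xs, ys))
```

The y-component stays trapped in a band of width of order ε while the x-component diffuses, which is the mechanism behind the one-dimensional limit studied below.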

Our goal is to study the weak convergence of the x-component of the solution to (1) as ε ↓ 0. The y-component clearly converges to 0 as ε ↓ 0. The problem for narrow stripes with a smooth boundary was considered in [4] and in [5]. There, the authors consider the case (1/ε)V^ε(x) = V(x), where V(x) is a smooth function. It is proven that X^ε_t converges to a standard diffusion process X_t, as ε ↓ 0. More precisely, it is shown that for any T > 0

sup_{0≤t≤T} E_x |X^ε_t − X_t|² → 0 as ε → 0,   (2)

where X_t is the solution of the stochastic differential equation

X_t = x + W^1_t + ∫_0^t (1/2) (V_x(X_s)/V(X_s)) ds   (3)

and V_x(x) = dV(x)/dx.

Here, in contrast, the limiting process can have points where the scale function is not differentiable (skew diffusion) and also points with positive speed measure (points with delay).

For any ε > 0, we introduce the functions

u^ε(x) := ∫_0^x (2ε/V^ε(y)) dy   and   v^ε(x) := ∫_0^x (V^ε(y)/ε) dy.   (4)
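The functions in (4) can be checked numerically. The sketch below uses the hypothetical smooth choice V^ε(x) = ε(1 + x²) (not from the paper), for which u^ε(x) = 2 arctan x and v^ε(x) = x + x³/3, independently of ε.

```python
import math

# Trapezoidal quadrature for the definitions (4):
#   u^eps(x) = int_0^x 2*eps / V^eps(y) dy,
#   v^eps(x) = int_0^x V^eps(y) / eps dy.
def u_eps(x, eps, V, n=4000):
    h = x / n
    s = 0.5 * (2 * eps / V(0.0, eps) + 2 * eps / V(x, eps))
    for k in range(1, n):
        s += 2 * eps / V(k * h, eps)
    return s * h

def v_eps(x, eps, V, n=4000):
    h = x / n
    s = 0.5 * (V(0.0, eps) / eps + V(x, eps) / eps)
    for k in range(1, n):
        s += V(k * h, eps) / eps
    return s * h

# Hypothetical smooth cross-section: V^eps(x) = eps * (1 + x^2).
V = lambda x, eps: eps * (1.0 + x * x)
```

For this choice the ε-dependence cancels exactly, illustrating why the rescaled quantities in (4) are the right objects to pass to the limit.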

Now we are in position to describe the limiting behavior of (1/ε)V^ε(x).

1. We assume that V^{l,ε}, V^{u,ε} ∈ C³(R) for every fixed ε > 0 and that V^ε(x) ↓ 0 as ε ↓ 0 (in particular V⁰(x) = 0). Moreover, there exists a universal positive constant ζ such that

(1/ε)V^ε(x) > ζ > 0 for every x ∈ R and for every ε > 0.   (5)

2. We assume that the functions

u(x) := lim_{ε↓0} u^ε(x), x ∈ R,
v(x) := lim_{ε↓0} v^ε(x), x ∈ R \ {0},   (6)

are well defined; the limiting function u(x) is continuous and strictly increasing, whereas the limiting function v(x) is right continuous and strictly increasing. In general, the function u(x) can have countably many points where it is not differentiable and the function v(x) can have countably many points where it is not continuous or not differentiable. However, here we assume for brevity that the only non-smoothness point is x = 0. In other words, we assume that for x ∈ R \ {0}

V(x) = ∂V^ε(x)/∂ε |_{ε=0} > 0,   (7)

and that the function V(x) is smooth for x ∈ R \ {0}.

In addition, we assume that the first three derivatives of V^{l,ε}(x) and V^{u,ε}(x) (and in consequence of V^ε(x) as well) behave nicely for |x| > 0 and for ε small. In particular, we assume that for any connected subset K of R that is away from an arbitrarily small neighborhood of x = 0 and for ε sufficiently small

|V^{l,ε}_x(x)| + |V^{l,ε}_{xx}(x)| + |V^{l,ε}_{xxx}(x)| + |V^{u,ε}_x(x)| + |V^{u,ε}_{xx}(x)| + |V^{u,ε}_{xxx}(x)| ≤ C₀ε   (8)

uniformly in x ∈ K. Here, C₀ is a constant.

After the proof of the main theorem (at the end of Section 2), we mention the result for the case where there is more than one non-smoothness point.

3. Let g^ε(x) be a smooth function and let us define the quantity

ξ^ε(g^ε) := sup_{|x|≤1} [ |(1/ε)[g^ε_x(x)]³| + |g^ε_x(x) g^ε_{xx}(x)| + |ε g^ε_{xxx}(x)| ].

We assume the following growth condition:

ξ^ε := ξ^ε(V^{l,ε}) + ξ^ε(V^{u,ε}) ↓ 0, as ε ↓ 0.   (9)

Remark 1.1. Condition (9), i.e. ξ^ε ↓ 0, basically says that the behavior of V^{l,ε}(x) and V^{u,ε}(x) in the neighborhood of x = 0 can be at most as bad as described by ξ^ε(·) for ε small. This condition will be used in the proof of Lemma 2.4 in Section 4. Lemma 2.4 is essential for the proof of our main result. In particular, it provides us with the estimate of the expectation of the time it takes the solution of (1) to leave the neighborhood of the point 0, as ε ↓ 0. At the present moment, we do not know whether this condition can be improved; this is a subject of further research.

In this paper we prove that under assumptions (5)–(9), the x-component X^ε_t of the process (X^ε_t, Y^ε_t) converges weakly to a one-dimensional strong Markov process, continuous with probability one. It behaves like a standard diffusion process away from 0 and has to satisfy a gluing condition at the point of discontinuity 0 as ε ↓ 0. More precisely, we prove the following theorem:

Theorem 1.2. Let us assume that (5)–(9) hold. Let X be the solution to the martingale problem for

A = {(f, L f) : f ∈ D(A)}   (10)

with

L f(x) = D_v D_u f(x)   (11)

and

D(A) = { f : f ∈ C_c(R), with f_x, f_{xx} ∈ C(R \ {0}),
[u′(0+)]⁻¹ f_x(0+) − [u′(0−)]⁻¹ f_x(0−) = [v(0+) − v(0−)] L f(0)
and L f(0) = lim_{x→0+} L f(x) = lim_{x→0−} L f(x) }.   (12)

Then we have

X^ε_· → X_· weakly in C_{0T}, for any T < ∞, as ε ↓ 0,   (13)

where C_{0T} is the space of continuous functions on [0, T]. □

As proved in Feller [2], the martingale problem for A, (10), has a unique solution X. It is an asymmetric Markov process with delay at the point of discontinuity 0. In particular, the asymmetry is due to the possibility of having u′(0+) ≠ u′(0−) (see Lemma 2.5), whereas the delay is due to the possibility of having v(0+) ≠ v(0−) (see Lemma 2.4).

For the convenience of the reader, we briefly recall the Feller characterization of all one-dimensional Markov processes that are continuous with probability one (for more details see [2]; also [13]). All one-dimensional strong Markov processes that are continuous with probability one can be characterized (under some minimal regularity conditions) by a generalized second order differential operator D_v D_u f with respect to two increasing functions u(x) and v(x); u(x) is continuous, v(x) is right continuous. In addition, D_u and D_v are differentiation operators with respect to u(x) and v(x) respectively, which are defined as follows:

D_u f(x) exists if D_u f(x+) = D_u f(x−), where the left derivative of f with respect to u is defined as follows:

D_u f(x−) = lim_{h↓0} [f(x−h) − f(x)] / [u(x−h) − u(x)].

The right derivative D_u f(x+) is defined similarly. If v is discontinuous at y then

D_v f(y) = lim_{h↓0} [f(y+h) − f(y−h)] / [v(y+h) − v(y−h)].

A more detailed description of these Markov processes can be found in [2] and [13].

Remark 1.3. Notice that if the limit of (1/ε)V^ε(x), as ε ↓ 0, is a smooth function, then the limiting process X described by Theorem 1.2 coincides with the solution to (3).

We conclude the introduction with a useful example. Let us assume that V^ε(x) can be decomposed into three terms

V^ε(x) = V^ε_1(x) + V^ε_2(x) + V^ε_3(x),   (14)

where the functions V^ε_i(x), for i = 1, 2, 3, satisfy the following conditions:

1. There exists a strictly positive, smooth function V_1(x) > 0 such that

(1/ε)V^ε_1(x) → V_1(x), as ε ↓ 0,   (15)

uniformly in x ∈ R.

2. There exists a nonnegative constant β ≥ 0 such that

(1/ε)V^ε_2(x) → β χ_{x>0}, as ε ↓ 0,   (16)

uniformly on every connected subset of R that is away from an arbitrarily small neighborhood of 0, and weakly within a neighborhood of 0. Here χ_A is the indicator function of the set A.

3. We have

(1/ε)V^ε_3(x) → µδ₀(x), as ε ↓ 0,   (17)

in the weak sense. Here µ is a nonnegative constant and δ₀(x) is the Dirac delta distribution at 0.

4. Condition (9) holds.

Let us define α = V_1(0). In this case the operator (11) and its domain of definition (12) for the limiting process X become

L f(x) = (1/2) f_{xx}(x) + (1/2) (d/dx)[ln(V_1(x))] f_x(x),       x < 0,
L f(x) = (1/2) f_{xx}(x) + (1/2) (d/dx)[ln(V_1(x) + β)] f_x(x),   x > 0,   (18)

and

D(A) = { f : f ∈ C_c(R), with f_x, f_{xx} ∈ C(R \ {0}),
[α + β] f_x(0+) − α f_x(0−) = [2µ] L f(0)   (19)
and L f(0) = lim_{x→0+} L f(x) = lim_{x→0−} L f(x) }.
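To see how the gluing condition in (19) follows from the general condition in (12), one can compute the scale and speed data of the example directly from (4); a sketch of the calculation:

```latex
% From (4): u'(x) = 2/V(x) and v'(x) = V(x), while the Dirac term (17)
% contributes the jump of v at the origin:
u'(x) = \frac{2}{V(x)}, \qquad v'(x) = V(x), \qquad v(0+)-v(0-) = \mu .
% Since V(0-) = V_1(0) = \alpha and V(0+) = V_1(0) + \beta = \alpha + \beta,
[u'(0+)]^{-1} = \frac{\alpha+\beta}{2}, \qquad
[u'(0-)]^{-1} = \frac{\alpha}{2}.
% Substituting into the gluing condition of (12),
\frac{\alpha+\beta}{2}\, f_x(0+) - \frac{\alpha}{2}\, f_x(0-) = \mu\, L f(0),
% which, after multiplying by 2, is exactly (19):
[\alpha+\beta]\, f_x(0+) - \alpha\, f_x(0-) = [2\mu]\, L f(0).
```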

For instance, one can take:

1. V^{l,ε}(x) = 0 and V^{u,ε}(x) = V^ε(x) = V^ε_1(x) + V^ε_2(x) + V^ε_3(x), where
2. V^ε_1(x) = εV_1(x), with V_1(x) any smooth, strictly positive function,
3. V^ε_2(x) = εV_2(x/δ), such that V_2(x/δ) → βχ_{x>0}, as ε ↓ 0,
4. V^ε_3(x) = (ε/δ)V_3(x/δ), such that (1/δ)V_3(x/δ) → µδ₀(x), as ε ↓ 0,
5. and with δ = δ(ε) chosen such that ε/δ³ ↓ 0 as ε ↓ 0.

Then it can be easily verified that (15)–(17) and (9) are satisfied. Moreover, in this case, we have µ = ∫_{−∞}^{∞} V_3(x) dx.
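For the construction above, the limiting parameters appearing in Lemmata 2.4 and 2.5 below reduce to arithmetic in α, β and µ. The sample values in this sketch are hypothetical, chosen only to illustrate the formulas.

```python
# Arithmetic sketch of the limiting gluing parameters for the example.
# Sample values for alpha = V_1(0), beta and mu are hypothetical.
alpha, beta, mu = 1.0, 0.5, 0.25

# From (4): u'(x) = 2/V(x), so [u'(0+)]^{-1} = (alpha + beta)/2 and
# [u'(0-)]^{-1} = alpha/2; the delta term (17) gives v(0+) - v(0-) = mu.
du_plus_inv = (alpha + beta) / 2.0
du_minus_inv = alpha / 2.0
jump_v = mu

denom = du_plus_inv + du_minus_inv
p_plus = du_plus_inv / denom    # limiting prob. of exiting to the right
p_minus = du_minus_inv / denom  # limiting prob. of exiting to the left
theta = jump_v / denom          # delay coefficient of Lemma 2.4
```

A positive β tilts the exit probabilities toward the wider side (p₊ > p₋), while a positive µ produces θ > 0, i.e. delay at the origin.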

In Section 2 we prove our main result, assuming that we have all the needed estimates. After the proof of Theorem 1.2, we state the result in the case that lim_{ε↓0}(1/ε)V^ε(x) has more than one point of discontinuity (Theorem 2.6). In Section 3 we prove relative compactness of X^ε_· (this follows basically from [8]) and we consider what happens outside a small neighborhood of x = 0. In Section 4 we estimate the expectation of the time it takes the solution of (1) to leave the neighborhood of the point 0. The derivation of this estimate uses assumption (9). In Section 5 we (a) prove that the behavior of the process after it reaches x = 0 does not depend on where it came from, and (b) calculate the limiting exit probabilities of (X^ε_t, Y^ε_t), from the left and from the right, of a small neighborhood of x = 0. The derivation of these estimates is composed of two main ingredients. The first one is the characterization of all one-dimensional Markov processes that are continuous with probability one by generalized second order operators, introduced by Feller (see [2]; also [13]). The second one is a result of Khasminskii on invariant measures [11].

Lastly, we would like to mention that one can similarly consider narrow tubes, i.e. y ∈ D^ε_x ⊂ R^n for n > 1, and prove a result similar to Theorem 1.2.

2 Proof of the Main Theorem

Before proving Theorem 1.2 we introduce some notation and formulate the necessary lemmas. The lemmas are proved in sections 3 to 5.

In this and the following sections we will denote by C₀ any unimportant constant that does not depend on any small parameter. The constants may change from place to place, but they will always be denoted by the same C₀.

For any B ⊂ R, we define the Markov time τ(B) = τ^ε_{x,y}(B) to be:

τ^ε_{x,y}(B) = inf{t > 0 : X^{ε,x,y}_t ∉ B}.   (20)

Moreover, for κ > 0, the term τ^ε_{x,y}(±κ) will denote the Markov time τ^ε_{x,y}((−κ,κ)). In addition, E^ε_{x,y} will denote the expected value associated with the probability measure P^ε_{x,y} that is induced by the process (X^{ε,x,y}_t, Y^{ε,x,y}_t).

For the sake of notational convenience we define the operators

L_− f(x) = D_v D_u f(x) for x < 0 and L_+ f(x) = D_v D_u f(x) for x > 0.   (21)

Furthermore, when we write (x,y) ∈ A × B we will mean (x,y) ∈ {(x,y) : x ∈ A, y ∈ B}.

Most of the processes, Markov times and sets that will be mentioned below depend on ε. For notational convenience, however, we shall not always incorporate this dependence into the notation, so the reader should be careful to distinguish between objects that do and do not depend on ε.

Throughout this paper 0 < κ₀ < κ will be small positive constants. We may not always mention the relation between these parameters, but we will always assume it. Moreover, κ_η will denote a small positive number that depends on another small positive number η.

Moreover, one can write down the normal vector γ^ε(x,y) explicitly:

γ^ε(x,y) = (1/√(1+[V^{u,ε}_x(x)]²)) (V^{u,ε}_x(x), −1),  y = V^{u,ε}(x),
γ^ε(x,y) = (1/√(1+[V^{l,ε}_x(x)]²)) (V^{l,ε}_x(x), 1),   y = −V^{l,ε}(x).

Lemma 2.1. For any (x,y) ∈ D^ε, let Q^ε_{x,y} be the family of distributions of X^ε_· in the space C[0,∞) of continuous functions [0,∞) → R that correspond to the probabilities P^ε_{x,y}. Assume that for any (x,y) the family of distributions Q^ε_{x,y} for all ε ∈ (0,1) is tight. Moreover, suppose that for any compact set K ⊂ R, any function f ∈ D(A) and for every λ > 0 we have:

E^ε_{x,y} ∫_0^∞ e^{−λt}[λf(X^ε_t) − L f(X^ε_t)] dt − f(x) → 0,   (22)

as ε ↓ 0, uniformly in (x,y) ∈ K × D^ε_x.

Then the measures Q^ε_{x,y} corresponding to P^ε_{x,y} converge weakly to the probability measure P_x that is induced by X_· as ε ↓ 0. □

Lemma 2.2. The family of distributions Q^ε_{x,y} in the space of continuous functions [0,∞) → R corresponding to P^ε_{x,y} for small nonzero ε is tight. □

Lemma 2.3. Let 0 < x₁ < x₂ be fixed positive numbers and f be a three times continuously differentiable function on [x₁,x₂]. Then for every λ > 0:

E^ε_{x,y} [e^{−λτ^ε(x₁,x₂)} f(X^ε_{τ^ε(x₁,x₂)}) + ∫_0^{τ^ε(x₁,x₂)} e^{−λt}[λf(X^ε_t) − L f(X^ε_t)] dt] → f(x),

as ε ↓ 0, uniformly in (x,y) such that (x,y) ∈ [x₁,x₂] × D^ε_x. The statement holds true for τ^ε(−x₂,−x₁) in place of τ^ε(x₁,x₂) as well.

Lemma 2.4. Define θ = [v(0+) − v(0−)] / ([u′(0+)]⁻¹ + [u′(0−)]⁻¹). For every η > 0 there exists a κ_η > 0 such that for every 0 < κ < κ_η and for sufficiently small ε

|E^ε_{x,y} τ^ε(±κ) − κθ| ≤ κη,

for all (x,y) ∈ [−κ,κ] × D^ε_x. Here, τ^ε(±κ) = τ^ε(−κ,κ) is the exit time from the interval (−κ,κ). □

Lemma 2.5. Define p₊ = [u′(0+)]⁻¹ / ([u′(0+)]⁻¹ + [u′(0−)]⁻¹) and p₋ = [u′(0−)]⁻¹ / ([u′(0+)]⁻¹ + [u′(0−)]⁻¹). For every η > 0 there exists a κ_η > 0 such that for every 0 < κ < κ_η there exists a positive κ₀ = κ₀(κ) such that for sufficiently small ε

|P^ε_{x,y}{X^ε_{τ^ε(±κ)} = κ} − p₊| ≤ η,   |P^ε_{x,y}{X^ε_{τ^ε(±κ)} = −κ} − p₋| ≤ η,

for all (x,y) such that (x,y) ∈ [−κ₀,κ₀] × D^ε_x. □

Proof of Theorem 1.2. We will make use of Lemma 2.1. The tightness required in Lemma 2.1 is the statement of Lemma 2.2. Thus it remains to prove that (22) holds.

Let λ > 0, (x,y) ∈ D^ε and f ∈ D(A) be fixed. In addition, let η > 0 be an arbitrary positive number. Choose 0 < x* < ∞ so that

E^ε_{x,y} e^{−λτ̃^ε} < η / (‖f‖ + λ⁻¹‖λf − L f‖)   (23)

for sufficiently small ε, where τ̃^ε = inf{t > 0 : |X^ε_t| ≥ x*}. It is Lemma 2.2 that makes such a choice possible. We assume that x* > |x|.

To prove (22) it is enough to show that for every η > 0 there exists an ε₀ > 0, independent of (x,y), such that for every 0 < ε < ε₀:

|E^ε_{x,y}[e^{−λτ̃} f(X^ε_{τ̃}) − f(x) + ∫_0^{τ̃} e^{−λt}[λf(X^ε_t) − L f(X^ε_t)] dt]| < η.   (24)

Choose ε small and 0 < κ₀ < κ small positive numbers. We consider two cycles of Markov times {σ_n} and {τ_n} such that:

0 = σ₀ ≤ τ₁ ≤ σ₁ ≤ τ₂ ≤ …

where:

τ_n = τ̃ ∧ inf{t > σ_{n−1} : |X^ε_t| ≥ κ}
σ_n = τ̃ ∧ inf{t > τ_n : |X^ε_t| ∈ {κ₀, x*}}   (25)

In Figure 1 we see a trajectory of the process Z^ε_t = (X^ε_t, Y^ε_t) along with its associated Markov chains {Z^ε_{σ_n}} and {Z^ε_{τ_n}}.

Figure 1: z = (x,y) is the initial point and Z^ε_t = (X^ε_t, Y^ε_t).

We denote by χ_A the indicator function of the set A. The difference in (24) can be represented as the sum over the time intervals from σ_n to τ_{n+1} and from τ_n to σ_n. The formally infinite sums are finite for every trajectory for which τ̃ < ∞. Assuming that we can write the expectation of the infinite sums as the infinite sum of the expectations, the latter equality becomes

Indeed, by the Markov property we have:

However, we need to know how the sums in (29) behave in terms of κ. To this end we apply Lemma 2.3 to the function g that is the solution to

λg − L_± g = 0 in x ∈ (±κ₀, ±x*),
g(±κ₀) = 1,
g(±x*) = 0.   (30)

By Lemma 2.3 we know that g(x) approximates φ^ε_1(x,y) for |x| ∈ [κ₀, x*] as ε ↓ 0. The idea is to bound φ^ε_1(x,y) using g(x). It follows from (30) (for more details see the related discussion in Section 8.3 of [6], page 306) that there exists a positive constant C₀ that is independent of ε and a positive constant κ′ such that for every κ < κ′ and for all κ₀ < κ₀(κ) we have g(±κ) ≤ 1 − C₀κ.

So we conclude for ε and κ sufficiently small that

By the strong Markov property with respect to the Markov times τ_n and σ_n, equality (27) becomes

Because of (31), equality (32) becomes

|E^ε_{x,y}[e^{−λτ̃} f(X^ε_{τ̃}) − f(x) + ∫_0^{τ̃} e^{−λt}[λf(X^ε_t) − L f(X^ε_t)] dt]| ≤

≤ |φ^ε_2(x,y)| + (C₀/κ) [ max_{|x|=κ₀, y∈D^ε_x} |φ^ε_2(x,y)| + max_{|x|=κ, y∈D^ε_x} |φ^ε_3(x,y)| ]   (35)

By Lemma 2.3 we get that max_{|x|=κ, y∈D^ε_x} |φ^ε_3(x,y)| is arbitrarily small for sufficiently small ε, so

(C₀/κ) max_{|x|=κ, y∈D^ε_x} |φ^ε_3(x,y)| ≤ η/3.   (36)

Therefore, it remains to consider the terms |φ^ε_2(x,y)|, where (x,y) is the initial point, and (1/κ) max_{|x|=κ₀, y∈D^ε_x} |φ^ε_2(x,y)|.

Firstly, we consider the term |φ^ε_2(x,y)|, where (x,y) is the initial point. Clearly, if |x| > κ, then Lemma 2.3 implies that |φ^ε_2(x,y)| is arbitrarily small for sufficiently small ε, so

|φ^ε_2(x,y)| ≤ η/3.   (37)

We consider now the case |x| ≤ κ. Clearly, in this case Lemma 2.3 does not apply. However, one can use the continuity of f and Lemma 2.4, as the following calculations show. We have:

|φ^ε_2(x,y)| ≤ E^ε_{x,y}|f(X^ε_{τ₁}) − f(x)| + [λ‖f‖ + ‖λf − L f‖] E^ε_{x,y} ∫_0^{τ₁} e^{−λt} dt.

Choose now a positive κ′ so that

|f(x) − f(0)| < η/6, for all |x| ≤ κ′,

and such that

E^ε_{x,y} ∫_0^{τ₁} e^{−λt} dt ≤ η / (6[λ‖f‖ + ‖λf − L f‖])

for sufficiently small ε and for all |x| ≤ κ′. Therefore, for κ ≤ κ′ and for sufficiently small ε we have

|φ^ε_2(x,y)| ≤ η/3, for all (x,y) ∈ [−κ,κ] × D^ε_x.   (38)

Secondly, we consider the term (1/κ) max_{|x|=κ₀, y∈D^ε_x} |φ^ε_2(x,y)|. Here, we need a sharper estimate because of the factor 1/κ. We will prove that for (x,y) ∈ {±κ₀} × D^ε_{±κ₀} and for ε sufficiently small

|φ^ε_2(x,y)| ≤ κη/C₀.   (39)

For (x,y) ∈ {±κ₀} × D^ε_{±κ₀} we have

|φ^ε_2(x,y)| = |E^ε_{x,y}[e^{−λτ₁} f(X^ε_{τ₁}) − f(x) + ∫_0^{τ₁} e^{−λt}[λf(X^ε_t) − L f(X^ε_t)] dt]|
≤ |E^ε_{x,y}[f(X^ε_{τ₁}) − f(x) − κθ L f(0)]| +
+ |E^ε_{x,y}[τ₁ L f(0) − ∫_0^{τ₁} e^{−λt} L f(X^ε_t) dt]| +
+ |E^ε_{x,y}[κθ L f(0) − τ₁ L f(0)]| +
+ |E^ε_{x,y}[e^{−λτ₁} f(X^ε_{τ₁}) − f(X^ε_{τ₁}) + ∫_0^{τ₁} e^{−λt} λf(X^ε_t) dt]|,   (40)

where θ = [v(0+) − v(0−)] / ([u′(0+)]⁻¹ + [u′(0−)]⁻¹).

Since the one-sided derivatives of f exist, we may choose a positive κ_η such that for every 0 < κ₁ ≤ κ_η

|[f(w) − f(0)]/w − f_x(0+)|, |[f(−w) − f(0)]/(−w) − f_x(0−)| ≤ η/C₀,   (41)

for all w ∈ (0, κ₁).

Furthermore, by Lemma 2.5 we can choose for sufficiently small κ₂ > 0 a κ₀(κ₂) ∈ (0, κ₂) such that for sufficiently small ε

|P^ε_{x,y}{X^ε_{τ^ε_1(±κ₂)} = ±κ₂} − p_±| ≤ η/C₀   (42)

for all (x,y) such that (x,y) ∈ [−κ₀, κ₀] × D^ε_x.

In addition, by Lemma 2.4 we can choose for sufficiently small κ_η > 0 a κ₃ ∈ (0, κ_η) such that for sufficiently small ε

|E^ε_{x,y} τ^ε_1(±κ₃) − κ₃θ| ≤ κ₃ η/C₀   (43)

for all (x,y) ∈ [−κ₃, κ₃] × D^ε_x.

Choose now 0 < κ ≤ min{κ₁, κ₂, κ₃} and 0 < κ₀ < min{κ₀(κ₂), κ}. For sufficiently small ε and for all (x,y) ∈ {±κ₀} × D^ε_{±κ₀} we have

|E^ε_{x,y} f(X^ε_{τ₁}) − f(x) − κθ L f(0)| ≤
≤ |p₊[f(κ) − f(0)] + p₋[f(−κ) − f(0)] − κθ L f(0)| +
+ |P^ε_{x,y}{X^ε_{τ^ε_1(±κ)} = κ} − p₊| |f(κ) − f(0)| +
+ |P^ε_{x,y}{X^ε_{τ^ε_1(±κ)} = −κ} − p₋| |f(−κ) − f(0)| +
+ |f(0) − f(x)|   (44)

Because of (41) and the gluing condition p₊ f_x(0+) − p₋ f_x(0−) = θ L f(0), the first summand on the right hand side of (44) satisfies

|p₊[f(κ) − f(0)] + p₋[f(−κ) − f(0)] − κθ L f(0)| ≤
≤ |p₊ κ f_x(0+) − p₋ κ f_x(0−) − κθ L f(0)| + κη/C₀ = κη/C₀.   (45)

Moreover, for small enough x ∈ {±κ₀, ±κ} we also have that

|f(x) − f(0)| ≤ |x||f_x(0±)| + |x|η.   (46)

The latter together with (42) imply that for sufficiently small ε the second summand on the right hand side of (44) satisfies

|P^ε_{x,y}{X^ε_{τ^ε_1(±κ)} = κ} − p₊| |f(κ) − f(0)| ≤ κη/C₀.   (47)

A similar expression holds for the third summand on the right hand side of (44) as well. Therefore (45)–(47) and the fact that we take κ₀ much smaller than κ imply that for all (x,y) ∈ {±κ₀} × D^ε_{±κ₀} and for ε sufficiently small, we have

|E^ε_{x,y} f(X^ε_{τ₁}) − f(x) − κθ L f(0)| ≤ κη/C₀.   (48)

The second term on the right hand side of (40) can also be bounded by κη/C₀ for κ and ε sufficiently small, as the following calculations show. For (x,y) ∈ {±κ₀} × D^ε_{±κ₀} we have

|E^ε_{x,y}[τ₁ L f(0) − ∫_0^{τ₁} e^{−λt} L f(X^ε_t) dt]| ≤
≤ |L f(0)| |E^ε_{x,y}[τ₁ − ∫_0^{τ₁} e^{−λt} dt]| + sup_{|x|≤κ} |L f(x) − L f(0)| E^ε_{x,y} τ₁ ≤   (49)
≤ λ|L f(0)| ( sup_{(x,y)∈{±κ₀}×D^ε_{±κ₀}} E^ε_{x,y} τ₁ ) E^ε_{x,y} τ₁ + sup_{|x|≤κ} |L f(x) − L f(0)| E^ε_{x,y} τ₁.

Therefore, Lemma 2.4 (in particular (43)) and the continuity of the function L f give us for κ and ε sufficiently small that

|E^ε_{x,y}[τ₁ L f(0) − ∫_0^{τ₁} e^{−λt} L f(X^ε_t) dt]| ≤ κη/C₀.   (50)

The third term on the right hand side of (40) is clearly bounded by κη/C₀ for ε sufficiently small by Lemma 2.4. As far as the fourth term on the right hand side of (40) is concerned, one can use the continuity of f together with Lemma 2.4.

The latter, (48), (50) and (40) finally give us that

(1/κ) max_{|x|=κ₀, y∈D^ε_x} |φ^ε_2(x,y)| ≤ η/(3C₀).   (51)

Of course, the constants C₀ that appear in the relations above are not the same, but for notational convenience they are all denoted by the same symbol C₀.

So, we finally get by (51), (36), (37), (38) and (35) that

|E^ε_{x,y}[e^{−λτ̃} f(X^ε_{τ̃}) − f(x) + ∫_0^{τ̃} e^{−λt}[λf(X^ε_t) − L f(X^ε_t)] dt]| ≤ η.   (52)

In case lim_{ε↓0}(1/ε)V^ε(x) has more than one point of discontinuity, one can similarly prove the following theorem. Hence, the limiting Markov process X may be asymmetric at some point x₁, have delay at some other point x₂, or have both irregularities at another point x₃.

Theorem 2.6. Let us assume that (1/ε)V^ε(x) has a finite number of discontinuities, as described by (5)–(9), at x_i for i ∈ {1,…,m}. Let X be the solution to the martingale problem for

A = {(f, L f) : f ∈ D(A)}

with

L f(x) = D_v D_u f(x)

and

D(A) = { f : f ∈ C_c(R), with f_x, f_{xx} ∈ C(R \ {x₁,…,x_m}),
[u′(x_i+)]⁻¹ f_x(x_i+) − [u′(x_i−)]⁻¹ f_x(x_i−) = [v(x_i+) − v(x_i−)] L f(x_i)
and L f(x_i) = lim_{x→x_i+} L f(x) = lim_{x→x_i−} L f(x) for i ∈ {1,…,m} }.

Then we have

X^ε_· → X_· weakly in C_{0T}, for any T < ∞, as ε ↓ 0. □

3 Proof of Lemmata 2.1, 2.2 and 2.3

Proof of Lemma 2.1. The proof is very similar to the proof of Lemma 8.3.1 in [6], so it will not be repeated here.

Proof of Lemma 2.2. The tool that is used to establish tightness of P^ε is the martingale-problem approach of Stroock–Varadhan [15]. In particular, we can apply Theorem 2.1 of [8]. The proof is almost identical to the part of the proof of Theorem 6.1 in [8] where pre-compactness is proven for the Wiener process with reflection in narrow branching tubes.

Before proving Lemma 2.3 we introduce the following diffusion process. Let X̂^ε_t be the one-dimensional process that is the solution to:

X̂^ε_t = x + W^1_t + ∫_0^t (1/2) (V^ε_x(X̂^ε_s)/V^ε(X̂^ε_s)) ds,   (53)

where V^ε_x(x) = dV^ε(x)/dx. The process X̂^ε is the solution to the martingale problem for Â^ε = {(f, L̂^ε f) : f ∈ D(Â^ε)} with

L̂^ε = (1/2) d²/dx² + (1/2) (V^ε_x(·)/V^ε(·)) d/dx.   (54)

A simple calculation shows that

L̂^ε f(x) = D_{v^ε} D_{u^ε} f(x),

where the u^ε(x) and v^ε(x) functions are defined by (4). This representation of u^ε(x) and v^ε(x) is unique up to multiplicative and additive constants: one can multiply one of these functions by some constant and divide the other by the same constant, or add a constant to either of them.
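The simple calculation can be sketched as follows, using the definitions (4):

```latex
D_{u^\varepsilon} f(x)
  = \frac{f'(x)}{(u^\varepsilon)'(x)}
  = \frac{V^\varepsilon(x)}{2\varepsilon}\, f'(x),
\qquad
D_{v^\varepsilon}\bigl(D_{u^\varepsilon} f\bigr)(x)
  = \frac{1}{(v^\varepsilon)'(x)}
    \frac{d}{dx}\Bigl[\frac{V^\varepsilon(x)}{2\varepsilon} f'(x)\Bigr]
  = \frac{\varepsilon}{V^\varepsilon(x)} \cdot
    \frac{V^\varepsilon(x) f''(x) + V^\varepsilon_x(x) f'(x)}{2\varepsilon}
  = \frac12 f''(x)
    + \frac12 \frac{V^\varepsilon_x(x)}{V^\varepsilon(x)} f'(x),
```

which is exactly the operator L̂^ε in (54).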

Using the results in [9] one can show (see Theorem 4.4 in [10]) that

X̂^ε_· → X_· weakly in C_{0T}, for any T < ∞, as ε ↓ 0,   (56)

where X is the limiting process with operator defined by (10).

Proof of Lemma 2.3. We prove the lemma just for x ∈ [x₁,x₂]. Clearly, the proof for x ∈ [−x₂,−x₁] is the same.

We claim that it is sufficient to prove that

|E^ε_{x,y}[e^{−λτ^ε(x₁,x₂)} f(X^ε_{τ^ε(x₁,x₂)}) + ∫_0^{τ^ε(x₁,x₂)} e^{−λt}[λf(X^ε_t) − L̂^ε f(X^ε_t)] dt] − f(x)| → 0,   (57)

as ε ↓ 0, where L̂^ε is defined in (54). The left hand side of (57) is meaningful since f is sufficiently smooth for x ∈ [x₁,x₂].

We observe that:

|E^ε_{x,y}[e^{−λτ^ε(x₁,x₂)} f(X^ε_{τ^ε(x₁,x₂)}) + ∫_0^{τ^ε(x₁,x₂)} e^{−λt}[λf(X^ε_t) − L f(X^ε_t)] dt] − f(x)|
≤ |E^ε_{x,y}[e^{−λτ^ε(x₁,x₂)} f(X^ε_{τ^ε(x₁,x₂)}) + ∫_0^{τ^ε(x₁,x₂)} e^{−λt}[λf(X^ε_t) − L̂^ε f(X^ε_t)] dt] − f(x)|
+ |E^ε_{x,y} ∫_0^{τ^ε(x₁,x₂)} e^{−λt}[L f(X^ε_t) − L̂^ε f(X^ε_t)] dt|   (58)

Now we claim that

‖L̂^ε f − L f‖_{[x₁,x₂]} → 0, as ε ↓ 0,   (59)

where for any function g we define ‖g‖_{[x₁,x₂]} = sup_{x∈[x₁,x₂]} |g(x)|. This follows directly from our assumptions on the function V^ε(x). Therefore, it is indeed enough to prove (57).

By the Itô formula applied to the function e^{−λt} f(x) we immediately get that (57) is equivalent to

|E^ε_{x,y}[ ∫_0^{τ^ε(x₁,x₂)} e^{−λu} f_x(X^ε_u) γ^ε_1(X^ε_u, Y^ε_u) dL^ε_u − ∫_0^{τ^ε(x₁,x₂)} (1/2) e^{−λu} [f_x V^ε_x/V^ε](X^ε_u) du ]| → 0,   (60)

as ε ↓ 0. We can estimate the left hand side of (60) as in Lemma 2.1 of [5]: consider the auxiliary function

(1/2) y² f_x(x) V^ε_x(x)/V^ε(x) + y f_x(x) [V^{u,ε}_x(x)V^{l,ε}(x) − V^{l,ε}_x(x)V^{u,ε}(x)]/V^ε(x).   (61)

It is easy to see that this function solves the corresponding P.D.E. away from the point of discontinuity. Taking into account the latter and the definition in (61), we get that the first three terms on the right hand side of (64) are bounded by ε²C₀ for ε small enough. So, it remains to consider the last term, i.e. the integral in local time. First of all, it is easy to see that there exists a C₀ > 0 such that

As far as the integral in local time on the right hand side of (65) is concerned, we claim that there exists an ε₀ > 0 and a C₀ > 0 such that for all ε < ε₀

E^ε_{x,y} ∫_0^{τ^ε(x₁,x₂)} e^{−λt} dL^ε_t ≤ C₀/ε.   (66)

Hence, taking into account (65) and (66) we get that the right hand side of (64) converges to zero. So the convergence (60) holds.

It remains to prove the claim (66). This can be done as in Lemma 2.2 of [5]. In particular, one considers the auxiliary function

w^ε(x,y) = y²/V^ε(x) + y (V^{l,ε}(x) − V^{u,ε}(x))/V^ε(x).

It is easy to see that w^ε is a solution to the P.D.E.

w^ε_{yy}(x,y) = 2/V^ε(x),  y ∈ D^ε_x,
∂w^ε(x,y)/∂n^ε(x,y) = −1,  y ∈ ∂D^ε_x,   (67)

where n^ε(x,y) = γ^ε_2(x,y)/|γ^ε_2(x,y)| and x ∈ R is a parameter.

The claim now follows directly by applying the Itô formula to the function e^{−λt} w^ε(x,y).

4 Proof of Lemma 2.4

Proof of Lemma 2.4. Let τ̂^ε = τ̂^ε(±κ) be the exit time of X̂^ε_t (see (53)) from the interval (−κ,κ). Denote also by Ê^ε_x the mathematical expectation related to the probability law induced by X̂^{ε,x}_t.

Let η be a positive number, let κ > 0 be a small positive number, and consider x₀ with |x₀| < κ and y₀ ∈ D^ε_{x₀}. We want to estimate the difference

|E^ε_{x₀,y₀} τ^ε(±κ) − Ê^ε_{x₀} τ̂^ε(±κ)|.

As can be derived from Theorem 2.5.1 of [3], the function φ^ε(x,y) = E^ε_{x,y} τ^ε(±κ) is a solution to

(1/2)Δφ^ε(x,y) = −1 in (x,y) ∈ (−κ,κ) × D^ε_x,
φ^ε(±κ, y) = 0,
∂φ^ε/∂γ^ε (x,y) = 0 on (x,y) ∈ (−κ,κ) × ∂D^ε_x.   (68)

Moreover, φ̂^ε(x) = Ê^ε_x τ̂^ε(±κ) is a solution to

(1/2) φ̂^ε_{xx}(x) + (1/2) (V^ε_x(x)/V^ε(x)) φ̂^ε_x(x) = −1 in x ∈ (−κ,κ),
φ̂^ε(±κ) = 0.   (69)
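Equation (69) is a standard two-point boundary value problem and can be approximated with central finite differences. The sketch below assumes the hypothetical constant-width case V^ε_x ≡ 0, where the exact solution is φ̂^ε(x) = κ² − x².

```python
import numpy as np

def solve_phi_hat(kappa, Vx_over_V, n=400):
    """Finite-difference solve of (1/2) phi'' + (1/2)(V^eps_x/V^eps) phi' = -1
    with phi(-kappa) = phi(kappa) = 0, i.e. phi'' + (Vx/V) phi' = -2.
    Vx_over_V is a callable x -> V^eps_x(x)/V^eps(x)."""
    h = 2 * kappa / n
    x = np.linspace(-kappa, kappa, n + 1)
    A = np.zeros((n - 1, n - 1))
    b = -2.0 * np.ones(n - 1)
    for i in range(n - 1):
        c = Vx_over_V(x[i + 1])
        if i > 0:
            A[i, i - 1] = 1.0 / h**2 - c / (2 * h)  # phi_{i-1} coefficient
        A[i, i] = -2.0 / h**2                        # phi_i coefficient
        if i < n - 2:
            A[i, i + 1] = 1.0 / h**2 + c / (2 * h)  # phi_{i+1} coefficient
    phi = np.zeros(n + 1)                            # boundary values are 0
    phi[1:n] = np.linalg.solve(A, b)
    return x, phi

# Constant-width sanity check: exact solution is kappa^2 - x^2.
x, phi = solve_phi_hat(kappa=0.5, Vx_over_V=lambda x: 0.0)
```

Since the exact solution is quadratic, the second-order scheme reproduces it up to solver round-off, which makes this a convenient correctness check before using a nonconstant V^ε.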

Let f^ε(x,y) = φ^ε(x,y) − φ̂^ε(x). Then f^ε will satisfy

(1/2)Δf^ε(x,y) = (1/2) (V^ε_x(x)/V^ε(x)) φ̂^ε_x(x) in (x,y) ∈ (−κ,κ) × D^ε_x,
f^ε(±κ, y) = 0,
∂f^ε/∂γ^ε (x,y) = −φ̂^ε_x(x) γ^ε_1(x,y) on (x,y) ∈ (−κ,κ) × ∂D^ε_x.   (70)

By applying the Itô formula to the function f^ε and recalling that f^ε satisfies (70), we get equality (71). We can estimate the right hand side of (71) similarly to the left hand side of (60) in Lemma 2.3 (see also Lemma 2.1 in [5]): considering the corresponding auxiliary function, one derives an upper bound for the right hand side of (71) that is the same as the right hand side of (64) with λ = 0, f replaced by φ̂^ε and τ^ε(x₁,x₂) replaced by τ^ε(±κ); this is (74). For (x,y) = (x₀,y₀), the first term on the right hand side of (74) can be made arbitrarily small for ε sufficiently small. For ε small enough the two integral terms on the right hand side of (74) are bounded by C₀ ξ^ε E^ε_{x₀,y₀} τ^ε(±κ), where ξ^ε is defined in (9). The local time integral can be treated as in Lemma 2.2 of [5], so it will not be repeated here (see also the end of the proof of Lemma 2.3). In reference to the latter, we mention that the singularity at the point x = 0 complicates the situation a bit. However, assumption (9) allows one to follow the procedure mentioned and derive the aforementioned estimate for the local time integral. Hence, we have the following upper bound for f^ε(x₀,y₀):

|f^ε(x₀,y₀)| ≤ C₀[κη + ξ^ε E^ε_{x₀,y₀} τ^ε(±κ)].   (75)

Moreover, it follows from (75) (see also [10]) that for η > 0 there exists a κ_η > 0 such that for every 0 < κ < κ_η, for sufficiently small ε and for all x with |x| ≤ κ,

(1/κ)|Ê^ε_x τ̂^ε(±κ) − κθ| ≤ η,   (77)

where θ = [v(0+) − v(0−)] / ([u′(0+)]⁻¹ + [u′(0−)]⁻¹).

Therefore, since ξ^ε ↓ 0 (by assumption (9)), (76) and (77) give us for sufficiently small ε that

(1/κ)|E^ε_{x₀,y₀} τ^ε(±κ) − Ê^ε_{x₀} τ̂^ε(±κ)| ≤ C₀η.

The latter and (77) conclude the proof of the lemma.

5 Proof of Lemma 2.5

In order to prove Lemma 2.5 we will make use of a result regarding the invariant measures of the associated Markov chains (see Lemma 5.2) and a result regarding the strong Markov character of the limiting process (see Lemma 5.4 and the beginning of the proof of Lemma 2.5).

Of course, since the gluing conditions at 0 are of local character, it is sufficient to consider not the whole domain D^ε, but just the part of D^ε that is in a neighborhood of x = 0. Thus, we consider the process (X^ε_t, Y^ε_t) in

Ξ^ε = {(x,y) : |x| ≤ 1, y ∈ D^ε_x}

that reflects normally on ∂Ξ^ε.

Recall that 0 < κ₀ < κ. Define the set

Γ_κ = {(x,y) : x = κ, y ∈ D^ε_κ}   (78)

For notational convenience we will write Γ_{±κ} = Γ_κ ∪ Γ_{−κ}. Define Δ_κ = Γ_{±κ} ∪ Γ_{±(1−κ)} and Δ_{κ₀} = Γ_{±κ₀} ∪ Γ_{±(1−κ₀)}. We consider two cycles of Markov times {τ_n} and {σ_n} such that:

0 = σ₀ ≤ τ₀ < σ₁ < τ₁ < σ₂ < τ₂ < …

where:

τ_n = inf{t ≥ σ_n : (X^ε_t, Y^ε_t) ∈ Δ_κ}
σ_n = inf{t ≥ τ_{n−1} : (X^ε_t, Y^ε_t) ∈ Δ_{κ₀}}   (79)

We will use the relations

µ^ε(A) = ∫_{Δ_κ} ν^κ_ε(dx,dy) E^ε_{x,y} ∫_0^{τ₁} χ_{[(X^ε_t, Y^ε_t)∈A]} dt
       = ∫_{Δ_{κ₀}} ν^{κ₀}_ε(dx,dy) E^ε_{x,y} ∫_0^{σ₁} χ_{[(X^ε_t, Y^ε_t)∈A]} dt   (80)

and (81) between the invariant measure µ^ε of the process (X^ε_t, Y^ε_t) in D^ε and the invariant measures ν^κ_ε and ν^{κ₀}_ε of the Markov chains {(X^ε_{τ_n}, Y^ε_{τ_n})} and {(X^ε_{σ_n}, Y^ε_{σ_n})}. The invariant measure µ^ε is, up to a constant, the Lebesgue measure on Ξ^ε.

Formula (80) implies the corresponding equality for the integrals with respect to the invariant measure µ^ε. This means that if Ψ(x,y) is an integrable function, then (82) holds.

The following simple lemma will be used in the proofs of Lemmata 5.2 and 5.4.

Lemma 5.1. Let 0 < x₁ < x₂, let ψ be a function defined on [x₁,x₂] and let φ be a function defined on {x₁, x₂}.

Proof. This lemma is similar to Lemma 8.4.6 of [6], so we briefly outline its proof. First one proves that E^ε_{x,y} τ(x₁,x₂) is bounded in ε for ε small enough and for all (x,y) ∈ [x₁,x₂] × D^ε_x. The latter and the proof of Lemma 2.3 show that in this case we can take λ = 0 in Lemma 2.3 and apply it to the function g that is the solution to

D_v D_u g(x) = −ψ(x), x₁ < x < x₂,
g(x_i) = φ(x_i), i = 1, 2.

Lemma 5.2 below characterizes the asymptotics of the invariant measures ν^κ_ε and ν^{κ₀}_ε.

Lemma 5.2. Let v be the function defined by (6). The following statements hold:

Proof. We calculate the asymptotics of the invariant measures ν^κ_ε and ν^{κ₀}_ε by selecting Ψ properly. For this purpose let us choose Ψ(x,y) = Ψ(x) (i.e. a function of x only) that is bounded, continuous and equal to 0 outside κ < x < 1 − κ.

Z Z

Moreover, the particular choice ifΨ, also implies thatRστ1

1Ψ(X

Next, we express the right hand side of (85) through the v andufunctions (defined by (6)) using Lemma 5.1. We start with the termEεκ,yR0σ1Ψ(Xtε)d t. Use Lemma 5.1 withφ=0 andψ(x) = Ψ(x). For sufficiently smallεwe have

(22)

Taking into account relations (82) and (84)–(87) and the particular choice of the function Ψ, we get the corresponding relation for sufficiently small ε and an arbitrary continuous function Ψ if the following hold:

Thus, the proof of Lemma 5.2 is complete.

Remark 5.3. Equalities (81) immediately give us that ν^κ_ε(Γ_{±κ}) = ν^{κ₀}_ε(Γ_{±κ₀}), and (b): the y-component of the process converges to zero.

Proof of Lemma 2.5. Firstly, we prove (using Lemma 5.4) that P^ε_{x,y}(X^ε_{τ^ε(±κ)} = κ) has approximately the same value, uniformly in y₁, y₂.

Let us define for notational convenience

h^ε(x,y) = E^ε_{x,y}[χ(···)]

The last two terms on the right hand side of the inequality above converge to zero as ε ↓ 0 by the discussion above. Hence, it remains to prove that

|h^ε(κ₀, 0) − h^ε(−κ₀, 0)| → 0 as ε ↓ 0.   (92)

Let us choose some κ′₀ such that 0 < κ′₀ < κ₀ < κ. Obviously, if the process starts from some point on Γ_x with x ∈ [−κ, −κ′₀] then τ(−κ, κ′₀)

Using the latter, we have

Relation (91) holds with κ′_0 in place of κ_0 as well. Namely, for (κ′_0, 0) …

We show now how the second term on the right hand side of (94) can be made arbitrarily small. For ε sufficiently small and for κ′_0 much smaller than a small κ_0, it can be shown that P^ε_{κ′_0,0}[(X^ε_{τ(−κ,κ_0)}, Y^ε_{τ(−κ,κ_0)}) ∈ Γ_{κ_0}] and P^ε_{−κ′_0,0}[…] are arbitrarily close to 1.

This can be done by an argument similar to the one that was used to prove Lemma 2.4. Similarly to there, one estimates the difference

|P^ε_{κ′_0,0}[(X^ε_{τ(−κ,κ_0)}, Y^ε_{τ(−κ,κ_0)}) ∈ Γ_{κ_0}] − P̂^ε_{κ′_0,0}[X̂^ε_{τ̂(−κ,κ_0)} = κ_0]|

and uses the corresponding estimate for P̂^ε_{κ′_0,0}[X̂^ε_{τ̂(−κ,κ_0)} = κ_0] for ε sufficiently small, where X̂^ε_t is the process defined by (53). One also needs to use Lemma 2.4. The treatment of P^ε_{−κ′_0,0}[(X^ε_{τ(−κ,κ_0)}, Y^ε_{τ(−κ,κ_0)}) ∈ Γ_{κ′_0}] is almost identical with the obvious changes. We will not repeat the lengthy, but straightforward calculations here. Hence, the second term on the right hand side of (94) can be made arbitrarily small.

Using the above we finally get that …

The numbers ν_ε^{κ_0}(Γ_{±κ_0}) and ν_ε^κ(Γ_{±κ}) are strictly positive. This is because, starting from each of these sets, there is a positive probability to reach every other Γ_{±κ_0}, Γ_{±κ} in a finite number of cycles.

We introduce the averages:

p_+(ε) = [ ∫_{Γ_{±κ_0}} ν_ε^{κ_0}(dx, dy) P^ε_{x,y}[(X^ε_{τ_1}, Y^ε_{τ_1}) ∈ Γ_κ] ] / ν_ε^κ(Γ_{±κ}). (97)

By relation (96) we know that

|P^ε_{x,y}(X^ε_{τ(±κ)} = κ) − p_+(ε)| ≤ η (98)

for all (x, y) such that (x, y) ∈ [−κ_0, κ_0] × D^ε_x for ε sufficiently small.

Moreover, using the first of equations (81) we see that (97) can be written as

p_+(ε) = ν_ε^κ(Γ_κ) / ν_ε^κ(Γ_{±κ}). (99)

Furthermore, for 0 < κ_0 < κ sufficiently small we have

u(κ) − u(κ_0) ∼ u′(0+)(κ − κ_0).

The latter and Lemma 5.2 imply for sufficiently small 0 < κ_0 < κ and sufficiently small ε that

ν_ε^κ(Γ_κ) ∼ [1/u′(0+)] · ε/(κ − κ_0). (100)

Similarly, we have for sufficiently small 0 < κ_0 < κ and sufficiently small ε that

ν_ε^κ(Γ_{−κ}) ∼ [1/u′(0−)] · ε/(κ − κ_0). (101)

Therefore, we finally get for sufficiently small 0 < κ_0 < κ and sufficiently small ε that

ν_ε^κ(Γ_{±κ}) ∼ [1/u′(0+) + 1/u′(0−)] · ε/(κ − κ_0). (102)

Hence, by equations (99)-(102) we get for sufficiently small 0 < κ_0 < κ and sufficiently small ε that

|p_+(ε) − [u′(0+)]^{−1} / ([u′(0+)]^{−1} + [u′(0−)]^{−1})| < η.
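Spelled out, the limit identified for p_+(ε) follows directly by combining (99) with (100) and (102); the common factor ε/(κ − κ_0) cancels:

```latex
p_+(\varepsilon)
= \frac{\nu^{\kappa}_{\varepsilon}(\Gamma_{\kappa})}{\nu^{\kappa}_{\varepsilon}(\Gamma_{\pm\kappa})}
\sim \frac{[u'(0+)]^{-1}\,\frac{\varepsilon}{\kappa-\kappa_0}}
          {\big([u'(0+)]^{-1}+[u'(0-)]^{-1}\big)\,\frac{\varepsilon}{\kappa-\kappa_0}}
= \frac{[u'(0+)]^{-1}}{[u'(0+)]^{-1}+[u'(0-)]^{-1}}
= \frac{u'(0-)}{u'(0+)+u'(0-)} .
```

The last equality (multiply numerator and denominator by u′(0+)u′(0−)) shows that the gluing probabilities are proportional to the reciprocals of the one-sided derivatives of u.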

Similarly, we can prove that

max_{(x,y)∈Ω_0} P^ε_{x,y}(X^ε_{τ(±κ)} = −κ) − min_{(x,y)∈Ω_0} P^ε_{x,y}(X^ε_{τ(±κ)} = −κ) → 0 as ε ↓ 0.

Then for

p_−(ε) = ν_ε^κ(Γ_{−κ}) / ν_ε^κ(Γ_{±κ}),

we can obtain that

|p_−(ε) − [u′(0−)]^{−1} / ([u′(0+)]^{−1} + [u′(0−)]^{−1})| < η

and that the corresponding relation (98) holds, i.e.

|P^ε_{x,y}(X^ε_{τ(±κ)} = −κ) − p_−(ε)| ≤ η

for all (x, y) such that (x, y) ∈ [−κ_0, κ_0] × D^ε_x for ε sufficiently small.
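As an illustration only (not part of the proof): the limiting gluing probabilities can be sanity-checked numerically. The sketch below uses hypothetical one-sided derivatives `du_plus` = u′(0+) and `du_minus` = u′(0−), and simulates a nearest-neighbour random walk that takes fair steps away from the origin but is skewed at the origin — a standard discrete stand-in for a diffusion whose scale function u has a corner at 0, not the construction used in this paper. The empirical probability of exiting on the right is compared with [u′(0+)]^{−1}/([u′(0+)]^{−1} + [u′(0−)]^{−1}).

```python
import random

def skew_walk_exit_right(beta, n_sites, rng):
    """Nearest-neighbour walk on {-n_sites, ..., n_sites}: fair steps away
    from 0, but from 0 it steps right with probability beta.
    Returns True if the walk reaches +n_sites before -n_sites."""
    x = 0
    while abs(x) < n_sites:
        if x == 0:
            x += 1 if rng.random() < beta else -1
        else:
            x += 1 if rng.random() < 0.5 else -1
    return x == n_sites

def gluing_beta(du_plus, du_minus):
    """Limiting probability of exiting (-kappa, kappa) on the right:
    [u'(0+)]^{-1} / ([u'(0+)]^{-1} + [u'(0-)]^{-1})."""
    return (1.0 / du_plus) / (1.0 / du_plus + 1.0 / du_minus)

rng = random.Random(0)
du_plus, du_minus = 2.0, 1.0            # hypothetical one-sided derivatives of u
beta = gluing_beta(du_plus, du_minus)   # skew parameter at the origin
trials = 20000
hits = sum(skew_walk_exit_right(beta, 10, rng) for _ in range(trials))
print(beta, hits / trials)
```

A short first-step calculation shows the skewed walk exits on the right with probability exactly β for any number of sites, so the empirical frequency should agree with the formula up to Monte Carlo error.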


6 Acknowledgements

I would like to thank Professor Mark Freidlin for posing the problem and for helpful discussions and Professor Sandra Cerrai, Anastasia Voulgaraki and Hyejin Kim for helpful suggestions. I would also like to thank the anonymous referee for the constructive comments and suggestions that greatly improved the paper.

References

[1] S.N. Ethier, T.G. Kurtz, Markov processes: Characterization and Convergence, Wiley, New York, (1986). MR0838085

[2] W. Feller, Generalized second-order differential operators and their lateral conditions, Illinois Journal of Math. 1 (1957), pp. 459-504. MR0092046

[3] M. Freidlin, Functional integration and partial differential equations, Princeton University Press, Princeton, NJ, (1985). MR0833742

[4] M. Freidlin, Markov processes and differential equations: Asymptotic Problems, Birkhäuser-Verlag, Basel, Boston, Berlin, (1996). MR1399081

[5] M.I. Freidlin, K. Spiliopoulos, Reaction-diffusion equations with nonlinear boundary conditions in narrow domains, Journal of Asymptotic Analysis, Vol. 59, Number 3-4, (2008), pp. 227-249. MR2450360

[6] M.I. Freidlin, A.D. Wentzell, Random perturbations of dynamical systems, Second Edition, Springer-Verlag, New York, (1998). MR1652127

[7] M.I. Freidlin, A.D. Wentzell, Random perturbations of Hamiltonian systems, Memoirs of the American Mathematical Society, No. 523, (1994). MR1201269

[8] M.I. Freidlin, A.D. Wentzell, Diffusion processes on graphs and the averaging principle, The Annals of Probability, Vol. 21, No. 4, (1993), pp. 2215-2245. MR1245308

[9] M.I. Freidlin, A.D. Wentzell, Necessary and sufficient conditions for weak convergence of one-dimensional Markov processes, The Dynkin Festschrift: Markov Processes and their Applications, Birkhäuser, (1994), pp. 95-109. MR1311713

[10] H. Kim, On continuous dependence of solutions to parabolic equations on coefficients, Asymptotic Analysis, Vol. 62, No. 3, (2009), pp. 147-162. MR2521761

[11] R.Z. Khasminskii, Ergodic properties of recurrent diffusion processes and stabilization of the solution of the Cauchy problem for parabolic equations, Teor. Veroyatnost. i. Primenen. 5, (1960), pp. 179-196. MR0133871

[13] P. Mandl, Analytical treatment of one-dimensional Markov processes, Springer: Prague, Academia, (1968). MR0247667

[14] W. Rosenkrantz, Limit theorems for solutions to a class of stochastic differential equations, Indiana Univ. Math. J. 24, (1975), pp. 613-625. MR0368143

Figure 1: z = (x, y) is the initial point and Z^ε_t = (X^ε_t, Y^ε_t).
