Electronic Journal of Probability
Vol. 14 (2009), Paper no. 80, pages 2328–2351.
Journal URL: http://www.math.washington.edu/~ejpecp/

Asymptotic Growth Of Spatial Derivatives Of Isotropic Flows

Holger van Bargen

Abstract

It is known from the multiplicative ergodic theorem that the norm of the derivative of certain stochastic flows at a previously fixed point grows exponentially fast in time as the flow evolves. We prove that this is also true if one takes the supremum over a bounded set of initial points. We give an explicit bound for the exponential growth rate which is far different from the lower bound coming from the Multiplicative Ergodic Theorem.

Key words: stochastic flows, isotropic Brownian flows, isotropic Ornstein-Uhlenbeck flows, asymptotic behavior of derivatives.

AMS 2000 Subject Classification: Primary 60G15, 60G60, 60F99.

Submitted to EJP on April 23, 2009, final version accepted October 9, 2009.

1 Introduction

The evolution of the diameter of a bounded set under the action of a stochastic flow has been studied since the 1990's (see [5], [6], [11], [17], [12] and the survey article [16] to name just a few references). It is known to grow linearly in time if the flow has a positive Lyapunov exponent. Of course the considered diameter is linked to the supremum of $|\varphi_t(x)|$ ranging over $x$ in a subset of $\mathbb{R}^d$. In the following we will consider the case where the flow is replaced by its spatial derivative. We emphasize that we consider the asymptotics in time (the spatial asymptotics for a fixed time horizon have been considered in [8] in a very general setting and in [2] in the particular one treated here).

If the flow has a positive top exponent it is known that the growth is at least exponentially fast, which is then true even for a singleton (this follows directly from the Multiplicative Ergodic Theorem). We will show in the case of an isotropic Brownian flow (IBF) or an isotropic Ornstein-Uhlenbeck flow (IOUF) that $\sup_x \big|\log\|Dx_t\|\big|$ grows at most linearly in time $t$, where the supremum is taken over $x$ in a bounded subset of $\mathbb{R}^d$, no matter what the top Lyapunov exponent is. This shows that the growth of the norm of the derivative is indeed at most exponentially fast, but it also gives some insight into the distance of $D\varphi_t(x)$ to singularity by bounding $\sup_{0\le t\le T} t^{-1}\inf_x \log\|Dx_t\|$ from below in the $\liminf$ sense. This excludes super-exponential decay to singularity, which might be of interest especially if the top exponent is negative. Exponential bounds on the growth of spatial derivatives play a role in the proof of Pesin's formula for stochastic flows (see [13]). It has also been conjectured that this should yield a new proof of the fact that the diameter grows at most linearly in time (but there are much simpler proofs known for this; see the references given above). Although we obtain an upper bound for the exponential growth rate, we make no claims about its optimality (and we conjecture that our bound is far from optimal).

1.1 Definition And Prerequisites

In this section we will recall the definition of an isotropic Brownian flow (IBF) from [4] and an isotropic Ornstein-Uhlenbeck flow (IOUF) from [2]. We will keep the convention of speaking of an IOUF only if its drift $c$ is not equal to zero (see [2] or [3] for a discussion of this issue).

Definition 1.1 (IBF and IOUF).
Let $c > 0$ and let $F(t,x,\omega)$ be an isotropic Brownian field with a $C^4$-covariance tensor, i.e.
$$\big\langle F^i(\cdot,x), F^j(\cdot,y)\big\rangle_t = t\, b_{ij}(x-y),$$
where the function $b(\cdot) = \big(b_{ij}(\cdot)\big) : \mathbb{R}^d \to \mathbb{R}^{d\times d}$ is four times continuously differentiable with bounded derivatives up to order four and preserved by rigid motions (see [4] or [7]). We define the semimartingale field $V(t,x,\omega) := F(t,x,\omega) - c\,x\,t$ and an IOUF to be the solution $\varphi = \varphi_{s,t}(x,\omega)$ of the Kunita-type stochastic differential equation (SDE)
$$\varphi_{s,t}(x) = x + \int_s^t V(du, \varphi_{s,u}(x)) = x + \int_s^t F(du, \varphi_{s,u}(x)) - c\int_s^t \varphi_{s,u}(x)\,du. \qquad (1)$$
Note that the definition of $b$ and [9, Theorem 3.1.3] imply
$$\big\langle \partial_l F^i(\cdot,x_\cdot), \partial_k F^j(\cdot,y_\cdot)\big\rangle_t = -\int_0^t \partial_l\partial_k b_{i,j}(x_s - y_s)\,ds. \qquad (2)$$

We call $c$ the drift of $\varphi$ and $b$ the covariance tensor of $\varphi$. Note that we write $x_t$ for $\varphi_t(x) = \varphi_{0,t}(x) = \varphi_{0,t}(x,\omega)$ and $x_t^1,\dots,x_t^d$ for its components. We will write the spatial derivatives as $\partial_i x_t^j := \partial_{x_i}\varphi^j_{0,t}(x)$ and $Dx_t := D\varphi_t(x)$. The same notations are used for $y_t = \varphi_{0,t}(y,\omega)$ etc.

The assumptions on the smoothness of $b$ ensure that the local characteristics $(b, c\,\cdot)$ are smooth enough to guarantee the existence of a solution flow to (1), which can be shown to be of class $C^{3,\delta}$ for arbitrary $1 > \delta > 0$ (see [9, Theorem 3.4.1]). In fact one has to choose a modification to get the mentioned smoothness (which we do without change of notation). IOUFs have been studied in [7] and in [2] and we will recall some facts that we use later.

Lemma 1.2 (some finite-dimensional marginals).
Let $\varphi$ be an IOUF with drift $c$ and covariance tensor $b$ and let $x,y \in \mathbb{R}^d$. Then we have

1. $\varphi$ is a Brownian flow (i.e. it has independent increments) and its law is invariant under orthogonal transformations.

2. The distance process $\{|x_t - y_t| : t \in \mathbb{R}_+\}$ solves the SDE
$$|x_t - y_t| = |x-y| + \int_0^t \sqrt{2\big(1 - B_L(|x_s - y_s|)\big)}\,dW_s + \int_0^t \Big[(d-1)\,\frac{1 - B_N(|x_s - y_s|)}{|x_s - y_s|} - c\,|x_s - y_s|\Big]ds \qquad (3)$$
for a standard Brownian motion $(W_t)_{t\ge 0}$. Therein $B_L$ and $B_N$ are the longitudinal and normal correlation functions of $b$ respectively (see [4] or Lemma 1.3). There are constants $\lambda > 0$ and $\bar\sigma > 0$ such that the following is true.

(a) There is a standard Brownian motion $(W_t)_{t\ge 0}$ such that we have a.s. for all $t \ge 0$ that $|x_t - y_t| \le |x-y|\,e^{\bar\sigma \sup_{0\le s\le t} W_s + \lambda t}$.

(b) We have for each $x,y \in \mathbb{R}^d$, $T > 0$ and $q \ge 1$ that
$$\mathbb{E}\Big[\sup_{0\le t\le T} |x_t - y_t|^q\Big]^{1/q} \le 2\,|x-y|\,e^{(\lambda + \frac12 q\bar\sigma^2)T}. \qquad (4)$$

3. For $x \in \mathbb{R}^d$ the spatial derivative $\partial_j x_t^i := \frac{\partial\varphi_t^i(x)}{\partial x_j}$ solves the following SDE:
$$\partial_j x_t^i = \delta_{ij} + \int_0^t \sum_k \partial_j x_s^k\, \partial_k F^i(ds, x_s) - c\int_0^t \partial_j x_s^i\, ds. \qquad (5)$$

We will use the symbol $\sum_k$ as a shorthand for $\sum_{k=1}^d$ (also for multiple summation indices).

Proof: [7, Proposition 7.1.1, Corollary 7.1.1 and p. 139] and [16, condition (H)]. $\square$
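The distance SDE (3) is one-dimensional and easy to explore numerically. The following minimal sketch runs an Euler-Maruyama scheme for (3) and reports an empirical version of the moment in (4). The correlation functions used here are illustrative stand-ins (smooth, equal to one at zero, behaving like $1 - \tfrac12\beta r^2$ near zero); they are not derived from a particular isotropic field, so the numbers produced are only meant to show the at-most-exponential growth predicted by Lemma 1.2(b).

```python
import numpy as np

# Illustrative stand-ins for the longitudinal/normal correlation functions
# (NOT computed from a specific covariance tensor b; chosen only so that
#  B(0) = 1 and 1 - B(r) ~ (beta/2) r^2 near zero).
def B_L(r): return np.exp(-r**2)
def B_N(r): return np.exp(-r**2)

def simulate_sup_distance(r0, c, d, T, dt, rng):
    """Euler-Maruyama for the distance SDE (3) of Lemma 1.2; returns sup_{t<=T}|x_t-y_t|."""
    r, sup_r = r0, r0
    for _ in range(int(T / dt)):
        drift = (d - 1) * (1.0 - B_N(r)) / max(r, 1e-12) - c * r
        diff = np.sqrt(max(2.0 * (1.0 - B_L(r)), 0.0))
        r = max(r + drift * dt + diff * np.sqrt(dt) * rng.standard_normal(), 0.0)
        sup_r = max(sup_r, r)
    return sup_r

rng = np.random.default_rng(0)
r0, c, d, T, dt, q = 0.1, 0.5, 2, 5.0, 1e-2, 4
samples = np.array([simulate_sup_distance(r0, c, d, T, dt, rng) for _ in range(1000)])
# Lemma 1.2(b) predicts E[sup |x_t-y_t|^q]^{1/q} <= 2|x-y| e^{(lambda + q*sigma^2/2) T},
# i.e. at most exponential growth in T; here we only report the empirical moment.
print("empirical E[sup^q]^{1/q} =", (samples**q).mean() ** (1.0 / q))
```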

Observe that we write $\mathbb{E}[X]^q$ for $\big(\mathbb{E}[X]\big)^q$, which is different from $\mathbb{E}[X^q]$. We will use this convention throughout the whole paper. The following lemma gives some insight into the structure of $b$, which has been known since [18].

Lemma 1.3 (local properties of b).

1. $b$ can be written as
$$b_{i,j}(x) = \begin{cases} \big(B_L(|x|) - B_N(|x|)\big)\,\dfrac{x_i x_j}{|x|^2} + \delta_{i,j}\,B_N(|x|) & \text{for } x \ne 0,\\[4pt] \delta_{i,j} B_L(0) = \delta_{i,j} B_N(0) = \delta_{i,j} & \text{else,}\end{cases}$$
where $B_L$ and $B_N$ are the so-called longitudinal and transversal correlation functions defined by $B_L(r) = b_{i,i}(r e_i)$, $r \ge 0$, and $B_N(r) = b_{i,i}(r e_j)$, $r \ge 0$, $i \ne j$. $B_N$ and $B_L$ are bounded $C^4$-functions from $[0,\infty)$ to $\mathbb{R}$ with bounded derivatives up to order four. Letting $\beta_{L,N} := -B''_{L,N}(0)$ (right-hand second derivatives at zero) we have the Taylor expansions $B_{L,N}(r) = 1 - \tfrac12\beta_{L,N} r^2 + O(r^4)$ for $r \to +0$. Both $\beta_L$ and $\beta_N$ are strictly positive. Observe that our definition assumes the rigid-motion part of $b$ to vanish (which is different from the definition in [10], where one has to assume $\alpha = 1$ to be consistent with our notation).

2. The partial derivatives of $b$ at $0$ satisfy
$$\partial_k\partial_l b_{i,j}(0) = \tfrac12(\beta_N - \beta_L)(\delta_{ki}\delta_{lj} + \delta_{kj}\delta_{li}) - \beta_N\,\delta_{kl}\delta_{ij}.$$

3. There are $0 < \bar r < 1$ and $C > 0$ such that for $x \in \mathbb{R}^d$ with $|x| < \bar r$ we have $|\partial_k\partial_l b_{i,j}(x) - \partial_k\partial_l b_{i,j}(0)| \le C|x|^2$.

Proof: [7, Proposition 1.2.2]. $\square$

Lemma 1.4 (a lemma on real functions).
Let $f,g : [0,\infty) \to [0,\infty)$ be increasing functions that are differentiable on $(0,\infty)$. Let further $f$ be convex and $g$ be concave. If we have for some $t > 0$ that $f(t) \ge g(t)$ and $f'(t) \ge g'(t)$, then we have for all $s \ge t$ that $f(s) \ge g(s)$.

Proof: easy undergraduate exercise. $\square$

The following result is the main tool that allows for the estimation of suprema of the derivatives. Observe that $r_+ = r \vee 0$ denotes the positive part of $r \in \mathbb{R}$.

Theorem 1.5 (Chaining Growth Theorem).
Let $\psi : [0,\infty)\times\mathbb{R}^d\times\Omega \to \mathbb{R}$ be a continuous random field satisfying the following.

1. There are $A > 0$ and $B \ge 0$ such that for all $k > 0$ and bounded $S \subset \mathbb{R}^d$ we have
$$\limsup_{T\to\infty} \frac1T \log \sup_{x\in S} \mathbb{P}\Big[\sup_{0\le t\le T} |\psi_t(x)| > kT\Big] \le -\frac{(k-B)_+^2}{2A^2}.$$

2. There exist $\Lambda \ge 0$, $\sigma > 0$, $q_0 \ge 1$ and $\bar c > 0$ such that for each $x,y \in \mathbb{R}^d$, $T > 0$ and even $q \ge q_0$ we have
$$\mathbb{E}\Big[\sup_{0\le t\le T} |\psi_t(x) - \psi_t(y)|^q\Big]^{1/q} \le \bar c\,|x-y|\,e^{(\Lambda + \frac12\sigma^2 q)T}.$$
$\bar c$ may depend on $q$ and $d$ but neither on $|x-y|$ nor on $T$.

Let $\Xi$ be a compact subset of $\mathbb{R}^d$ with box dimension $\Delta > 0$ (see [16, page 19] for a definition; just note that a closed ball in $\mathbb{R}^d$ has box dimension $d$). Then we have
$$\limsup_{T\to\infty} \sup_{0\le t\le T} \sup_{x\in\Xi} \frac1T\,|\psi_t(x)| \le K \quad \text{a.s.},$$
where for $\Lambda_0 := \frac{\sigma^2 d}{\Delta}\big(\frac d2 - \Delta\big)$ we put
$$K := \begin{cases} B + A\sqrt{2\Delta\Big(\Lambda + \sigma^2\Delta + \sqrt{\sigma^4\Delta^2 + 2\Delta\Lambda\sigma^2}\Big)} & \text{if } \Lambda \ge \Lambda_0,\\[6pt] B + A\sqrt{\dfrac{2\Delta d}{d-\Delta}\Big(\Lambda + \tfrac12\sigma^2 d\Big)} & \text{otherwise.}\end{cases}$$

Proof: A complete proof can be found in [16] as Theorem 5.1. Observe that the change of "$q \ge 1$" to "even $q \ge q_0$" does not alter the statement at all, because the assumptions above guarantee the same assumptions for $q \ge 1$ with a change only in the value of $\bar c$. The fact that we allow $\bar c$ to depend on $q$ does not play any role because the proof in [16] is perfectly valid with a $q$-dependent $\bar c$. We will only briefly indicate what is involved in it. Choosing $\varepsilon > 0$ and a sufficiently small $r_0 > 0$ one can cover $\Xi$ with at most $e^{\gamma T(\Delta+\varepsilon)}$ subsets of $\mathbb{R}^d$ of diameter at most $e^{-\gamma T} < r_0$. Denote these subsets by $\Xi_1,\dots,\Xi_{e^{\gamma T(\Delta+\varepsilon)}}$. For fixed $T$ the assumption $\sup_{0\le t\le T}\sup_{x\in\Xi} \frac1T|\psi_t(x)| > \kappa$ implies one of the following to occur up to time $T$: one of the $\Xi_i$ gets diameter at least one, or the center of a $\Xi_i$ reaches a distance of $\kappa T - 1$ from its original position. Hence we have for $\delta > 0$ that
$$\mathbb{P}\Big[\sup_{0\le t\le T}\sup_{x\in\Xi} \frac{|x_t|}{T} > \kappa + \delta\Big] \le S_1 + S_2 \qquad (6)$$
with
$$S_1 := e^{\gamma T(\Delta+\varepsilon)}\max_{x\in\Xi}\mathbb{P}\Big[\sup_{0\le t\le T} |x_t| \ge \kappa T - 1\Big] \quad\text{and}\quad S_2 := e^{\gamma T(\Delta+\varepsilon)}\max_i \mathbb{P}\Big[\sup_{0\le t\le T} \operatorname{diam}\varphi_t(\Xi_i) \ge 1\Big].$$
The chaining based result [16, Theorem 3.1] now shows that the two-point condition is sufficient to control $S_2$, and the one-point condition covers $S_1$. In this way one obtains that the sum over $T \in \mathbb{N}$ of the right hand side of (6) is finite for some $\gamma$ and $\kappa$, which completes the proof via the Borel-Cantelli Lemma. The formula for the constant $K$ is obtained via an optimization over $\gamma$ and $\kappa$. $\square$
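To make the final optimization step concrete, the following minimal Python sketch evaluates the two-branch formula for $K$ displayed in Theorem 1.5 and compares it with a brute-force search over the covering rate $\gamma$ (and the moment order $q$). The feasibility condition used in the search is a simplified reading of the argument above (the covering exponent $\gamma\Delta$ must be beaten by the two-point decay rate, with $q$ forced to be at least $d$), not the author's exact bookkeeping, and the numerical parameter values are purely illustrative.

```python
import numpy as np

def K_closed_form(A, B, Lam, sigma, Delta, d):
    """Two-branch formula for K as displayed in Theorem 1.5."""
    Lam0 = sigma**2 * d / Delta * (d / 2.0 - Delta)
    if Lam >= Lam0:
        gamma = Lam + sigma**2 * Delta + np.sqrt(sigma**4 * Delta**2 + 2 * Delta * Lam * sigma**2)
    else:
        gamma = d * (Lam + 0.5 * sigma**2 * d) / (d - Delta)
    return B + A * np.sqrt(2 * Delta * gamma)

def K_by_search(A, B, Lam, sigma, Delta, d):
    """Brute-force version of the optimization over gamma sketched in the proof:
    take the smallest gamma for which some moment order q >= d makes the
    two-point term S2 summable, i.e.
        gamma*Delta - q*(gamma - Lam) + 0.5*sigma**2*q**2 < 0,
    and then pay kappa = B + A*sqrt(2*gamma*Delta) for the one-point term S1."""
    best = np.inf
    for gamma in np.linspace(1e-3, 50, 50000):
        q = max((gamma - Lam) / sigma**2, d)   # optimal q, forced to be >= d
        if gamma * Delta - q * (gamma - Lam) + 0.5 * sigma**2 * q**2 < 0:
            best = min(best, B + A * np.sqrt(2 * gamma * Delta))
    return best

A, B, Lam, sigma, Delta, d = 1.0, 0.5, 2.0, 1.0, 1.5, 3   # illustrative values
print(K_closed_form(A, B, Lam, sigma, Delta, d), K_by_search(A, B, Lam, sigma, Delta, d))
```

Both computations should return essentially the same number, which is how the closed form arises from the optimization over $\gamma$.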

2 The Main Result

We are now ready to state the main result.

Theorem 2.1 (exponential growth of spatial derivatives).
Let $\varphi$ be an IOUF or an IBF with $b$ and $c$ as above and let $\Xi$ be a compact subset of $\mathbb{R}^d$ with box dimension $\Delta > 0$. Then
$$\limsup_{T\to\infty}\Big(\sup_{0\le t\le T}\sup_{x\in\Xi} \frac1T\,\big|\log\|Dx_t\|\big|\Big) \le K \quad\text{a.s.} \qquad (7)$$
where
$$K := \begin{cases} B + A\sqrt{2\Delta\Big(\Lambda + \sigma^2\Delta + \sqrt{\sigma^4\Delta^2 + 2\Delta\Lambda\sigma^2}\Big)} & \text{if } \Lambda \ge \Lambda_0,\\[6pt] B + A\sqrt{\dfrac{2\Delta d}{d-\Delta}\Big(\Lambda + \tfrac12\sigma^2 d\Big)} & \text{otherwise,}\end{cases}$$
for $\Lambda_0 := \frac{\sigma^2 d}{\Delta}\big(\frac d2 - \Delta\big)$, $A := \sqrt{\beta_L}$, $B := \frac{\beta_L}{2} + \frac{|(d-1)\beta_N - 2c|}{2}$, $\Lambda := \Lambda_1\vee\Lambda_2\vee\Lambda_6$ and $\sigma := \sigma_1\vee\sigma_2\vee\sigma_6$. The $\Lambda_i$ and $\sigma_i$ depend only on $b$ and $d$ and will be specified later.

Proof: This follows directly from Theorem 1.5 applied to $\psi : [0,\infty)\times\mathbb{R}^d\times\Omega \to \mathbb{R}$, $\psi_t(x,\omega) := \log\|Dx_t\|$, once we have verified the following lemmas. We will only use $\psi$ in the meaning given above from now on. Note that we choose the matrix norm to be the Frobenius norm $\|(a_{i,j})_{1\le i,j\le d}\| := \big(\sum_{i,j} a_{i,j}^2\big)^{1/2}$ for its computational simplicity, although the special choice of a norm is irrelevant for the result.

Lemma 2.2 (condition on the one-point motion).
We have for each bounded $S \subset \mathbb{R}^d$ that
$$\limsup_{T\to\infty} \frac1T \log\sup_{x\in S}\mathbb{P}\Big[\sup_{0\le t\le T}|\psi_t(x)| > kT\Big] \le -\frac{(k-B)_+^2}{2A^2} \qquad (8)$$
for $A$ and $B$ as given in Theorem 2.1.

Lemma 2.3 (condition on the two-point motion).
We have for each $x,y \in \mathbb{R}^d$, $T > 0$ and even $q \ge q_0 := \frac{4\sqrt{\beta_L}\,[2(d-1)\beta_N - c - 2\beta_L]}{128\beta_L} \vee 3$ that
$$\mathbb{E}\Big[\sup_{0\le t\le T}|\psi_t(x) - \psi_t(y)|^q\Big]^{1/q} \le \bar c\,|x-y|\,\sqrt q\,e^{(\Lambda + \frac12\sigma^2 q)T} \qquad (9)$$
for $\Lambda$ and $\sigma$ as given in Theorem 2.1 and $\bar c := \bar c_1 + \bar c_2 + \bar c_6$. The $\bar c_i$ are constants that depend on $b$ and $d$ and will be specified later.

The proofs of these lemmas will be given in the next sections. Observe that $\bar c$ does not enter into the constant $K$, so we do not need to pay attention to get a small value for it.

3 Proof Of Lemma 2.2: The One-Point Condition

Before proving Lemma 2.2 we will need to establish some facts on $\|Dx_t\|^2$.

Lemma 3.1 (SDE for $\|Dx_t\|^2$).
Let $x \in \mathbb{R}^d$ and put $M_t := 2\int_0^t \sum_{i,j,k} \partial_k F^i(ds, x_s)\,\frac{\partial_j x_s^k\,\partial_j x_s^i}{\|Dx_s\|^2}$. Then we have

1. $(M_t)_{t\ge 0}$ is a continuous local martingale with
$$2(\beta_L - \beta_N)\,t + 2(\beta_L + \beta_N)\int_0^t \sum_{i,j,k,m} \frac{\partial_j x_s^k\,\partial_j x_s^i\,\partial_m x_s^k\,\partial_m x_s^i}{\|Dx_s\|^4}\,ds = \langle M\rangle_t \le 4\beta_L\,t$$
a.s. for all $t > 0$, and hence a true martingale.

2. We have the SDE
$$\|Dx_t\|^2 = d + 2\int_0^t \sum_{i,j,k} \partial_k F^i(ds, x_s)\,\partial_j x_s^k\,\partial_j x_s^i + \big[(d-1)\beta_N + \beta_L - 2c\big]\int_0^t \|Dx_s\|^2\,ds$$
$$= d + \int_0^t \|Dx_s\|^2\,dM_s + \big[(d-1)\beta_N + \beta_L - 2c\big]\int_0^t \|Dx_s\|^2\,ds.$$

3. We have
$$\big\langle \|Dx_\cdot\|^2\big\rangle_t = 2(\beta_L - \beta_N)\int_0^t \|Dx_s\|^4\,ds + 2(\beta_N + \beta_L)\int_0^t \sum_{i,j,k,m} \partial_j x_s^k\,\partial_j x_s^i\,\partial_m x_s^k\,\partial_m x_s^i\,ds.$$

4. $\psi_t(x) = \log\|Dx_t\|$ solves the SDE
$$\psi_t(x) = \tfrac12\log d + \tfrac12 M_t + \Big[\frac{(d-1)\beta_N + \beta_L}{2} - c\Big]t - \tfrac14\langle M\rangle_t.$$

Proof: Lemma 1.2 and Itô's formula imply for $\|Dx_t\|^2$ … By Lemma 1.2 this is equal to $d + 2\sum \dots ds$, which is nothing but … Since … $ds$ we also get from (11) that … This together with 2. proves 3., and 1. also follows from this and the next proposition. $\square$

Proposition 3.2 (a simple estimate).
We have for every $x \in \mathbb{R}^d$ and $t \ge 0$ that
$$\sum_{i,j,k,m} \partial_j x_t^k\,\partial_j x_t^i\,\partial_m x_t^k\,\partial_m x_t^i \le \|Dx_t\|^4.$$

Proof: The proof is just the combination of the triangle inequality with Schwarz' inequality. We leave the details to the reader. The reason why we state this fact as a proposition of its own is that we will use it again. $\square$

Now we can turn to the proof of Lemma 2.2. Since we can write $M_t = W_{\langle M\rangle_t}$ for a standard Brownian motion $(W_t)_{t\ge 0}$, we get with Lemma 3.1 that … where the latter means equality in distribution. Therefore we get for any $k > 0$ that $I := \mathbb{P}[\dots]$ … We now only have to exclude the possibility that the modulus of the logarithm in $\psi_t(x)$ might become large due to a very small $\|Dx_t\|$. Observe that (13) implies … and we get with a completely analogous computation
$$\limsup_{T\to\infty} \frac1T \log\mathbb{P}\Big[\inf_{0\le t\le T}\psi_t(x) \le -kT\Big] \le -\frac{1}{2\beta_L}\Big[k - \frac{\beta_L - (d-1)\beta_N + 2c}{2}\Big]_+^2.$$
This completes the proof of Lemma 2.2. $\square$
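The Gaussian tail computation behind Lemma 2.2 can be illustrated with a short Monte Carlo experiment. The sketch below simulates only the upper-bounding process suggested by Lemma 3.1 (a Brownian motion run at the maximal speed $\beta_L$ plus the deterministic drift) and compares the empirical tail of its running supremum with the rate $-(k-B)_+^2/(2\beta_L)$. The parameter values are illustrative and not taken from any particular flow.

```python
import numpy as np

# Illustrative parameters (not tied to a particular covariance tensor b).
beta_L, beta_N, c, d = 1.0, 0.8, 0.3, 2
T, k, dt, n_paths = 20.0, 1.0, 2e-2, 5000

drift = ((d - 1) * beta_N + beta_L) / 2.0 - c           # drift rate of the psi upper bound
B = beta_L / 2.0 + abs((d - 1) * beta_N - 2 * c) / 2.0  # constant B of Theorem 2.1

rng = np.random.default_rng(1)
n = int(T / dt)
# psi_t(x) <= 0.5*log(d) + 0.5*M_t + drift*t with <M>_t <= 4*beta_L*t, so the martingale
# part is dominated (in law) by a Brownian motion with variance beta_L per unit time.
increments = np.sqrt(beta_L * dt) * rng.standard_normal((n_paths, n))
paths = 0.5 * np.log(d) + np.cumsum(increments, axis=1) + drift * dt * np.arange(1, n + 1)
p_emp = np.mean(paths.max(axis=1) > k * T)

# Lemma 2.2 predicts an exponential rate of at most -(k-B)_+^2 / (2*beta_L).
rate = -max(k - B, 0.0) ** 2 / (2.0 * beta_L)
print("empirical P =", p_emp, " bound exp(rate*T) =", np.exp(rate * T))
```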

4 Proof Of Lemma 2.3: The Two-Point Condition

4.1 General Estimates And Preparation

We now turn to the proof of Lemma 2.3 and start with a sketch of the proof, since it is rather technical. It consists of four steps.

1. Derivation of an SDE for $\psi_t(x) - \psi_t(y)$ involving $X_t^{(ij)} := \frac{\partial_j x_t^i}{\|Dx_t\|} - \frac{\partial_j y_t^i}{\|Dy_t\|}$.

2. Derivation of an SDE for $\big(X_t^{(ij)}\big)^q$ for $q \ge 2$.

3. Obtaining estimates of the type required in Lemma 2.3 for $X_t^{(ij)}$. This is done for small $|x-y|$ by estimating the probability of a very fast increase in the beginning, and with a Grönwall-type argument in the case this increase does not occur.

4. Obtaining the estimate for $\psi_t(x) - \psi_t(y)$ as an integrated version of the ones on the $X^{(ij)}$ using the Burkholder-Davis-Gundy inequality.

Observe that we have by Lemma 3.1
$$\psi_t(x) - \psi_t(y) = \int_0^t \sum_{i,j,k}\Big[\partial_k F^i(ds, x_s)\,\frac{\partial_j x_s^k\,\partial_j x_s^i}{\|Dx_s\|^2} - \partial_k F^i(ds, y_s)\,\frac{\partial_j y_s^k\,\partial_j y_s^i}{\|Dy_s\|^2}\Big]$$
$$+ \frac{\beta_L + \beta_N}{2}\int_0^t \sum_{i,j,k,m}\Big[\frac{\partial_j y_s^k\,\partial_j y_s^i\,\partial_m y_s^k\,\partial_m y_s^i}{\|Dy_s\|^4} - \frac{\partial_j x_s^k\,\partial_j x_s^i\,\partial_m x_s^k\,\partial_m x_s^i}{\|Dx_s\|^4}\Big]ds =: \tilde M_t + A_t. \qquad (16)$$

To further analyze the latter we first prove the following lemma.

Lemma 4.1 (general estimates for $A_t$ and $\tilde M_t$).
With $\tilde M_t$ and $A_t$ defined as above the following holds.

1. A.s. we have for all $t \ge 0$ that $A_t \le (\beta_N + \beta_L)\,t$.

2. Introducing the abbreviation $\tilde b^{i,l}_{k,n}(z) := \partial_k\partial_n b_{i,l}(z) - \partial_k\partial_n b_{i,l}(0)$ we can write for the quadratic variation of $\tilde M$
$$\langle\tilde M\rangle_t = \frac{\beta_L + \beta_N}{2}\int_0^t \sum_{i,k}\Big(\sum_j \frac{\partial_j x_s^i\,\partial_j x_s^k}{\|Dx_s\|^2} - \frac{\partial_j y_s^i\,\partial_j y_s^k}{\|Dy_s\|^2}\Big)^2 ds + 2\sum_{i,j,k,l,m,n}\int_0^t \frac{\partial_j x_s^i\,\partial_j x_s^k\,\partial_m y_s^l\,\partial_m y_s^n}{\|Dx_s\|^2\,\|Dy_s\|^2}\,\tilde b^{i,l}_{k,n}(x_s - y_s)\,ds \;\le\; \tilde c\,t$$
for a constant $\tilde c$ depending only on $b$ and $d$.

Proof: 1. is clear by the definition of $A_t$ and Proposition 3.2. Clearly $\langle\tilde M\rangle_t$ equals
$$-\sum_{i,j,k,l,m,n}\int_0^t \frac{\partial_j x_s^k\,\partial_j x_s^i\,\partial_m x_s^n\,\partial_m x_s^l}{\|Dx_s\|^4}\,\partial_k\partial_n b_{i,l}(0)\,ds + \sum_{i,j,k,l,m,n}\int_0^t \frac{\partial_j x_s^k\,\partial_j x_s^i\,\partial_m y_s^n\,\partial_m y_s^l}{\|Dx_s\|^2\,\|Dy_s\|^2}\,\partial_k\partial_n b_{i,l}(x_s - y_s)\,ds$$
$$+ \sum_{i,j,k,l,m,n}\int_0^t \frac{\partial_j y_s^k\,\partial_j y_s^i\,\partial_m x_s^n\,\partial_m x_s^l}{\|Dy_s\|^2\,\|Dx_s\|^2}\,\partial_k\partial_n b_{i,l}(y_s - x_s)\,ds - \sum_{i,j,k,l,m,n}\int_0^t \frac{\partial_j y_s^k\,\partial_j y_s^i\,\partial_m y_s^n\,\partial_m y_s^l}{\|Dy_s\|^4}\,\partial_k\partial_n b_{i,l}(0)\,ds$$
$$= -\sum_{i,j,k,l,m,n}\int_0^t \Big[\tfrac12(\beta_N - \beta_L)(\delta_{ki}\delta_{nl} + \delta_{kl}\delta_{ni}) - \beta_N\delta_{kn}\delta_{il}\Big]\Big[\frac{\partial_j x_s^k\,\partial_j x_s^i\,\partial_m x_s^n\,\partial_m x_s^l}{\|Dx_s\|^4} + \frac{\partial_j y_s^k\,\partial_j y_s^i\,\partial_m y_s^n\,\partial_m y_s^l}{\|Dy_s\|^4}\Big]ds$$
$$+ 2\sum_{i,j,k,l,m,n}\int_0^t \frac{\partial_j x_s^k\,\partial_j x_s^i\,\partial_m y_s^n\,\partial_m y_s^l}{\|Dx_s\|^2\,\|Dy_s\|^2}\,\partial_k\partial_n b_{i,l}(x_s - y_s)\,ds =: II + III. \qquad (17)$$
Using
$$II = (\beta_L - \beta_N)\,t + \frac{\beta_L + \beta_N}{2}\int_0^t \sum_{i,j,k,m}\Big[\frac{\partial_j x_s^k\,\partial_j x_s^i\,\partial_m x_s^k\,\partial_m x_s^i}{\|Dx_s\|^4} + \frac{\partial_j y_s^k\,\partial_j y_s^i\,\partial_m y_s^k\,\partial_m y_s^i}{\|Dy_s\|^4}\Big]ds,$$
Proposition 3.2 now yields $II \le (\beta_L - \beta_N)t + \frac{(\beta_N + \beta_L)}{2}\,2t = 2\beta_L t$, and since $III \le 2d^6\max_{i,l,k,n}\sup_{z\in\mathbb{R}^d}|\partial_k\partial_n b_{i,l}(z)|\,t$ is clear, the first part of 2. follows. To prove the second part we have to rearrange (17) using the symmetry and isotropy of $b$:
$$\langle\tilde M\rangle_t = -\sum_{i,j,k,l,m,n}\int_0^t \Big(\frac{\partial_j x_s^i\,\partial_j x_s^k}{\|Dx_s\|^2} - \frac{\partial_j y_s^i\,\partial_j y_s^k}{\|Dy_s\|^2}\Big)\Big(\frac{\partial_m x_s^n\,\partial_m x_s^l}{\|Dx_s\|^2} - \frac{\partial_m y_s^n\,\partial_m y_s^l}{\|Dy_s\|^2}\Big)\partial_k\partial_n b_{i,l}(0)\,ds \qquad (18)$$
$$+ 2\sum_{i,j,k,l,m,n}\int_0^t \frac{\partial_j x_s^i\,\partial_j x_s^k\,\partial_m y_s^l\,\partial_m y_s^n}{\|Dx_s\|^2\,\|Dy_s\|^2}\,\tilde b^{i,l}_{k,n}(x_s - y_s)\,ds =: IV + V.$$
Since (again by Lemma 1.3) we have $\partial_k\partial_n b_{i,l}(0) = \tfrac12(\beta_N - \beta_L)(\delta_{ki}\delta_{nl} + \delta_{kl}\delta_{ni}) - \beta_N\delta_{kn}\delta_{il}$, we get that $IV$ equals $\frac{\beta_L - \beta_N}{2}\,\dots$ This completes the proof of Lemma 4.1. $\square$

This lemma shows that we have to control terms like $\frac{\partial_j x_s^i}{\|Dx_s\|} - \frac{\partial_j y_s^i}{\|Dy_s\|}$. We postpone this until we have derived the following estimate from Lemma 4.1.

Lemma 4.2 (a priori bounds for the $\psi$-estimation).

1. We have for each $x,y \in \mathbb{R}^d$, $T > 0$ and $q \ge 1$ that …

Proof: Once again observe that by the triangle inequality … Stirling's formula for the Gamma function implies … Thus 1. follows. For the proof of 2. it is sufficient to observe that, since for any $t \ge 0$ we have $t \le e^{t/e}$ and $\sqrt t \le 1/4 + t$, we also get …

In order to control the time at which $x_t$ and $y_t$ have moved too far away from each other we introduce the following stopping time. Let $\bar r$ be chosen according to Lemma 1.3 and let $\tilde r \le \bar r$ be specified later. Remember that we assumed $\bar r \le 1$. We now define for $x,y \in \mathbb{R}^d$ with $|x-y| \le r$ the stopping time
$$\tau := \inf_{t>0}\{|x_t - y_t| \ge \tilde r\}$$
and assume $r < \tilde r$ in the following (which ensures $\tau > 0$ a.s.).
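The precise tail estimate for $\tau$ is the content of Lemma 4.5 below. As a rough stand-alone illustration of why $\tau$ is unlikely to occur before times of order $\log(\tilde r/|x-y|)$, one can combine Lemma 1.2(a) with a Monte Carlo estimate for the running supremum of a Brownian motion with drift, as in the sketch below. The constants $\bar\sigma$, $\lambda$ and the starting distance are illustrative values, not constants of a specific flow.

```python
import numpy as np

# Illustrative constants for the bound |x_t - y_t| <= |x-y| exp(sigma_bar * sup W + lam * t)
# from Lemma 1.2(a); they are NOT computed from a particular covariance tensor b.
sigma_bar, lam = 1.0, 0.4
r0, r_tilde = 0.05, 0.5                    # starting distance |x-y| and threshold r~
level = np.log(r_tilde / r0)
T = level / 2.0                            # a time of order log(r~ / |x-y|)

# By Lemma 1.2(a): P[tau <= T] <= P[ sup_{t<=T} (sigma_bar*W_t + lam*t) >= log(r~/|x-y|) ].
rng = np.random.default_rng(2)
dt, n_paths = 2e-3, 5000
n = int(T / dt)
W = np.cumsum(np.sqrt(dt) * rng.standard_normal((n_paths, n)), axis=1)
drifted = sigma_bar * W + lam * dt * np.arange(1, n + 1)
print("empirical bound on P[tau <= T]:", np.mean(drifted.max(axis=1) >= level))
```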

4.2 Derivation Of Formula H

We now proceed to work on $\frac{\partial_j x_s^i}{\|Dx_s\|} - \frac{\partial_j y_s^i}{\|Dy_s\|}$ by proving the following proposition on $\frac{\partial_j x_s^i}{\|Dx_s\|}$.

Proposition 4.3 (SDE for the direction of the derivative).
We have the SDE
$$\frac{\partial_j x_s^i}{\|Dx_s\|} = \cdots$$

Proof: Since by Itô's formula … we only have to note that by Lemmas 1.2 and 3.1 …, which combined with Lemmas 1.2 and 3.1 and put into (24) yields $\partial_j x_t^i$ … This proves Proposition 4.3. $\square$

Of course the latter implies …

Letting … we have to compute the cross variations in order to apply Itô's formula for powers to (27):
$$\langle VII\rangle_t = \cdots$$
and similarly …

as well as
$$\langle VII_\cdot, VIII_\cdot\rangle_t = \frac{\beta_L - \beta_N}{2}\int_0^t \Big(\frac{\partial_j x_s^i}{\|Dx_s\|} - \frac{\partial_j y_s^i}{\|Dy_s\|}\Big)^2 ds + \frac{\beta_L + \beta_N}{2}\int_0^t \sum_{k,l}\Big(\frac{\partial_j x_s^k}{\|Dx_s\|} - \frac{\partial_j y_s^k}{\|Dy_s\|}\Big)\Big(\frac{\partial_l x_s^k\,\partial_l x_s^i\,\partial_j x_s^i}{\|Dx_s\|^3} - \frac{\partial_l y_s^k\,\partial_l y_s^i\,\partial_j y_s^i}{\|Dy_s\|^3}\Big)ds$$
$$+ \sum_{k,l,m,n}\int_0^t \Big(\frac{\partial_j x_s^k\,\partial_m y_s^n\,\partial_m y_s^l\,\partial_j y_s^i}{\|Dx_s\|\,\|Dy_s\|^3} + \frac{\partial_j y_s^k\,\partial_m x_s^n\,\partial_m x_s^l\,\partial_j x_s^i}{\|Dy_s\|\,\|Dx_s\|^3}\Big)\tilde b^{i,l}_{k,n}(x_s - y_s)\,ds. \qquad (32)$$
The combination of (30), (31) and (32) with (27) and Itô's formula now yields for $X_t^{(ij)} := \frac{\partial_j x_t^i}{\|Dx_t\|} - \frac{\partial_j y_t^i}{\|Dy_t\|}$ the following.

Proposition 4.4 (formula H).
…

Proof: There is nothing left to show. $\square$

Proposition 4.4 will be useful to estimate the expectation in Lemma 2.3 on the event $\{T \ge \tau\}$, but since this requires some additional preparation we will first consider the reversed case in the following intermezzo.

4.3 Treating Small $|x-y|$ And Large $T$

Since we obviously have from Schwarz' inequality that
$$\mathbb{E}\big[\,\cdots\,\big] \le \cdots,$$
it seems reasonable to compute some useful estimate for the tails of $\tau$. We will also immediately specify conditions on $r$ and $\tilde r$. Assume first with Lemma 1.3 that $\tilde r < \bar r$ is small enough to ensure …

Lemma 4.5 (tails of $\tau$).
…

Lemma 4.6 (estimate for $T$ after $\tau$).
…

Proof: If $T \le \frac{\log(\tilde r/|x-y|)}{128\beta_L q}$ then we just have to combine Lemmas 4.2 and 4.5 with (34) to obtain
$$\mathbb{E}\big[\,\cdots\,\big] \le \cdots,$$
which means we only have to consider the case $T \ge \frac{\log(\tilde r/|x-y|)}{128\beta_L q}$. … Thus the proof of Lemma 4.6 is complete. $\square$

Proposition 4.7 (expectation of zero-mean-martingales on rare events).
For $i,j \in \{1,\dots,d\}$ we define $\check M_t = \check M_t^{(ij)} := \int_0^t \big(X_s^{(ij)}\big)^{q-1}\,dVII_s - \int_0^t \big(X_s^{(ij)}\big)^{q-1}\,dVIII_s$. Then we have

1. $\check M_t$ is a zero-mean martingale and we have for any $t \ge 0$ that $\langle\check M\rangle_t \le \check c\,t$ a.s. for $\check c := 4(d^2 + 2d^4 + d^6)\max_{k,l,m,n}\sup_{z\in\mathbb{R}^d}|\partial_k\partial_l b_{m,n}(z)|$.

2. For all $0 \le t \le T$ and $q \ge q_0$ we get that
$$\mathbb{E}\big[\check M_t\,\mathbf 1_{\{T\ge\tau\}}\big] \le |x-y|^{2q}\,\frac{\bar c_3}{\tilde r^{2q}}\,e^{(\Lambda_3 + \frac12\sigma_3^2 q + \sigma_4^2 q^2)T} =: f(T)$$
for $\bar c_3 := \sqrt{\frac{q\,\check c}{128\beta_L}} \vee 1$, $\Lambda_3 := \frac{1}{2e}$, $\sigma_3 := \sqrt{\frac{q}{64\beta_L}}\,e\,(128\beta_L\check c)^{\frac14}$ and $\sigma_4 := \sqrt{256\beta_L}$.

Proof: 1. is clear from the definition of $\check M$ except for the value of $\check c$. This value follows from (30), (31) and (32) since $\langle\check M\rangle_t \le \langle VII\rangle_t + \langle VIII\rangle_t + 2|\langle VII, VIII\rangle_t|$. For the proof of 2. first assume $T \le \frac{\log(\tilde r/|x-y|)}{128\beta_L q}$. From Lemma 4.5 we know that $\mathbb{P}[T > \tau] \le \frac{|x-y|^{4q}}{\tilde r^{4q}}$. This combined with Schwarz' inequality and 1. yields
$$\mathbb{E}\big[\check M_t\,\mathbf 1_{\{T\ge\tau\}}\big] = \mathbb{E}\big[\check M_t\,\mathbf 1_{\{T>\tau\}}\big] \le \sqrt{\mathbb{E}\big[\langle\check M\rangle_T\big]}\,\sqrt{\mathbb{P}[T>\tau]} \le \sqrt{\check c\,T}\,\frac{|x-y|^{2q}}{\tilde r^{2q}} \le \sqrt{\check c}\,\frac{|x-y|^{2q}}{\tilde r^{2q}}\,e^{\frac{T}{2e}} \qquad (42)$$
which proves Proposition 4.7 in this case. We always have as above that $\mathbb{E}\big[\check M_t\,\mathbf 1_{\{T\ge\tau\}}\big] \le \sqrt{\check c\,T} =: g(T)$, and so we can conclude for the same $T_0$ as before
$$g(T_0) = \sqrt{\frac{\check c}{128\beta_L q}}\,\sqrt{\log\frac{\tilde r}{|x-y|}} \le \sqrt{\frac{\check c}{128\beta_L q}}\,\Big(\frac{\tilde r}{|x-y|}\Big)^{\frac{1}{2e}} \le \bar c_3\Big(\frac{\tilde r}{|x-y|}\Big)^{\frac{\sigma_3^2}{128\beta_L}} \le \bar c_3\Big(\frac{\tilde r}{|x-y|}\Big)^{\frac{\Lambda_3 + \frac12\sigma_3^2 q + \sigma_4^2 q^2}{128\beta_L q}}\Big(\frac{|x-y|}{\tilde r}\Big)^{2q} = |x-y|^{2q}\,\frac{\bar c_3}{\tilde r^{2q}}\,e^{(\Lambda_3 + \frac12\sigma_3^2 q + \sigma_4^2 q^2)T_0} =: f(T_0), \qquad (43)$$
so now (with Lemma 1.4) we only have to establish the inequality $g'(T_0) \le f'(T_0)$ to complete the proof of Proposition 4.7. The proof of this is as follows:
$$g'(T_0) = \frac12\sqrt{\frac{128\beta_L q\,\check c}{\log\frac{\tilde r}{|x-y|}}} \le \frac12\sqrt{128\beta_L q\,\check c} \le \Big(\Lambda_3 + \tfrac12\sigma_3^2 q + \sigma_4^2 q^2\Big)\,\frac{\bar c_3\,|x-y|^{2q}}{\tilde r^{2q}}\,\Big(\frac{\tilde r}{|x-y|}\Big)^{\frac{\Lambda_3 + \frac12\sigma_3^2 q + \sigma_4^2 q^2}{128\beta_L q}} = f'(T_0). \qquad (44)$$
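The convex/concave comparison invoked here (Lemma 1.4) is easy to sanity-check numerically: once a convex $f$ dominates a concave $g$ in value and slope at one point $T_0$, it dominates for all later times. The following tiny sketch does this for a square-root function (playing the role of $g(T) = \sqrt{\check c\,T}$) against an exponential (playing the role of $f$), with purely illustrative constants.

```python
import numpy as np

# Illustrative constants only; f plays the role of the exponential bound f(T)
# and g the role of g(T) = sqrt(c_check * T) from the proof of Proposition 4.7.
c_check, c0, rate = 2.0, 0.05, 0.8
f  = lambda T: c0 * np.exp(rate * T)
g  = lambda T: np.sqrt(c_check * T)
fp = lambda T: c0 * rate * np.exp(rate * T)   # f'
gp = lambda T: 0.5 * np.sqrt(c_check / T)     # g'

# Find some T0 with f(T0) >= g(T0) and f'(T0) >= g'(T0) ...
Ts = np.linspace(0.1, 50, 5000)
ok = (f(Ts) >= g(Ts)) & (fp(Ts) >= gp(Ts))
T0 = Ts[ok][0]
# ... then Lemma 1.4 (f convex, g concave) gives f >= g on [T0, infinity); check on a grid.
later = np.linspace(T0, 200, 5000)
print("T0 =", T0, " f >= g beyond T0:", bool(np.all(f(later) >= g(later))))
```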

4.4 Evaluation Of Formula H

Now we are prepared to prove the following key result.

Lemma 4.8 (first estimate for $\tau$ after $T$).
We have for even $q \ge q_0$, if we assume $\tilde r \le \frac{1}{\sqrt2}$, the following.

1. For $h(t) := \mathbb{E}\big[\big(\max_{i,j}\big(X_t^{(ij)}\big)^q \vee |x_t - y_t|^q\big)\,\mathbf 1_{\{t\le\tau\}}\big]$ the estimate
$$h(t) \le |x-y|^q\,\frac{\bar c_3\,d^{2q}}{\tilde r^{2q}}\;\frac{e^{\big(\Lambda_3 + (\lambda\vee\frac12\sigma_3^2)q + (\frac12\bar\sigma^2\vee\sigma_4^2)q^2 + d^2\kappa_q\big)t}}{\Lambda_3 + (\lambda\vee\frac12\sigma_3^2)q + (\frac12\bar\sigma^2\vee\sigma_4^2)q^2 + d^2\kappa_q} \qquad (45)$$
holds, wherein $\kappa_q$ behaves like a polynomial of degree 2 in $q$ (with respect to the growth of its modulus) and equals
$$\kappa_q = \Big|\frac{\beta_L - \beta_N}{4}\,q^2 - \frac{d-1}{2}\beta_N\,q - \frac12\beta_L\,q\Big| + \Big\{C d^6 + \Big[\tfrac94(\beta_N + \beta_L) + 2C\Big]d^4 + \Big[\tfrac32(\beta_N + \beta_L) + C\Big]d^2 + \frac{\beta_N}{2}d\Big\}q^2 - \Big\{C d^6 + \Big[2C - \tfrac32(\beta_N + \beta_L)\Big]d^4 + C d^2 + \tfrac12\beta_N d\Big\}q.$$

2.
$$\mathbb{E}\Big[\max_{i,j}\big(X_t^{(ij)}\big)^q\,\mathbf 1_{\{t\le\tau\}}\Big] \le h(t) \le \bar c_5\,\tilde r^{-2q}\,|x-y|^q\,e^{(\Lambda_5 q + \sigma_5^2 q^2)t} \qquad (46)$$
for $\bar c_5 := \sup_{q\ge3}\dfrac{\bar c_3\,d^{2q}}{\Lambda_3 + (\lambda\vee\frac12\sigma_3^2)q + (\frac12\bar\sigma^2\vee\sigma_4^2)q^2 + d^2\kappa_q}$,
$$\Lambda_5 := \Lambda_3 + \lambda\vee\tfrac12\sigma_3^2 - \Big\{C d^8 + \Big[2C - \tfrac32(\beta_N+\beta_L)\Big]d^6 + C d^4 - \frac{|\beta_N - \beta_L|}{2}d^2\Big\} \quad\text{and}$$
$$\sigma_5^2 := \tfrac12\bar\sigma^2\vee\sigma_4^2 + \Big\{C d^8 + \Big[\tfrac94(\beta_N+\beta_L) + 2C\Big]d^6 + \Big[\tfrac32(\beta_N+\beta_L) + C\Big]d^4 + \frac{\beta_N}{2}d^3 + \frac{|\beta_L - \beta_N|}{4}d^2\Big\}.$$

Proof: 2. follows obviously from 1., so we only have to prove the latter. Since the treatment of Proposition 4.4 leads to rather large formulas, we restrict ourselves to indicating how the computation is done, omitting terms when treating similar ones. We first multiply (33) by $\mathbf 1_{\{t\le\tau\}}$, then we take expectations and apply Fubini's theorem. Recalling the shorthand $\tilde b^{i,j}_{k,l}(z) := \partial_k\partial_l b_{i,j}(z) - \partial_k\partial_l b_{i,j}(0)$ we get
$$\mathbb{E}\big[\big(X_t^{(ij)}\big)^q\,\mathbf 1_{\{t\le\tau\}}\big] = q\,\mathbb{E}\big[\mathbf 1_{\{t\le\tau\}}\check M_t^{(ij)}\big] + \dots + q\,\frac{\beta_N+\beta_L}{2}\int_0^t \mathbb{E}\Big[\big(X_s^{(ij)}\big)^{q-1}\sum_{k,l}\Big(\frac{\partial_j x_s^k\,\partial_l x_s^i\,\partial_l x_s^k}{\|Dx_s\|^3} - \frac{\partial_j y_s^k\,\partial_l y_s^i\,\partial_l y_s^k}{\|Dy_s\|^3}\Big)\mathbf 1_{\{t\le\tau\}}\Big]ds + \dots$$
$$+ q(q-1)\int_0^t \mathbb{E}\Big[\big(X_s^{(ij)}\big)^{q-2}\sum_{k,l}\frac{\partial_j x_s^k\,\partial_j y_s^l}{\|Dx_s\|\,\|Dy_s\|}\,\tilde b^{i,i}_{k,l}(x_s - y_s)\,\mathbf 1_{\{t\le\tau\}}\Big]ds + \dots$$
Applying the triangle inequality and using Lemma 1.3 we get … Then we use that the expectation of a positive random variable is growing in the domain of integration and conclude that the latter is less than or equal to $q\,\mathbb{E}[\dots]$ … Summing up all the terms we suppressed we get that $\mathbb{E}[\dots]$ … This enables us to conclude with Proposition 4.7 and Lemma 1.2
$$\mathbb{E}\Big[\max_{i,j}\big(X_t^{(ij)}\big)^q\,\mathbf 1_{\{t\le\tau\}} \vee |x_t - y_t|^q\,\mathbf 1_{\{t\le\tau\}}\Big] \le \sum_{i,j}\mathbb{E}\big[\big(X_t^{(ij)}\big)^q\,\mathbf 1_{\{t\le\tau\}}\big] + \mathbb{E}\big[|x_t - y_t|^q\big]$$
$$\le q\sum_{i,j}\mathbb{E}\big[\mathbf 1_{\{t\le\tau\}}\check M_t^{(ij)}\big] + d^2\kappa_q\int_0^t \mathbb{E}\Big[\Big(\max_{i,j}\big(X_s^{(ij)}\big)^q \vee |x_s-y_s|^q\Big)\mathbf 1_{\{s\le\tau\}}\Big]ds + \mathbb{E}\big[|x_t - y_t|^q\big]$$
$$\le d^2 q\,|x-y|^{2q}\,\frac{\bar c_3}{\tilde r^{2q}}\,e^{(\Lambda_3+\frac12\sigma_3^2 q+\sigma_4^2 q^2)t} + 2^q|x-y|^q\,e^{(\lambda q + \frac12 q^2\bar\sigma^2)t} + d^2\kappa_q\int_0^t \mathbb{E}\Big[\Big(\max_{i,j}\big(X_s^{(ij)}\big)^q \vee |x_s-y_s|^q\Big)\mathbf 1_{\{s\le\tau\}}\Big]ds$$
$$\le d^2\kappa_q\int_0^t \mathbb{E}\Big[\Big(\max_{i,j}\big(X_s^{(ij)}\big)^q \vee |x_s-y_s|^q\Big)\mathbf 1_{\{s\le\tau\}}\Big]ds + |x-y|^q\,\frac{\bar c_3\,d^{2q}}{\tilde r^{2q}}\,e^{(\Lambda_3+(\lambda\vee\frac12\sigma_3^2)q+(\frac12\bar\sigma^2\vee\sigma_4^2)q^2)t} \qquad (50)$$
since we assumed $\tilde r \le 2^{-1/2}$. This now implies via Grönwall's inequality (see [15, I.§2.1] for an appropriate version) that
$$h(t) \le |x-y|^q\,\frac{\bar c_3\,d^{2q}}{\tilde r^{2q}}\;\frac{e^{(\Lambda_3+(\lambda\vee\frac12\sigma_3^2)q+(\frac12\bar\sigma^2\vee\sigma_4^2)q^2+d^2\kappa_q)t}}{\Lambda_3+(\lambda\vee\frac12\sigma_3^2)q+(\frac12\bar\sigma^2\vee\sigma_4^2)q^2+d^2\kappa_q}. \qquad (51)$$
The proof is complete. $\square$
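The Grönwall step above can be illustrated on a toy version of the same mechanism: a nonnegative function dominated by an exponential forcing term plus a linear integral term stays below the correspondingly inflated exponential. The following sketch checks this numerically for arbitrary illustrative constants (it is a toy analogue, not the flow-specific estimate (50)).

```python
import numpy as np

# Toy version of the Groenwall argument behind (50)-(51):
# if  u(t) <= a*exp(b*t) + kappa * int_0^t u(s) ds,  then u stays below an
# exponential with the inflated rate b + kappa.  Constants are illustrative.
a, b, kappa = 0.3, 0.7, 1.2
dt, T = 1e-3, 10.0
n = int(T / dt)

u = np.zeros(n)
integral = 0.0
for i in range(n):
    t = i * dt
    u[i] = a * np.exp(b * t) + kappa * integral   # saturate the assumed inequality
    integral += u[i] * dt                          # left Riemann sum of int_0^t u(s) ds

t_grid = dt * np.arange(n)
bound = a * np.exp((b + kappa) * t_grid)           # Groenwall-type bound
print("u(T) =", u[-1], " bound(T) =", bound[-1],
      " u <= bound everywhere:", bool(np.all(u <= bound + 1e-9)))
```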

The next lemma will be the last ingredient of the proof of Lemma 2.3.

Lemma 4.9 (second estimate for $\tau$ after $T$).
We have for even $q \ge q_0$ that
$$\mathbb{E}\Big[\sup_{0\le t\le T}|\psi_t(x) - \psi_t(y)|^q\,\mathbf 1_{\{T\ge\tau\}}\Big]^{\frac1q} \le \bar c_6\,\sqrt q\,|x-y|\,e^{(\Lambda_6 + \frac12\sigma_6^2 q)T}$$
with $\Lambda_6 = \Lambda_5 + \frac1e$, $\sigma_6 := \sqrt2\,\sigma_5$ and
$$\bar c_6 := \sup_{q\ge3}\;\frac{\bar c_5^{\frac1q}}{\tilde r^2}\;\frac{\sqrt2\,d^2\,C_q^{\frac1q}\,\sqrt q\,\tfrac12\sqrt{\beta_L+\beta_N} + \sqrt C\,d^3\,C_q^{\frac1q}\,\sqrt q\,\tfrac12\sqrt2 + 2d^4(\beta_N+\beta_L)}{(\Lambda_5 q + \sigma_5^2 q^2)^{\frac1q}}.$$
Therein $C_q^{\frac1q}$ is the largest positive zero of the Hermite polynomial of order $2q$ and can be estimated via $C_q^{\frac1q} \le k\sqrt{4q+1}$ for some constant $k > 0$ (see [1] and [14]).

Proof: By the triangle inequality and the Burkholder-Davis-Gundy inequality, Lemma 4.1 and Jensen's inequality we have
$$\mathbb{E}\Big[\sup_{0\le t\le T}|\psi_t(x)-\psi_t(y)|^q\,\mathbf 1_{\{T\ge\tau\}}\Big]^{\frac1q} \le \mathbb{E}\Big[\sup_{0\le t\le T}\tilde M_t^q\,\mathbf 1_{\{T\ge\tau\}}\Big]^{\frac1q} + \mathbb{E}\Big[\sup_{0\le t\le T}A_t^q\,\mathbf 1_{\{T\ge\tau\}}\Big]^{\frac1q} \le C_q^{\frac1q}\,\mathbb{E}\Big[\langle\tilde M\rangle_{T\wedge\tau}^{\frac q2}\Big]^{\frac1q} + \mathbb{E}\big[A_{T\wedge\tau}^q\big]^{\frac1q}$$
$$\le C\cdots$$
Another application of Fubini's theorem now yields that the latter is less than or equal to (remember that we chose even $q \ge 3$) … The proof is complete. $\square$
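The constant $C_q^{1/q}$ above is controlled through the largest zero of a Hermite polynomial (see [1], [14]). The following sketch computes that largest zero numerically for a few values of $q$ and compares it with the $\sqrt{4q+1}$ growth quoted after Lemma 4.9; taking $k = 1$ and the physicists' Hermite polynomials $H_{2q}$ is an assumption about the normalization used there.

```python
import numpy as np
from numpy.polynomial import hermite

# Largest positive zero of the (physicists') Hermite polynomial H_{2q},
# compared with the sqrt(4q+1) growth quoted after Lemma 4.9.
for q in [3, 5, 10, 20, 40]:
    coeffs = np.zeros(2 * q + 1)
    coeffs[-1] = 1.0                       # coefficients of H_{2q} in the Hermite basis
    largest_zero = hermite.hermroots(coeffs).max()
    print(q, round(float(largest_zero), 4), round(float(np.sqrt(4 * q + 1)), 4))
```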

We now fix $\tilde r \le \bar r \wedge 2^{-1/2}$ subject to (35) and $r \le e^{-1}\tilde r$ and conclude: since $\sup_{0\le t\le T}|\psi_t(x) - \psi_t(y)|^q$ is the sum of the terms $\sup_{0\le t\le T}|\psi_t(x) - \psi_t(y)|^q\,\mathbf 1_{\{T\ge\tau\}}$ and $\sup_{0\le t\le T}|\psi_t(x) - \psi_t(y)|^q\,\mathbf 1_{\{T<\tau\}}$, another triangle inequality together with Lemmas 4.2, 4.6 and 4.9 completes the proof of Lemma 2.3 and hence of Theorem 2.1. $\square$

References

[1] B. Davis: On The Lp-Norms Of Stochastic Integrals And Other Martingales, Duke Mathematical Journal 43, (1976) MR0418219

[2] H. van Bargen and G. Dimitroff: Isotropic Ornstein-Uhlenbeck Flows, Stoch. Proc. Appl., (2008), doi:10.1016/j.spa.2008.10.007 MR2531088

[3] H. van Bargen: Ruelle's Inequality For Isotropic Ornstein-Uhlenbeck Flows, preprint, (2008)

[4] P. Baxendale and T. Harris: Isotropic Stochastic Flows, Ann. Probab. 14, 1155-1179, (1986)

[5] M. Cranston, M. Scheutzow and D. Steinsaltz: Linear Expansion Of Isotropic Brownian Flows, Electron. Commun. Probab. 4, 91-101, (1999) MR1741738

[6] D. Dolgopyat, V. Kaloshin and L. Koralov: A Limit Shape Theorem For Periodic Stochastic Dispersion, Comm. Pure Appl. Math., (2002) MR2059676

[7] G. Dimitroff: PhD thesis, Technische Universität Berlin, (2006)

[8] P. Imkeller and M. Scheutzow: On The Spatial Asymptotic Behaviour Of Stochastic Flows In Euclidean Space, Ann. Probab. 27, 109-129, (1999) MR1681138

[9] H. Kunita: Stochastic Flows and Stochastic Differential Equations, Cambridge University Press, (1990) MR1070361

[10] Y. Le Jan: On Isotropic Brownian Motions, Z. Wahrscheinlichkeitstheorie verw. Gebiete 70, 609-620, (1985) MR0807340

[11] H. Lisei and M. Scheutzow: On The Dispersion Of Sets Under The Action Of An Isotropic Brownian Flow, Probabilistic Methods In Fluids, Proceedings Of The Swansea 2002 Workshop, Editor I.M. Davies, 224-238, (2003) MR2083375

[12] H. Lisei and M. Scheutzow: Linear Bounds And Gaussian Tails In A Stochastic Dispersion Model, Stochastics And Dynamics 1, 389-403, (2001) MR1859014

[13] P.-D. Liu and M. Qian: Smooth Ergodic Theory of Random Dynamical Systems, Lecture Notes in Mathematics 1606, Chapter II, (1995) MR1369243

[14] P. E. Ricci: Improving the Asymptotics for the Greatest Zeros of Hermite Polynomials, Computers Math. Appl. 30, 409-416, (1995) MR1343062

[15] G. Sansone and R. Conti: Non-Linear Differential Equations, Pergamon Press, (1964) MR0177153

[16] M. Scheutzow: Chaining Techniques and their Application to Stochastic Flows, Trends in Stochastic Analysis, LMS Lecture Note Series 353, Cambridge Univ. Press, 35-63, (2009)

[17] M. Scheutzow and D. Steinsaltz: Chasing Balls Through Martingale Fields, Ann. Probab. 30, 2046-2080, (2002) MR1944015

[18] A. M. Yaglom: Some Classes of Random Fields in n-Dimensional Space, Related to Stationary Random Processes, Theory Probab. Appl. 2, 273-320, (1957)
