
ELECTRONIC COMMUNICATIONS in PROBABILITY

A NOTE ON DIRECTED POLYMERS IN GAUSSIAN ENVIRONMENTS

YUEYUN HU

Département de Mathématiques, Université Paris XIII, 99 avenue J-B Clément, 93430 Villetaneuse, France

email: [email protected]

QI-MAN SHAO

Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China

email: [email protected]

Submitted May 28, 2009, accepted in final form September 19, 2009. AMS 2000 Subject classification: Primary 60K37

Keywords: Directed polymer, gaussian environment.

Abstract

We study the problem of directed polymers in gaussian environments in $\mathbb{Z}^d$ from the viewpoint of a gaussian family indexed by the set of random walk paths. In the zero-temperature case, we give a numerical bound on the maximum of the Hamiltonian, whereas in the finite temperature case, we establish an equivalence between the very strong disorder and the growth rate of the entropy associated to the model.

1 Introduction and main results

1.1 Finite temperature case

Let $(g(i,x))_{i\ge 0,\,x\in\mathbb{Z}^d}$ be i.i.d. standard real-valued gaussian variables. We denote by $\mathbb{P}$ and $\mathbb{E}$ the corresponding probability and expectation with respect to $g(\cdot,\cdot)$. Let $\{S_k,\ k\ge 0\}$ be a simple symmetric random walk on $\mathbb{Z}^d$, independent of $g(\cdot,\cdot)$. We denote by $P_x$ the probability measure of $(S_n)_{n\in\mathbb{N}}$ starting at $x\in\mathbb{Z}^d$ and by $E_x$ the corresponding expectation. We also write $P=P_0$ and $E=E_0$.

The directed polymer measure in a gaussian random environment, denoted by $\langle\cdot\rangle_{(n)}$, is a random probability measure defined as follows. Let $\Omega_n$ be the set of nearest neighbor paths of length $n$:
$$\Omega_n \stackrel{\text{def}}{=} \Big\{\gamma:\{1,\dots,n\}\to\mathbb{Z}^d,\ |\gamma_k-\gamma_{k-1}|=1,\ k=1,2,\dots,n,\ \gamma_0=0\Big\}.$$
For any function $F:\Omega_n\to\mathbb{R}_+$,
$$\big\langle F(S)\big\rangle_{(n)} \stackrel{\text{def}}{=} \frac{1}{Z_n}\, E\Big[F(S)\, e^{\beta H_n(g,S)-\beta^2 n/2}\Big], \qquad H_n(g,\gamma)\stackrel{\text{def}}{=}\sum_{i=1}^n g(i,\gamma_i),$$
where $\beta>0$ denotes the inverse temperature and $Z_n$ is the partition function:
$$Z_n = Z_n(\beta,g) = E\Big[e^{\beta H_n(g,S)-\beta^2 n/2}\Big].$$
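As a concrete illustration of these definitions (our own sketch, not part of the paper; all function names are ours), the following Python snippet computes $Z_n$ exactly for small $n$ and $d$ by enumerating the $(2d)^n$ paths of $\Omega_n$ in a sampled gaussian environment. Averaging $Z_n$ over many sampled environments should return a value close to $\mathbb{E} Z_n = 1$.

import itertools
import math
import random

def partition_function(n, d, beta, g):
    """Exact Z_n for one fixed environment g, averaging over all (2d)^n paths.

    g is a dict mapping (i, x) -> gaussian value, x being a d-tuple.
    """
    steps = [tuple(s * (1 if k == j else 0) for k in range(d))
             for j in range(d) for s in (+1, -1)]
    total = 0.0
    for choice in itertools.product(steps, repeat=n):
        pos, h = tuple(0 for _ in range(d)), 0.0
        for i, step in enumerate(choice, start=1):
            pos = tuple(p + s for p, s in zip(pos, step))
            h += g[(i, pos)]                      # H_n(g, gamma) = sum_i g(i, gamma_i)
        total += math.exp(beta * h - beta ** 2 * n / 2.0)
    return total / (2 * d) ** n

def sample_environment(n, d, rng):
    """Draw g(i, x) ~ N(0, 1) for every site x with all coordinates in [-i, i]."""
    return {(i, x): rng.gauss(0.0, 1.0)
            for i in range(1, n + 1)
            for x in itertools.product(range(-i, i + 1), repeat=d)}

if __name__ == "__main__":
    n, d, beta = 5, 1, 0.8
    rng = random.Random(0)
    zs = [partition_function(n, d, beta, sample_environment(n, d, rng))
          for _ in range(2000)]
    print("empirical mean of Z_n:", round(sum(zs) / len(zs), 3))  # should be close to 1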

We refer to Comets, Shiga and Yoshida [3] for a review on directed polymers. It is known (see e.g. [2], [3]) that the so-called free energy, the limit of $\frac1n\log Z_n$, exists almost surely and in $L^1$:
$$p(\beta) := \lim_{n\to\infty}\frac1n\log Z_n,$$
where $p(\beta)$ is a constant and $p(\beta)\le 0$ by Jensen's inequality, since $\mathbb{E} Z_n=1$.

A central problem in the study of directed polymers is to determine the region $\{\beta>0: p(\beta)<0\}$, also called the region of very strong disorder. This is an important problem: for instance, $p(\beta)<0$ yields interesting information on the localization of the polymer itself.

By using the FKG inequality, Comets and Yoshida [4] showed the monotonicity of $p(\beta)$; therefore the problem is to determine
$$\beta_c := \inf\{\beta>0: p(\beta)<0\}.$$
It has been shown by Imbrie and Spencer [8] that for $d\ge 3$, $\beta_c>0$ (whose exact value remains unknown). Comets and Vargas [5] proved that (for a wide class of random environments)

$$\beta_c = 0, \quad\text{if } d=1. \qquad (1.1)$$
Recently, Lacoin [10] skilfully used ideas developed for pinning models and solved the problem in the two-dimensional case:
$$\beta_c = 0, \quad\text{if } d=2. \qquad (1.2)$$
Moreover, Lacoin [10] gave precise bounds on $p(\beta)$ when $\beta\to 0$, in both the one-dimensional and two-dimensional cases.

In this note, we study this problem from the point of view of entropy (see also Birkner [1]). Let
$$e_n(\beta) := \mathbb{E}\big[Z_n\log Z_n\big] - \big(\mathbb{E} Z_n\big)\log\big(\mathbb{E} Z_n\big) = \mathbb{E}\big[Z_n\log Z_n\big]$$
be the entropy associated to $Z_n$ (recalling $\mathbb{E} Z_n=1$).

Theorem 1.1. Let $\beta>0$. The following limit exists:
$$\tilde e(\beta) := \lim_{n\to\infty}\frac{e_n(\beta)}{n} = \inf_{n\ge 1}\frac{e_n(\beta)}{n} \ \ge\ 0. \qquad (1.3)$$

There is some numerical constant $c_d>0$, depending only on $d$, such that the following assertions are equivalent:

(a) $\tilde e(\beta)>0$;

(b) $\limsup_{n\to\infty} e_n(\beta)/\log n > c_d$;

(c) $p(\beta)<0$.

The proof of the implication (b) $\Rightarrow$ (c) relies on a criterion for $p(\beta)<0$ (cf. Fact 3.1 in Section 3) developed by Comets and Vargas [5] in a more general setting.

We can easily check (b) in the one-dimensional case: in fact, we shall show in the sequel (cf. (3.7)) that in any dimension and for any $\beta>0$,
$$e_n(\beta) \ \ge\ \frac{\beta^2}{2}\, E\big(L_n(S^1,S^2)\big),$$
where $S^1$ and $S^2$ are two independent copies of $S$ and $L_n(\gamma,\gamma') = \sum_{k=1}^n 1_{(\gamma_k=\gamma'_k)}$ is the number of common points of the two paths $\gamma$ and $\gamma'$. It is well known that $E\big(L_n(S^1,S^2)\big)$ is of order $n^{1/2}$ when $d=1$ and of order $\log n$ when $d=2$. Therefore (b) holds in $d=1$, and by the implication (b) $\Rightarrow$ (c) we recover Comets and Vargas' result (1.1) in the one-dimensional gaussian environment case.
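The orders of magnitude quoted above are easy to observe numerically. The following small Monte Carlo sketch (ours, with arbitrary parameter choices) estimates $E\,L_n(S^1,S^2)$ for two independent simple random walks: in $d=1$ the estimate should roughly double when $n$ is multiplied by $4$, while in $d=2$ it grows much more slowly.

import random

def overlap(n, d, rng):
    """L_n(S^1, S^2) for two independent simple random walks on Z^d."""
    s1, s2 = [0] * d, [0] * d
    hits = 0
    for _ in range(n):
        for s in (s1, s2):
            j = rng.randrange(d)
            s[j] += rng.choice((-1, 1))
        if s1 == s2:
            hits += 1
    return hits

def mean_overlap(n, d, reps=2000, seed=0):
    rng = random.Random(seed)
    return sum(overlap(n, d, rng) for _ in range(reps)) / reps

if __name__ == "__main__":
    for d in (1, 2):
        for n in (100, 400, 1600):
            print("d =", d, " n =", n, " mean overlap ~", round(mean_overlap(n, d), 2))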

1.2 Zero temperature case

When $\beta\to\infty$, the problem of directed polymers boils down to a problem of first-passage percolation. Let
$$H_n^* = H_n^*(g) := \max_{\gamma\in\Omega_n} H_n(g,\gamma), \qquad H_n(g,\gamma)=\sum_{i=1}^n g(i,\gamma_i),$$
where as before $\Omega_n=\{\gamma:\{1,\dots,n\}\to\mathbb{Z}^d,\ |\gamma_i-\gamma_{i-1}|=1,\ i=1,2,\dots,n,\ \gamma_0=0\}$.

The problem is to characterize the paths $\gamma$ which maximize $H_n(g,\gamma)$; see Johansson [9] for the solution of the Poisson points case. We limit our attention here to some explicit bounds on $H_n^*$. An easy subadditivity argument (see Lemma 2.2) shows that
$$\frac{H_n^*}{n} \ \to\ \sup_{n\ge 1}\frac{\mathbb{E} H_n^*}{n} \ \stackrel{\text{def}}{=}\ c_d^*, \qquad \text{both a.s. and in } L^1.$$
By Slepian's inequality ([12]),
$$\mathbb{E} H_n^* \ \le\ \sqrt{n}\;\mathbb{E}\max_{\gamma\in\Omega_n} N_\gamma,$$
where $(N_\gamma)_{\gamma\in\Omega_n}$ is a family of i.i.d. centered gaussian variables of variance 1. Since $\#\Omega_n=(2d)^n$, it is a standard exercise from extreme value theory that
$$\frac{1}{\sqrt n}\,\mathbb{E}\max_{\gamma\in\Omega_n} N_\gamma \ \to\ \sqrt{2\log(2d)}.$$
Hence
$$c_d^* \ \le\ \sqrt{2\log(2d)}.$$
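For the reader's convenience, here is the upper-bound half of this standard exercise (a routine computation, not reproduced from the paper): for any $\lambda>0$, by Jensen's inequality,
$$\mathbb{E}\max_{\gamma\in\Omega_n} N_\gamma \ \le\ \frac1\lambda\log \mathbb{E}\, e^{\lambda\max_{\gamma} N_\gamma} \ \le\ \frac1\lambda\log\sum_{\gamma\in\Omega_n}\mathbb{E}\, e^{\lambda N_\gamma} \ =\ \frac{n\log(2d)}{\lambda}+\frac{\lambda}{2},$$
and the choice $\lambda=\sqrt{2n\log(2d)}$ gives $n^{-1/2}\,\mathbb{E}\max_{\gamma\in\Omega_n} N_\gamma \le \sqrt{2\log(2d)}$; the matching lower bound is the usual extreme-value estimate for independent gaussian variables.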

It is natural to ask whether this inequality is strict. In fact, a strict inequality means that the gaussian family $\{H_n(g,\gamma),\ \gamma\in\Omega_n\}$ is sufficiently correlated to be significantly different from an independent family, exactly as in the problem of determining whether $p(\beta)<0$.

We prove that the inequality is strict by establishing a numerical bound.

Theorem 1.2. For any $d\ge 1$, we have
$$c_d^* \ \le\ \sqrt{\,2\log(2d) \ -\ \frac{2d-1}{2d}\Big(1-\frac{2d-1}{5\pi d}\Big)\Big(1-\Phi\big(\sqrt{\log(2d)}\big)\Big)\,},$$
where $\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-u^2/2}\,du$.
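To see what Theorem 1.2 gives numerically, one can evaluate the right-hand side and compare it with the trivial bound $\sqrt{2\log(2d)}$. The short Python sketch below (ours; the helper names are hypothetical) does this using the error function for $\Phi$. For $d=1$ the bound is roughly $1.136$ versus $\sqrt{2\log 2}\approx 1.177$, so the improvement is small but strict.

import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bound_theorem_1_2(d):
    correction = ((2 * d - 1) / (2.0 * d)
                  * (1.0 - (2 * d - 1) / (5.0 * math.pi * d))
                  * (1.0 - Phi(math.sqrt(math.log(2 * d)))))
    return math.sqrt(2.0 * math.log(2 * d) - correction)

if __name__ == "__main__":
    for d in (1, 2, 3):
        print("d =", d,
              " Theorem 1.2 bound ~", round(bound_theorem_1_2(d), 4),
              " trivial bound ~", round(math.sqrt(2.0 * math.log(2 * d)), 4))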


2 Proof of Theorem 1.2

We begin with several preliminary results. Recall first the following concentration of measure property of Gaussian processes (see Ibragimov et al. [7]).

Fact 2.1. Consider a function $F:\mathbb{R}^M\to\mathbb{R}$ and assume that its Lipschitz constant is at most $A$, i.e.
$$|F(x)-F(y)| \ \le\ A\,\|x-y\|, \qquad x,y\in\mathbb{R}^M.$$
Then, if $\xi$ is a standard gaussian vector in $\mathbb{R}^M$, we have, for every $\lambda\in\mathbb{R}$ and every $u>0$,
$$\mathbb{E}\, e^{\lambda(F(\xi)-\mathbb{E} F(\xi))} \ \le\ e^{\lambda^2 A^2/2} \qquad\text{and}\qquad \mathbb{P}\big(|F(\xi)-\mathbb{E} F(\xi)|>u\big) \ \le\ 2\, e^{-u^2/(2A^2)}.$$

Lemma 2.2. There exists some positive constant $c_d^*$ such that
$$\frac{H_n^*}{n} \ \to\ c_d^* = \sup_{n\ge 1}\frac{\mathbb{E} H_n^*}{n}, \qquad \text{a.s. and in } L^1. \qquad (2.1)$$
Moreover, for every $n\ge 1$, $\lambda\in\mathbb{R}$ and $u>0$,
$$\mathbb{E}\, e^{\lambda(H_n^*-\mathbb{E} H_n^*)} \ \le\ e^{\lambda^2 n/2} \qquad\text{and}\qquad \mathbb{P}\big(|H_n^*-\mathbb{E} H_n^*|>u\big) \ \le\ 2\, e^{-u^2/(2n)}. \qquad (2.2)$$

Proof: We prove at first the concentration inequality. Define a function $F:\mathbb{R}^m\to\mathbb{R}$ by $F(z)=\max_{\gamma\in\Omega_n}\sum_{i=1}^n z(i,\gamma_i)$, so that $H_n^*=F(g)$; since $|F(z)-F(z')|\le\sqrt n\,\|z-z'\|$, the Lipschitz constant of $F$ is at most $\sqrt n$, and applying the gaussian concentration inequality of Fact 2.1, we get (2.2).

Now we prove that $n\mapsto\mathbb{E} H_n^*$ is superadditive: for $n,k\ge 1$, let $\gamma'\in\Omega_n$ be a path achieving $H_n^*$ and set $x=\gamma'_n$; concatenating $\gamma'$ with a path of length $k$ achieving the maximum in the shifted environment $g\circ\tau_{n,x}$ (see (2.3) below) gives $H_{n+k}^*\ge H_n^*+H_k^*(g\circ\tau_{n,x})$, and taking expectations yields $\mathbb{E} H_{n+k}^*\ge\mathbb{E} H_n^*+\mathbb{E} H_k^*$. By Fekete's lemma, $\mathbb{E} H_n^*/n\to\sup_{n\ge1}\mathbb{E} H_n^*/n=c_d^*$, which in view of the concentration (2.2) implies (2.1). $\square$
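As a numerical illustration of Lemma 2.2 (our own sketch, not taken from the paper), $H_n^*$ can be computed exactly for a sampled environment by the obvious dynamic programming recursion $M_k(x)=g(k,x)+\max_{|y-x|=1}M_{k-1}(y)$, with $H_n^*=\max_x M_n(x)$; the ratio $H_n^*/n$ should then settle near $c_d^*$.

import random

def h_star(n, d, rng):
    """Exact H_n^* for one sampled environment, via dynamic programming."""
    steps = [tuple(s * (1 if k == j else 0) for k in range(d))
             for j in range(d) for s in (+1, -1)]
    # M maps a site (a d-tuple) to the best Hamiltonian of a path ending there.
    M = {tuple(0 for _ in range(d)): 0.0}
    for k in range(1, n + 1):
        best = {}
        for x, value in M.items():
            for step in steps:
                y = tuple(a + b for a, b in zip(x, step))
                if y not in best or value > best[y]:
                    best[y] = value
        # add the environment value g(k, y), drawn fresh for each reachable site
        M = {y: v + rng.gauss(0.0, 1.0) for y, v in best.items()}
    return max(M.values())

if __name__ == "__main__":
    rng = random.Random(1)
    for n in (50, 200, 800):
        print(n, round(h_star(n, 1, rng) / n, 3))  # should settle near c_1^*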

Define
$$\phi_n(\lambda) = \log\Big(\sum_{x\dashv n}\mathbb{E}\, e^{\lambda H^*_{n,x}}\Big), \qquad \phi_n^*(a) = \sup_{\lambda>0}\big(a\lambda-\phi_n(\lambda)\big),$$
where $x\dashv n$ means that the site $x\in\mathbb{Z}^d$ can be reached by the walk in $n$ steps and $H^*_{n,x}:=\max_{\gamma\in\Omega_n:\,\gamma_n=x}H_n(g,\gamma)$.

Lemma 2.3. For any $n\ge 1$, let $\zeta_n\stackrel{\text{def}}{=}\inf\{c>0:\ \phi_n^*(cn)>0\}$. We have
$$c_d^* \ \le\ \zeta_n \ \le\ c_d^* + 2\sqrt{\frac{d\log(2n+1)}{n}}.$$

Proof: Let $\tau_{n,x}$ be the time and space shift on the environment:
$$g\circ\tau_{n,x}(\cdot,\cdot) = g(n+\cdot,\ x+\cdot). \qquad (2.3)$$
We have, for any $n,k$,
$$H^*_{n+k} = \max_{x\dashv n}\Big\{\max_{\gamma\in\Omega_n:\,\gamma_n=x}H_n(g,\gamma) + H_k^*(g\circ\tau_{n,x})\Big\}.$$
Recall that $H^*_{n,x}=\max_{\gamma\in\Omega_n:\,\gamma_n=x}H_n(g,\gamma)$. Then for any $\lambda\in\mathbb{R}$,
$$\mathbb{E}\, e^{\lambda H^*_{n+k}} = \mathbb{E}\, e^{\lambda\max_{x\dashv n}(H^*_{n,x}+H_k^*(g\circ\tau_{n,x}))} \ \le\ \mathbb{E}\Big(\sum_{x\dashv n}e^{\lambda H^*_{n,x}}\,e^{\lambda H_k^*(g\circ\tau_{n,x})}\Big) = \mathbb{E}\Big(\sum_{x\dashv n}e^{\lambda H^*_{n,x}}\Big)\,\mathbb{E}\, e^{\lambda H_k^*}.$$
We get
$$\mathbb{E}\, e^{\lambda H^*_{jn}} \ \le\ e^{j\phi_n(\lambda)}, \qquad j,n\ge 1,\ \lambda\in\mathbb{R}.$$
Chebychev's inequality implies that
$$\mathbb{P}\big(H^*_{jn}>cjn\big) \ \le\ e^{-j\phi_n^*(cn)}, \qquad c>0,$$
where $\phi_n^*(a)=\sup_{\lambda>0}(a\lambda-\phi_n(\lambda))$. Then for any $n$ and $c$ such that $\phi_n^*(cn)>0$,
$$\limsup_{j\to\infty}\frac{H^*_{jn}}{jn} \ \le\ c, \qquad \text{a.s.}$$
It follows that
$$c_d^* \ \le\ \zeta_n, \qquad n\ge 1.$$

On the other hand, by using the concentration inequality (2.2) and the fact that $\mathbb{E}(H_n^*)\le n c_d^*$, we get
$$\mathbb{E}\, e^{\lambda H_n^*} \ \le\ e^{\lambda\mathbb{E} H_n^*}\, e^{\lambda^2 n/2} \ \le\ e^{\lambda n c_d^* + \lambda^2 n/2}, \qquad \lambda>0.$$
It follows that for any $\lambda>0$,
$$\phi_n(\lambda) \ \le\ \log\sum_{x\dashv n}\mathbb{E}\, e^{\lambda H_n^*} \ \le\ \lambda n c_d^* + \frac{\lambda^2}{2}n + 2d\log(2n+1).$$
Choosing $\lambda = 2\sqrt{\frac{d\log(2n+1)}{n}}$, we get $\phi_n(\lambda)<\lambda n(c_d^*+a)$ for any $a>2\sqrt{\frac{d\log(2n+1)}{n}}$; hence $\phi_n^*(cn)\ge\lambda n(c-c_d^*-a)>0$ for every $c>c_d^*+a$, which gives $\zeta_n\le c_d^*+2\sqrt{\frac{d\log(2n+1)}{n}}$. $\square$

Proof of Theorem 1.2: Take $n=2$ in Lemma 2.3; we want to estimate $\phi_2(\lambda)$.

Let $g, g_0, g_1, g_2,\dots$ be i.i.d. standard gaussian variables. Observe that the possible choices of $\gamma_1$ are $(\pm1,0,\dots,0)$, $(0,\pm1,0,\dots,0)$, $\dots$, $(0,\dots,0,\pm1)$, and the possible choices of $\gamma_2$ are $(0,0,\dots,0)$, $(\pm2,0,\dots,0)$, $(\pm1,\pm1,0,\dots,0)$, $(\pm1,0,\pm1,0,\dots,0)$, $\dots$, $(\pm1,0,\dots,0,\pm1)$, $\dots$, $(0,\pm1,\pm1,0,\dots,0)$, $\dots$, $(0,\dots,\pm1,\pm1)$. Therefore
$$\begin{aligned}
\sum_{x\dashv 2}\mathbb{E}\, e^{\lambda H^*_{2,x}}
&= \mathbb{E}\, e^{\lambda(g_0+\max_{1\le i\le 2d} g_i)} + 2d\,\mathbb{E}\, e^{\lambda(g_0+g_1)} + 4\big((d-1)+(d-2)+\cdots+1\big)\,\mathbb{E}\, e^{\lambda(g_0+\max(g_1,g_2))} \\
&= \mathbb{E}\, e^{\lambda(g_0+\max_{1\le i\le 2d} g_i)} + 2d\,\mathbb{E}\, e^{\lambda(g_0+g_1)} + 2d(d-1)\,\mathbb{E}\, e^{\lambda(g_0+\max(g_1,g_2))} \\
&\le \sum_{i=1}^d \mathbb{E}\, e^{\lambda(g_0+\max(g_{2i-1},g_{2i}))} + 2d\,\mathbb{E}\, e^{\lambda(g_0+g_1)} + 2d(d-1)\,\mathbb{E}\, e^{\lambda(g_0+\max(g_1,g_2))} \\
&= 2d\,\mathbb{E}\, e^{\lambda(g_0+g_1)} + d(2d-1)\,\mathbb{E}\, e^{\lambda(g_0+\max(g_1,g_2))} \\
&= 2d\, e^{\lambda^2} + d(2d-1)\, e^{\lambda^2/2}\,\mathbb{E}\, e^{\lambda\max(g_1,g_2)} \\
&= 2d\, e^{\lambda^2} + d(2d-1)\, e^{\lambda^2/2}\cdot 2\, e^{\lambda^2/2}\,\Phi(\lambda/\sqrt2) \\
&= e^{\lambda^2}\Big(2d + 2d(2d-1)\,\Phi(\lambda/\sqrt2)\Big), \qquad (2.4)
\end{aligned}$$

where we use the fact that
$$\mathbb{E}\, e^{\lambda\max(g_1,g_2)} = 2\, e^{\lambda^2/2}\,\Phi(\lambda/\sqrt2).$$
In fact, since $\max(g_1,g_2)=\frac12\big(g_1+g_2+|g_1-g_2|\big)$ and $g_1+g_2$ and $g_1-g_2$ are independent, we have
$$\mathbb{E}\, e^{\lambda\max(g_1,g_2)} = \mathbb{E}\, e^{\lambda(g_1+g_2+|g_1-g_2|)/2} = e^{\lambda^2/4}\,\mathbb{E}\, e^{\lambda|g_1|/\sqrt2} = 2\, e^{\lambda^2/2}\,\Phi(\lambda/\sqrt2),$$
where we use
$$\mathbb{E}\, e^{\lambda|g|} = \frac{2}{\sqrt{2\pi}}\int_0^\infty e^{\lambda x - x^2/2}\,dx = \frac{2\, e^{\lambda^2/2}}{\sqrt{2\pi}}\int_0^\infty e^{-(x-\lambda)^2/2}\,dx = 2\, e^{\lambda^2/2}\,\Phi(\lambda).$$
We conclude from (2.4) that
$$\begin{aligned}
\phi_2(\lambda) &\le \lambda^2 + \log\Big(2d + 2d(2d-1)\,\Phi(\lambda/\sqrt2)\Big) \\
&= \lambda^2 + 2\log(2d) + \log\Big(1-\frac{2d-1}{2d}\big(1-\Phi(\lambda/\sqrt2)\big)\Big) \\
&\le \lambda^2 + 2\log(2d) - \frac{2d-1}{2d}\big(1-\Phi(\lambda/\sqrt2)\big) \ \stackrel{\text{def}}{=}\ h(\lambda).
\end{aligned}$$
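The gaussian identity used above is easy to confirm numerically; the following quick Monte Carlo check (ours) compares the empirical mean of $e^{\lambda\max(g_1,g_2)}$ with $2e^{\lambda^2/2}\Phi(\lambda/\sqrt2)$.

import math
import random

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def compare(lam, reps=200000, seed=0):
    rng = random.Random(seed)
    empirical = sum(math.exp(lam * max(rng.gauss(0, 1), rng.gauss(0, 1)))
                    for _ in range(reps)) / reps
    exact = 2.0 * math.exp(lam ** 2 / 2.0) * Phi(lam / math.sqrt(2.0))
    return empirical, exact

if __name__ == "__main__":
    for lam in (0.5, 1.0, 1.5):
        emp, exact = compare(lam)
        print("lambda =", lam, " monte carlo ~", round(emp, 3), " formula =", round(exact, 3))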

Now consider the function
$$h^*(2c) := \sup_{\lambda>0}\big(2c\lambda - h(\lambda)\big), \qquad c>0.$$
Clearly $h^*(2c)\le\phi_2^*(2c)$ for any $c>0$. Let us study $h^*(2c)$: the maximum is achieved when $2c=h'(\lambda)$, that is,
$$2c = 2\lambda + \frac{2d-1}{2d\sqrt{2\pi}}\, e^{-\lambda^2/4}.$$
Now choose $c$ so that
$$2c\lambda = h(\lambda) = \lambda^2 + 2\log(2d) - \frac{2d-1}{2d}\big(1-\Phi(\lambda/\sqrt2)\big). \qquad (2.5)$$

Let
$$a = \frac{2d-1}{4d\sqrt{2\pi}}\, e^{-\lambda^2/4}, \qquad b = \frac{2d-1}{2d}\big(1-\Phi(\lambda/\sqrt2)\big).$$
Then
$$2c = 2\lambda + 2a, \qquad 2c\lambda = \lambda^2 + 2\log(2d) - b,$$
which gives
$$\lambda^2 + 2a\lambda = 2\log(2d) - b, \qquad\text{or}\qquad c^2 = (\lambda+a)^2 = 2\log(2d) - b + a^2.$$

It is easy to see that
$$1-\Phi(t) \ \ge\ \frac{5}{16}\, e^{-t^2}, \qquad t>0.$$
Hence
$$\frac{16}{5}\Big(\frac{2d-1}{4d\sqrt{2\pi}}\Big)^2\big(1-\Phi(\lambda/\sqrt2)\big) \ \ge\ a^2,$$
and

$$\begin{aligned}
c^2 &= 2\log(2d) - \frac{2d-1}{2d}\big(1-\Phi(\lambda/\sqrt2)\big) + a^2 \\
&\le 2\log(2d) - \Big[\frac{2d-1}{2d} - \frac{16}{5}\Big(\frac{2d-1}{4d\sqrt{2\pi}}\Big)^2\Big]\big(1-\Phi(\lambda/\sqrt2)\big) \\
&= 2\log(2d) - \frac{2d-1}{2d}\Big(1-\frac{2d-1}{5\pi d}\Big)\big(1-\Phi(\lambda/\sqrt2)\big) \\
&\le 2\log(2d) - \frac{2d-1}{2d}\Big(1-\frac{2d-1}{5\pi d}\Big)\Big(1-\Phi\big(\sqrt{\log(2d)}\big)\Big) \ \stackrel{\text{def}}{=}\ \tilde c^{\,2}.
\end{aligned}$$

Here, in the last inequality, we used the fact that $\lambda\le\sqrt{2\log(2d)}$ (which follows from $\lambda^2\le\lambda^2+2a\lambda=2\log(2d)-b\le 2\log(2d)$). Recall that $(c,\lambda)$ satisfies (2.5), and note that $c\le\tilde c$. For any $\epsilon>0$,
$$\phi_2^*\big(2(\tilde c+\epsilon)\big) \ \ge\ h^*\big(2(\tilde c+\epsilon)\big) \ \ge\ 2(c+\epsilon)\lambda - h(\lambda) = 2\epsilon\lambda > 0.$$
It follows that $\zeta_2\le\tilde c+\epsilon$, and hence $c_d^*\le\tilde c$. $\square$

3 Proof of Theorem 1.1

Let
$$Z_m(x) := E\Big[1_{(S_m=x)}\, e^{\beta H_m(g,S)-\beta^2 m/2}\Big], \qquad m\ge 1,\ x\in\mathbb{Z}^d.$$

Fact 3.1 (Comets and Vargas [5]). If there exists some $m\ge 1$ such that
$$\mathbb{E}\Big[\sum_x Z_m(x)\log Z_m(x)\Big] \ \ge\ 0, \qquad (3.1)$$
then $p(\beta)<0$.

In fact, the case $\mathbb{E}\big[\sum_x Z_m(x)\log Z_m(x)\big]=0$ follows from Remark 3.5 in Comets and Vargas [5], whereas if $\mathbb{E}\big[\sum_x Z_m(x)\log Z_m(x)\big]>0$, then the derivative of $\theta\mapsto\mathbb{E}\sum_x Z_m^\theta(x)$ at $\theta=1$ is positive; hence $\mathbb{E}\sum_x Z_m^\theta(x)<1$ for some $\theta<1$, and again by Comets and Vargas [5] (lines before their Remark 3.4) we have $p(\beta)<0$.
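The link between the sign of $\mathbb{E}\sum_x Z_m(x)\log Z_m(x)$ and the derivative mentioned above is simply the (formally differentiated) identity
$$\frac{d}{d\theta}\,\mathbb{E}\sum_x Z_m^{\theta}(x)\,\Big|_{\theta=1} = \mathbb{E}\sum_x Z_m(x)\log Z_m(x),$$
since $\frac{d}{d\theta}Z_m^\theta(x)=Z_m^\theta(x)\log Z_m(x)$.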

We try to check (3.1) in the sequel:

Lemma 3.2. There exists some constant $c_d>0$ such that for any $m\ge 1$,
$$\mathbb{E}\Big[\sum_x Z_m(x)\log Z_m(x)\Big] \ \ge\ Q\big(\log Z_m\big) - c_d\log m,$$
where the probability measure $Q=Q(\beta)$ is defined by
$$dQ\big|_{\mathcal F^g_n} := Z_n\, d\mathbb{P}\big|_{\mathcal F^g_n}, \qquad n\ge 1, \qquad (3.3)$$
with $\mathcal F^g_n:=\sigma\{g(i,x):\ i\le n,\ x\in\mathbb{Z}^d\}$. (Note that $Q(\log Z_m)=\mathbb{E}[Z_m\log Z_m]=e_m(\beta)$.)

Proof: … for some positive constant $c_d$. Assembling all the above, we get a constant $c_d>0$ such that
$$\mathbb{E}\Big[\sum_x Z_m(x)\log Z_m(x)\Big] \ \ge\ Q\big(\log Z_m\big) - c_d\log m$$
for all $m\ge 1$, as desired. $\square$

Let $\mu$ be the standard gaussian measure on $\mathbb{R}^m$. The logarithmic Sobolev inequality says (cf. Gross [6], Ledoux [11]): for any $f:\mathbb{R}^m\to\mathbb{R}$,
$$\int f^2\log f^2\, d\mu - \Big(\int f^2\, d\mu\Big)\log\Big(\int f^2\, d\mu\Big) \ \le\ 2\int|\nabla f|^2\, d\mu.$$
Using the above inequality, we have the following lemma.

Lemma 3.3. Let $S^1$ and $S^2$ be two independent copies of $S$. We have
$$e_n(\beta) \ \le\ \frac{\beta^2}{2}\,\mathbb{E}\Big[Z_n\,\big\langle L_n(S^1,S^2)\big\rangle^{(n)}_2\Big] \qquad (3.5)$$
and
$$\frac{d}{d\beta}\, e_n(\beta) = \beta\,\mathbb{E}\Big[Z_n\,\big\langle L_n(S^1,S^2)\big\rangle^{(n)}_2\Big], \qquad (3.6)$$
where
$$\big\langle L_n(S^1,S^2)\big\rangle^{(n)}_2 = \frac{1}{Z_n^2}\, E\Big[e^{\beta H_n(g,S^1)+\beta H_n(g,S^2)-\beta^2 n}\, L_n(S^1,S^2)\Big].$$
Consequently,
$$e_n(\beta) \ \ge\ \frac{\beta^2}{2}\, E\big(L_n(S^1,S^2)\big). \qquad (3.7)$$

Proof: Take
$$f(z) = \sqrt{E\exp\Big(\beta\sum_{i=1}^n\sum_x z(i,x)\,1_{(S_i=x)} - \frac{\beta^2}{2}n\Big)}, \qquad z=\big(z(i,x),\ 1\le i\le n,\ x\dashv i\big).$$
Note that $Z_n=f^2(g)$ with $g=\big(g(i,x),\ 1\le i\le n,\ x\dashv i\big)$. Applying the log-Sobolev inequality yields the first estimate (3.5).

The other assertion follows from integration by parts: for a standard gaussian variable $g$ and any differentiable function $\psi$ such that both $g\psi(g)$ and $\psi'(g)$ are integrable, we have
$$\mathbb{E}\big(g\,\psi(g)\big) = \mathbb{E}\big(\psi'(g)\big).$$
Elementary computations based on the above formula yield (3.6); the details are omitted. From (3.5) and (3.6), we deduce that the function $\beta\mapsto\frac{e_n(\beta)}{\beta^2}$ is nondecreasing. On the other hand, it is elementary to check that
$$\lim_{\beta\to 0}\frac{e_n(\beta)}{\beta^2} = \frac12\, E\big(L_n(S^1,S^2)\big),$$
which gives (3.7) and completes the proof of the lemma. $\square$
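Inequality (3.7) can also be checked by brute force for very small $n$. The sketch below (ours; the constants $N$ and $\beta$ are arbitrary choices) enumerates the $2^N$ one-dimensional paths to compute $Z_N$ exactly for each sampled environment, estimates $e_N(\beta)$ by averaging $Z_N\log Z_N$ over environments, and compares it with $\frac{\beta^2}{2}E\,L_N(S^1,S^2)$ computed exactly over pairs of paths.

import itertools
import math
import random

N, BETA = 6, 0.7

def all_paths(n):
    """All 2^n nearest-neighbour paths of length n on Z, as tuples (gamma_1, ..., gamma_n)."""
    result = []
    for steps in itertools.product((-1, 1), repeat=n):
        pos, path = 0, []
        for s in steps:
            pos += s
            path.append(pos)
        result.append(tuple(path))
    return result

PATHS = all_paths(N)

def partition_function(g):
    """Exact Z_N for the environment g, averaging over all paths."""
    vals = (math.exp(BETA * sum(g[(i, x)] for i, x in enumerate(p, start=1))
                     - BETA ** 2 * N / 2.0) for p in PATHS)
    return sum(vals) / len(PATHS)

def sample_environment(rng):
    return {(i, x): rng.gauss(0.0, 1.0) for i in range(1, N + 1) for x in range(-i, i + 1)}

rng = random.Random(0)
zs = [partition_function(sample_environment(rng)) for _ in range(5000)]
e_n = sum(z * math.log(z) for z in zs) / len(zs)          # estimate of e_N(beta)
mean_overlap = sum(sum(a == b for a, b in zip(p, q))
                   for p in PATHS for q in PATHS) / float(len(PATHS) ** 2)
print("e_N(beta) estimate:", round(e_n, 4),
      "  lower bound (beta^2/2) E L_N:", round(BETA ** 2 / 2.0 * mean_overlap, 4))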

If $\mathbb{P}$ and $Q$ are two probability measures on $(\Omega,\mathcal F)$, the relative entropy is defined by
$$H(Q|\mathbb{P}) \stackrel{\text{def}}{=} \int\log\frac{dQ}{d\mathbb{P}}\, dQ,$$
where the expression is understood to be infinite if $Q$ is not absolutely continuous with respect to $\mathbb{P}$, or if the logarithm of the derivative is not integrable with respect to $Q$. The following entropy inequality is well known.

Lemma 3.4. For any $A\in\mathcal F$, we have
$$\log\frac{\mathbb{P}(A)}{Q(A)} \ \ge\ -\frac{H(Q|\mathbb{P})+e^{-1}}{Q(A)}.$$
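For completeness, here is the standard justification of Lemma 3.4 (a classical argument, not reproduced from the paper; the convention $0\log 0=0$ is used). By the log-sum inequality applied to the partition $\{A,A^c\}$ and the bound $x\log x\ge -e^{-1}$,
$$H(Q|\mathbb{P}) \ \ge\ Q(A)\log\frac{Q(A)}{\mathbb{P}(A)} + Q(A^c)\log\frac{Q(A^c)}{\mathbb{P}(A^c)} \ \ge\ Q(A)\log\frac{Q(A)}{\mathbb{P}(A)} - e^{-1},$$
because $Q(A^c)\log\frac{Q(A^c)}{\mathbb{P}(A^c)} = \mathbb{P}(A^c)\,\varphi\big(Q(A^c)/\mathbb{P}(A^c)\big)$ with $\varphi(x)=x\log x\ge -e^{-1}$. Dividing by $Q(A)$ and rearranging gives the inequality of Lemma 3.4.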

This inequality is useful only if $Q(A)$ is close to 1. Recall (3.3) for the definition of $Q$. Note that for any $\delta>0$, $Q\big(Z_n\ge\frac{\delta}{1+\delta}\big)\ge\frac{1}{1+\delta}$; it follows that
$$\mathbb{P}\Big(Z_n \ge \frac{\delta}{1+\delta}\Big) \ \ge\ \frac{1}{1+\delta}\, e^{-(1+\delta)e^{-1}}\,\exp\big(-(1+\delta)\, e_n(\beta)\big). \qquad (3.8)$$
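In detail, (3.8) is Lemma 3.4 applied to $A=\{Z_n\ge\delta/(1+\delta)\}$ (a routine verification, spelled out here for convenience):
$$Q\Big(Z_n<\frac{\delta}{1+\delta}\Big) = \mathbb{E}\Big[Z_n\,1_{\{Z_n<\delta/(1+\delta)\}}\Big] \ \le\ \frac{\delta}{1+\delta}, \qquad\text{so}\qquad Q(A)\ge\frac{1}{1+\delta},$$
and, since $H(Q|\mathbb{P})=\mathbb{E}[Z_n\log Z_n]=e_n(\beta)$ and $x\mapsto x\,e^{-c/x}$ is nondecreasing for $c\ge 0$,
$$\mathbb{P}(A) \ \ge\ Q(A)\,\exp\Big(-\frac{H(Q|\mathbb{P})+e^{-1}}{Q(A)}\Big) \ \ge\ \frac{1}{1+\delta}\,\exp\big(-(1+\delta)\big(e_n(\beta)+e^{-1}\big)\big).$$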

We now give the proof of Theorem 1.1.

Proof of Theorem 1.1: We first prove (1.3) by a subadditivity argument. Recalling (3.2) and (2.3), and using the Markov property of $S$, we have, for all $n,m\ge 1$,
$$Z_{n+m} = Z_n\sum_x u_n(x)\, Z_m(x, g\circ\tau_{n,x}),$$
where $u_n(x):=Z_n(x)/Z_n$ and $Z_m(x,g\circ\tau_{n,x})$ denotes the partition function of length $m$ in the shifted environment $g\circ\tau_{n,x}$.

Let $\psi(x)=x\log x$. We have
$$\begin{aligned}
Z_{n+m}\log Z_{n+m} &= \psi(Z_n)\sum_x u_n(x)\, Z_m(x,g\circ\tau_{n,x}) + Z_n\,\psi\Big(\sum_x u_n(x)\, Z_m(x,g\circ\tau_{n,x})\Big) \\
&\le \psi(Z_n)\sum_x u_n(x)\, Z_m(x,g\circ\tau_{n,x}) + Z_n\sum_x u_n(x)\,\psi\big(Z_m(x,g\circ\tau_{n,x})\big),
\end{aligned}$$
by the convexity of $\psi$. Since the $Z_m(x,g\circ\tau_{n,x})$ are independent of $\mathcal F^g_n$, with $\mathbb{E}\, Z_m(x,g\circ\tau_{n,x})=1$ and $\mathbb{E}\,\psi\big(Z_m(x,g\circ\tau_{n,x})\big)=e_m(\beta)$, taking expectations gives $e_{n+m}(\beta)\le e_n(\beta)+e_m(\beta)$, and (1.3) follows by subadditivity.

To show the equivalence between (a), (b) and (c), we remark at first that the implication (b) $\Rightarrow$ (c) follows from Lemma 3.2 and (3.1). It remains to show the implication (c) $\Rightarrow$ (a). Assume that $p(\beta)<0$. Superadditivity says that
$$p(\beta) = \sup_{n\ge 1} p_n(\beta), \qquad\text{with } p_n(\beta):=\frac1n\,\mathbb{E}\big(\log Z_n\big).$$
On the other hand, the concentration of measure (cf. [2]) says that
$$\mathbb{P}\Big(\Big|\frac1n\log Z_n - p_n(\beta)\Big|>u\Big) \ \le\ \exp\Big(-\frac{n u^2}{2\beta^2}\Big), \qquad u>0.$$
It follows that for $\alpha>0$,
$$\mathbb{E}\big[Z_n^{\alpha}\big] = \mathbb{E}\, e^{\alpha\mathbb{E}(\log Z_n)+\alpha(\log Z_n-\mathbb{E}(\log Z_n))} \ \le\ e^{\alpha\mathbb{E}(\log Z_n)}\, e^{\alpha^2\beta^2 n/2} \ \le\ e^{\alpha p(\beta)n + \alpha^2\beta^2 n/2}.$$
By choosing $\alpha=-p(\beta)/\beta^2$ (note that $\alpha>0$), we deduce from Chebychev's inequality that
$$\mathbb{P}\Big(Z_n>\frac12\Big) \ \le\ 2^{\alpha}\,\mathbb{E}\big[Z_n^{\alpha}\big] \ \le\ 2^{\alpha}\, e^{-p^2(\beta)n/(2\beta^2)},$$
which, in view of (3.8) with $\delta=1$, implies that
$$\frac12\, e^{-2e^{-1}}\, e^{-2e_n(\beta)} \ \le\ \mathbb{P}\Big(Z_n\ge\frac12\Big) \ \le\ 2^{\alpha}\, e^{-p^2(\beta)n/(2\beta^2)},$$
and hence $\liminf_{n\to\infty}\frac1n\, e_n(\beta) \ge \frac{p(\beta)^2}{4\beta^2}>0$, so that $\tilde e(\beta)>0$, which is (a). $\square$

Acknowledgements. Cooperation between the authors is partially supported by the joint French-Hong Kong research programme (Procore, No. 14549QH and F-HK22/06T). Shao's research is also partially supported by Hong Kong RGC CERG 602608.

References

[1] Birkner, M.: A condition for weak disorder for directed polymers in random environment. Electron. Comm. Probab. 9 (2004), 22–25. MR2041302

[2] Carmona, Ph. and Hu, Y.: On the partition function of a directed polymer in a Gaussian random environment. Probab. Theory Related Fields 124 (2002), 431–457. MR1939654

[3] Comets, F., Shiga, T. and Yoshida, N.: Probabilistic analysis of directed polymers in a random environment: a review. In: Stochastic Analysis on Large Scale Interacting Systems, Adv. Stud. Pure Math. 39, Math. Soc. Japan, Tokyo, 2004, 115–142.

[4] Comets, F. and Yoshida, N.: Directed polymers in random environment are diffusive at weak disorder. Ann. Probab. 34 (2006), no. 5, 1746–1770. MR2271480

[5] Comets, F. and Vargas, V.: Majorizing multiplicative cascades for directed polymers in random media. ALEA Lat. Am. J. Probab. Math. Stat. 2 (2006), 267–277. MR2249671

[6] Gross, L.: Logarithmic Sobolev inequalities. Amer. J. Math. 97 (1975), no. 4, 1061–1083. MR0420249

[7] Ibragimov, I.A., Sudakov, V. and Tsirelson, B.: Norms of Gaussian sample functions. Proceedings of the Third Japan-USSR Symposium on Probability (Tashkent), Lecture Notes in Mathematics, vol. 550, 1975, pp. 20–41. MR0458556

[8] Imbrie, J.Z. and Spencer, T.: Diffusion of directed polymers in a random environment. Journal of Statistical Physics 52 (1988), 608–626. MR0968950

[9] Johansson, K.: Transversal fluctuations for increasing subsequences on the plane. Probab. Theory Related Fields 116 (2000), no. 4, 445–456. MR1757595

[10] Lacoin, H.: New bounds for the free energy of directed polymers in dimension 1+1 and 1+2. arXiv:0901.0699

[11] Ledoux, M.: Concentration of measure and logarithmic Sobolev inequalities. Sém. Probab. XXXIII, 120–216, Lecture Notes in Math. 1709, Springer, Berlin, 1999. MR1767995
