Electronic Journal of Probability

Vol. 10 (2005), Paper no. 15, pages 525-576.
Journal URL: http://www.math.washington.edu/ejpecp/

Logarithmic Sobolev Inequality for Zero–Range Dynamics: Independence of the Number of Particles

Paolo Dai Pra

Dipartimento di Matematica Pura e Applicata, Università di Padova, Via Belzoni 7, 35131 Padova, Italy

e-mail: daipra@math.unipd.it

and

Gustavo Posta

Dipartimento di Matematica, Politecnico di Milano Piazza L. Da Vinci 32, 20133 Milano, Italy

e-mail: guspos@mate.polimi.it

Abstract

We prove that the logarithmic-Sobolev constant for Zero-Range Processes in a box of diameter $L$ may depend on $L$ but not on the number of particles. This is a first, but relevant and quite technical, step in the proof that this logarithmic-Sobolev constant grows as $L^2$, which will be presented in a forthcoming paper ([3]).

Keywords and phrases: Logarithmic Sobolev Inequality, Zero-Range Processes.
AMS 2000 subject classification numbers: 60K35, 82C22.

Abbreviated title: log-Sobolev inequality for Zero-Range.

1 Introduction

The zero-range process is a system of interacting particles moving in a discrete lattice $\Lambda$, which here we assume to be a subset of $\mathbb{Z}^d$. The interaction is "zero range", i.e. the motion of a particle may only be affected by particles located at the same lattice site. Let $c : \mathbb{N} \to [0,+\infty)$ be a function such that $c(0) = 0$ and $c(n) > 0$ for every $n > 0$. In the zero-range process associated to $c(\cdot)$, particles evolve according to the following rule: for each site $x \in \Lambda$ containing $\eta_x$ particles, with probability rate $c(\eta_x)$ one particle jumps from $x$ to one of its nearest neighbors at random. Waiting jump times of different sites are independent. If $c(n) = \lambda n$ then particles are independent, and evolve as simple random walks; nonlinearity of $c(\cdot)$ is responsible for the interaction. Note that particles are neither created nor destroyed. When $\Lambda$ is a finite lattice, for each $N \ge 1$, the zero-range process in $\Lambda$ restricted to configurations with $N$ particles is a finite irreducible Markov chain, whose unique invariant measure $\nu^N_\Lambda$ is proportional to

$$
\prod_{x\in\Lambda} \frac{1}{c(\eta_x)!}, \tag{1.1}
$$

where

$$
c(n)! = \begin{cases} 1 & \text{for } n = 0 \\ c(n)\,c(n-1)\cdots c(1) & \text{otherwise.} \end{cases}
$$

Moreover the process is reversible with respect to $\nu^N_\Lambda$.
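The jump rule described above lends itself to a direct continuous-time simulation. The following sketch is an illustration of ours, not part of the paper; the function name and the choice of reflecting boundaries on a one-dimensional segment are assumptions made for the example.

```python
import random

def simulate_zero_range(c, eta, T, rng=random.Random(0)):
    """Simulate the 1-d zero-range process on sites 0..L-1 until time T.

    c   -- jump-rate function with c(0) = 0, c(n) > 0 for n > 0
    eta -- initial configuration, a list of occupation numbers
    """
    L, t = len(eta), 0.0
    while True:
        rates = [c(n) for n in eta]        # site x rings at rate c(eta_x)
        total = sum(rates)
        if total == 0:
            return eta                     # no particle can move
        t += rng.expovariate(total)        # time of the next jump
        if t > T:
            return eta
        # pick the site whose clock rang, proportionally to its rate
        x = rng.choices(range(L), weights=rates)[0]
        # one particle jumps to a nearest neighbor chosen at random
        nbrs = [y for y in (x - 1, x + 1) if 0 <= y < L]
        y = rng.choice(nbrs)
        eta[x] -= 1
        eta[y] += 1
```

Note that, as in the text, the total particle number is conserved: a site with $\eta_x = 0$ has rate $c(0) = 0$ and is never selected.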

If the function $n \mapsto c(n)$ does not grow too fast in $n$, then the zero-range process can be defined on the whole lattice $\mathbb{Z}^d$. In this case the extremal invariant measures form a one-parameter family of uniform product measures, with marginals

$$
\mu_\rho[\eta_x = k] = \frac{1}{Z(\alpha(\rho))}\,\frac{\alpha(\rho)^k}{c(k)!}, \tag{1.2}
$$

where $\rho \ge 0$, $Z(\alpha(\rho))$ is the normalization, and $\alpha(\rho)$ is uniquely determined by the condition $\mu_\rho[\eta_x] = \rho$ (we use here the notation $\mu[f]$ for $\int f\,d\mu$).


• due to unboundedness of the density of particles, various arguments used for exclusion processes fail; in principle the rate of relaxation to equilibrium may depend on the number of particles, as it actually does in some cases;

• there is no small parameter in the model; one would not like to restrict to "small" perturbations of a system of independent particles.

In [5] a first result for zero-range processes has been obtained. Let $\mathcal{E}_{\nu^N_\Lambda}$ be the Dirichlet form associated to the zero-range process in $\Lambda = [0,L]^d \cap \mathbb{Z}^d$ with $N$ particles. Then, under suitable growth conditions on $c(\cdot)$ (see Section 2), the following Poincaré inequality holds:

$$
\nu^N_\Lambda[f,f] \le C L^2\, \mathcal{E}_{\nu^N_\Lambda}(f,f), \tag{1.3}
$$

where $C$ may depend on the dimension $d$ but not on $N$ or $L$. Moreover, by suitable test functions, the $L$-dependence in (1.3) cannot be improved, i.e. one can find a positive constant $c > 0$ and functions $f = f_{N,L}$ so that $\nu^N_\Lambda[f,f] \ge c L^2 \mathcal{E}_{\nu^N_\Lambda}(f,f)$ for all $L, N$. In other terms, (1.3) says that the spectral gap of $\mathcal{E}_{\nu^N_\Lambda}$ shrinks proportionally to $\frac{1}{L^2}$, independently of the number of particles $N$. It is well known that the Poincaré inequality controls convergence to equilibrium in the $L^2(\nu^N_\Lambda)$-sense: if $(S^\Lambda_t)_{t\ge0}$ is the Markov semigroup corresponding to the process, then for every function $f$

$$
\nu^N_\Lambda\left[\left(S^\Lambda_t f - \nu^N_\Lambda[f]\right)^2\right] \le \exp\left(-\frac{2t}{CL^2}\right) \nu^N_\Lambda\left[\left(f-\nu^N_\Lambda[f]\right)^2\right]. \tag{1.4}
$$

The Poincaré inequality is however not sufficient to control convergence in stronger norms, e.g. in total variation, which would follow from the logarithmic-Sobolev inequality

$$
\operatorname{Ent}_{\nu^N_\Lambda}(f) \le s(L,N)\, \mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right), \tag{1.5}
$$

where $\operatorname{Ent}_{\nu^N_\Lambda}(f) = \nu^N_\Lambda[f\log f] - \nu^N_\Lambda[f]\log\nu^N_\Lambda[f]$. The constant $s(L,N)$ in (1.5) is intended to be the smallest possible and, in principle, may depend on both $L$ and $N$.

Our main aim is to prove that $s(L,N) \le CL^2$ for some $C > 0$, i.e. the logarithmic-Sobolev constant scales as the inverse of the spectral gap. This is the first conservative system with unbounded particle density for which this scaling is established (see comments on page 423 of [4]). It turns out that the proof of this result is very long and technical, and it roughly consists of two parts. In the first part one needs to show that

$$
s(L) := \sup_{N\ge1} s(L,N) < +\infty, \tag{1.6}
$$

i.e. that $s(L,N)$ has an upper bound independent of $N$, while in the second part a sharp induction in $L$ is set up to prove the actual $L^2$ dependence. For this induction to work one has to choose $L$ sufficiently large as a starting point, and for this $L$ an upper bound for $s(L,N)$ independent of $N$ has to be known in advance. Note that for models with bounded particle density inequality (1.6) is trivial.

This paper is devoted to the proof of (1.6), while the induction leading to the $L^2$ growth is included in [3]. The proof of (1.6) is indeed very long, and relies on quite sharp estimates on the measure $\nu^N_\Lambda$.

2 Notations and Main result

Throughout this paper, for a given probability space $(\Omega,\mathcal{F},\mu)$ and $f : \Omega \to \mathbb{R}$ measurable, we use the following notations for mean value and covariance:

$$
\mu[f] := \int f\,d\mu, \qquad \mu[f,g] := \mu[(f-\mu[f])(g-\mu[g])]
$$

and, for $f \ge 0$,

$$
\operatorname{Ent}_\mu(f) := \mu[f\log f] - \mu[f]\log\mu[f],
$$

where, by convention, $0\log0 = 0$. Similarly, for $\mathcal{G}$ a sub-$\sigma$-field of $\mathcal{F}$, we let $\mu[f|\mathcal{G}]$ denote the conditional expectation,

$$
\mu[f,g|\mathcal{G}] := \mu[(f-\mu[f|\mathcal{G}])(g-\mu[g|\mathcal{G}])|\mathcal{G}]
$$

the conditional covariance, and

$$
\operatorname{Ent}_\mu(f|\mathcal{G}) := \mu[f\log f|\mathcal{G}] - \mu[f|\mathcal{G}]\log\mu[f|\mathcal{G}]
$$

the conditional entropy.

If $A \subset \Omega$, we denote by $\mathbf{1}(x \in A)$ the indicator function of $A$. If $B \subset A$ is finite we will write $B \subset\subset A$. For any $x \in \mathbb{R}$ we will write $\lfloor x\rfloor := \sup\{n \in \mathbb{Z} : n \le x\}$ and $\lceil x\rceil := \inf\{n \in \mathbb{Z} : n \ge x\}$.

Let $\Lambda$ be a possibly infinite subset of $\mathbb{Z}^d$, and $\Omega_\Lambda = \mathbb{N}^\Lambda$ be the corresponding configuration space, where $\mathbb{N} = \{0,1,2,\dots\}$ is the set of natural numbers. Given a configuration $\eta \in \Omega_\Lambda$ and $x \in \Lambda$, the natural number $\eta_x$ will be referred to as the number of particles at $x$. Moreover, if $\Lambda' \subset \Lambda$, $\eta_{\Lambda'}$ will denote the restriction of $\eta$ to $\Lambda'$. For two elements $\sigma,\xi \in \Omega_\Lambda$, the operations $\sigma\pm\xi$ are defined componentwise (for the difference, whenever it returns an element of $\Omega_\Lambda$). In what follows, given $x \in \Lambda$, we make use of the special configuration $\delta_x$, having one particle at $x$ and no other particle. For $f : \Omega_\Lambda \to \mathbb{R}$ and $x,y \in \Lambda$, we let

$$
\partial_{xy} f(\eta) := f(\eta-\delta_x+\delta_y) - f(\eta).
$$

Consider, at a formal level, the operator

$$
\mathcal{L}_\Lambda f(\eta) := \sum_{x\in\Lambda} \sum_{y\sim x} c(\eta_x)\,\partial_{xy} f(\eta), \tag{2.1}
$$

where $y \sim x$ means $|x-y| = 1$, and $c : \mathbb{N} \to \mathbb{R}_+$ is a function such that $c(0) = 0$ and $\inf\{c(n) : n > 0\} > 0$. In the case of $\Lambda$ finite, for each $N \in \mathbb{N}\setminus\{0\}$, $\mathcal{L}_\Lambda$ is the infinitesimal generator of an irreducible Markov chain on the finite state space $\{\eta \in \Omega_\Lambda : \eta_\Lambda = N\}$, where

$$
\eta_\Lambda := \sum_{x\in\Lambda} \eta_x
$$

is the total number of particles in $\Lambda$. The unique stationary measure for this Markov chain is denoted by $\nu^N_\Lambda$ and is given by

$$
\nu^N_\Lambda[\{\eta\}] := \frac{1}{Z^N_\Lambda} \prod_{x\in\Lambda} \frac{1}{c(\eta_x)!}, \tag{2.2}
$$

where $c(0)! := 1$, $c(k)! := c(1)\cdots c(k)$ for $k > 0$, and $Z^N_\Lambda$ is the normalization factor. The measure $\nu^N_\Lambda$ will be referred to as the canonical measure. Note that the system is reversible for $\nu^N_\Lambda$, i.e. $\mathcal{L}_\Lambda$ is self-adjoint in $L^2(\nu^N_\Lambda)$ or, equivalently, the detailed balance condition

$$
c(\eta_x)\,\nu^N_\Lambda[\{\eta\}] = c(\eta_y+1)\,\nu^N_\Lambda[\{\eta-\delta_x+\delta_y\}] \tag{2.3}
$$

holds for every $x, y \in \Lambda$ and $\eta \in \Omega_\Lambda$ such that $\eta_x > 0$.
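The detailed balance relation (2.3) can be verified directly on a small system. The following sketch (an illustration of ours; the rate function is just one arbitrary example with $c(0)=0$ and $c$ increasing) enumerates $\nu^N_\Lambda$ for three sites and four particles:

```python
from math import prod

def c_fact(c, n):
    """c(n)! = c(1) * ... * c(n), with c(0)! = 1."""
    return prod(c(k) for k in range(1, n + 1))

def canonical(c, L, N):
    """Canonical measure nu^N_Lambda: weights prod_x 1/c(eta_x)!, normalized."""
    def configs(L, N):
        if L == 1:
            yield (N,)
            return
        for n in range(N + 1):
            for rest in configs(L - 1, N - n):
                yield (n,) + rest
    w = {eta: 1.0 / prod(c_fact(c, n) for n in eta) for eta in configs(L, N)}
    Z = sum(w.values())
    return {eta: v / Z for eta, v in w.items()}

c = lambda n: 0.0 if n == 0 else 1.0 + 0.5 * n   # example rate, c(0) = 0
nu = canonical(c, 3, 4)
# detailed balance (2.3): c(eta_x) nu[{eta}] == c(eta_y + 1) nu[{eta - d_x + d_y}]
```

A loop over all configurations and all ordered pairs of sites confirms (2.3) exactly, which is equivalent to the self-adjointness of $\mathcal{L}_\Lambda$ in $L^2(\nu^N_\Lambda)$.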

Our main result, which is stated next, will be proved under the following conditions.

Condition 2.1 (LG)

$$
\sup_{k\in\mathbb{N}} |c(k+1)-c(k)| =: a_1 < +\infty.
$$

As remarked in [5] for the spectral gap, $N$-independence of the logarithmic-Sobolev constant requires extra conditions; in particular, our main result would not hold true in the case $c(k) = c\,\mathbf{1}(k \in \mathbb{N}\setminus\{0\})$. The following condition, which is the same assumed in [5], is a monotonicity requirement on $c(\cdot)$ that rules out the case above.

Condition 2.2 (M) There exist $k_0 > 0$ and $a_2 > 0$ such that $c(k)-c(j) \ge a_2$ for any $j \in \mathbb{N}$ and $k \ge j+k_0$.

A simple but key consequence of the conditions above is that there exists $A_0 > 0$ such that

$$
A_0^{-1} k \le c(k) \le A_0 k \quad \text{for any } k \in \mathbb{N}. \tag{2.4}
$$

In what follows, we choose $\Lambda = [0,L]^d\cap\mathbb{Z}^d$. In order to state our main result, we define the Dirichlet form corresponding to $\mathcal{L}_\Lambda$ and $\nu^N_\Lambda$:

$$
\mathcal{E}_{\nu^N_\Lambda}(f,g) = -\nu^N_\Lambda[f \mathcal{L}_\Lambda g] = \frac{1}{2} \sum_{x\in\Lambda} \sum_{y\sim x} \nu^N_\Lambda\left[c(\eta_x)\,\partial_{xy}f(\eta)\,\partial_{xy}g(\eta)\right]. \tag{2.5}
$$

Theorem 2.1 Assume that conditions (LG) and (M) hold. Then there exists a constant $C(L) > 0$, which may only depend on $a_1$, $a_2$, the dimension $d$ and $L$, such that for every choice of $N \ge 1$, $L \ge 2$ and $f : \Omega_\Lambda \to \mathbb{R}$, $f > 0$, we have

$$
\operatorname{Ent}_{\nu^N_\Lambda}(f) \le C(L)\, \mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right). \tag{2.6}
$$

3 Outline of the proof

3.1 Step 1: duplication

The idea is to prove Theorem 2.1 by induction on $|\Lambda|$. Suppose $|\Lambda| = 2L$, so that $\Lambda = \Lambda_1\cup\Lambda_2$, $|\Lambda_1| = |\Lambda_2| = L$, where $\Lambda_1,\Lambda_2$ are two disjoint adjacent segments in $\mathbb{Z}$. By a basic identity on the entropy, we have

$$
\operatorname{Ent}_{\nu^N_\Lambda}(f) = \nu^N_\Lambda\left[\operatorname{Ent}_{\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}]}(f)\right] + \operatorname{Ent}_{\nu^N_\Lambda}\left(\nu^N_\Lambda[f|\eta_{\Lambda_1}]\right). \tag{3.1}
$$

Note that $\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}] = \nu^{\eta_{\Lambda_1}}_{\Lambda_1}\otimes\nu^{N-\eta_{\Lambda_1}}_{\Lambda_2}$. Thus, by the tensor property of the entropy (see [1], Th. 3.2.2):

$$
\nu^N_\Lambda\left[\operatorname{Ent}_{\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}]}(f)\right] \le \nu^N_\Lambda\left[\operatorname{Ent}_{\nu^{\eta_{\Lambda_1}}_{\Lambda_1}}(f) + \operatorname{Ent}_{\nu^{N-\eta_{\Lambda_1}}_{\Lambda_2}}(f)\right]. \tag{3.2}
$$
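Identity (3.1) is the standard decomposition of entropy over a conditioning, and it can be checked numerically on a toy space (a two-coordinate example of ours, with $\mathcal{G}$ the $\sigma$-field generated by the first coordinate):

```python
import math

def ent(mu, f):
    """Ent_mu(f) = mu[f log f] - mu[f] log mu[f], for f > 0 on a finite space."""
    mf = sum(m * fv for m, fv in zip(mu, f))
    return sum(m * fv * math.log(fv) for m, fv in zip(mu, f)) - mf * math.log(mf)

# outcomes are pairs (i, j); we condition on the first coordinate i
mu = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.2}
f = {(0, 0): 1.0, (0, 1): 2.5, (1, 0): 0.7, (1, 1): 3.0}

total = {i: sum(p for (a, _), p in mu.items() if a == i) for i in (0, 1)}
inner = 0.0    # mu[ Ent_{mu[.|i]}(f) ]
cond_exp = {}  # mu[f | i]
for i in (0, 1):
    keys = [k for k in mu if k[0] == i]
    cm = [mu[k] / total[i] for k in keys]
    cf = [f[k] for k in keys]
    inner += total[i] * ent(cm, cf)
    cond_exp[i] = sum(m * fv for m, fv in zip(cm, cf))

outer = ent([total[0], total[1]], [cond_exp[0], cond_exp[1]])
lhs = ent(list(mu.values()), [f[k] for k in mu])
```

The check `lhs == inner + outer` (up to rounding) is exactly (3.1) on this toy space.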

Now, let $s(L,N)$ be the maximum of the logarithmic-Sobolev constants for the zero-range process in volumes $\Lambda$ with $|\Lambda| \le L$ and at most $N$ particles, i.e. $s(L,N)$ is the smallest constant such that

$$
\operatorname{Ent}_{\nu^n_\Lambda}(f) \le s(L,N)\,\mathcal{E}_{\nu^n_\Lambda}\left(\sqrt{f},\sqrt{f}\right)
$$

for all $f > 0$, $|\Lambda| \le L$ and $n \le N$. Then, by (3.2),

$$
\nu^N_\Lambda\left[\operatorname{Ent}_{\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}]}(f)\right] \le s(L,N)\,\nu^N_\Lambda\left[\mathcal{E}_{\nu^{\eta_{\Lambda_1}}_{\Lambda_1}}\left(\sqrt{f},\sqrt{f}\right) + \mathcal{E}_{\nu^{N-\eta_{\Lambda_1}}_{\Lambda_2}}\left(\sqrt{f},\sqrt{f}\right)\right] = s(L,N)\,\mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right). \tag{3.3}
$$

Identity (3.1) and inequality (3.3) suggest estimating $s(L,N)$ by induction on $L$. The hardest part is to make appropriate estimates on the term $\operatorname{Ent}_{\nu^N_\Lambda}(\nu^N_\Lambda[f|\eta_{\Lambda_1}])$. Note that this term is the entropy of a function depending only on the number of particles in $\Lambda_1$.

3.2 Step 2: logarithmic Sobolev inequality for the distribution of the number of particles in $\Lambda_1$

Let

$$
\gamma^N_\Lambda(n) = \gamma(n) := \nu^N_\Lambda[\eta_{\Lambda_1} = n].
$$

$\gamma(\cdot)$ is a probability measure on $\{0,1,\dots,N\}$ that is reversible for the birth and death process with generator

$$
\mathcal{A}\varphi(n) = \left[\frac{\gamma(n+1)}{\gamma(n)}\wedge 1\right](\varphi(n+1)-\varphi(n)) + \left[\frac{\gamma(n-1)}{\gamma(n)}\wedge 1\right](\varphi(n-1)-\varphi(n)) \tag{3.4}
$$

and Dirichlet form

$$
\mathcal{D}(\varphi,\varphi) = -\langle\varphi,\mathcal{A}\varphi\rangle_{L^2(\gamma)} = \sum_{n=1}^N [\gamma(n)\wedge\gamma(n-1)]\,(\varphi(n)-\varphi(n-1))^2.
$$
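The identity between the generator (3.4) and the stated Dirichlet form can be sanity-checked numerically. The sketch below (an illustration of ours, assuming the Metropolis-type rates displayed in (3.4), with absorbing behavior suppressed at the boundary) compares $-\langle\varphi,\mathcal{A}\varphi\rangle_{L^2(\gamma)}$ with the edge sum:

```python
def dirichlet_check(gamma, phi):
    """Compare -<phi, A phi>_{L2(gamma)} with
    sum_n [gamma(n) ^ gamma(n-1)] (phi(n) - phi(n-1))^2
    for the birth-and-death generator with the rates of (3.4)."""
    N = len(gamma) - 1
    up = [min(gamma[n + 1] / gamma[n], 1.0) if n < N else 0.0 for n in range(N + 1)]
    down = [min(gamma[n - 1] / gamma[n], 1.0) if n > 0 else 0.0 for n in range(N + 1)]
    Aphi = [up[n] * (phi[n + 1] - phi[n]) if n < N else 0.0 for n in range(N + 1)]
    for n in range(1, N + 1):
        Aphi[n] += down[n] * (phi[n - 1] - phi[n])
    lhs = -sum(gamma[n] * phi[n] * Aphi[n] for n in range(N + 1))
    rhs = sum(min(gamma[n], gamma[n - 1]) * (phi[n] - phi[n - 1]) ** 2
              for n in range(1, N + 1))
    return lhs, rhs
```

The equality holds because $\gamma(n)\left[\frac{\gamma(n+1)}{\gamma(n)}\wedge1\right] = \gamma(n)\wedge\gamma(n+1)$, so each edge contributes symmetrically from both endpoints.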

Proposition 3.1 The Markov chain with generator (3.4) has a logarithmic Sobolev constant proportional to $N$, i.e. there exists a constant $C > 0$ such that for all $\varphi \ge 0$

$$
\operatorname{Ent}_\gamma(\varphi) \le CN\,\mathcal{D}\left(\sqrt{\varphi},\sqrt{\varphi}\right).
$$

We now apply Proposition 3.1 to the second summand in the r.h.s. of (3.1), and we obtain

$$
\operatorname{Ent}_{\nu^N_\Lambda}\left(\nu^N_\Lambda[f|\eta_{\Lambda_1}]\right) \le CN \sum_{n=1}^N [\gamma(n)\wedge\gamma(n-1)] \left[\sqrt{\nu^N_\Lambda[f|\eta_{\Lambda_1}=n]} - \sqrt{\nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]}\right]^2
$$
$$
\le CN \sum_{n=1}^N \frac{\gamma(n)\wedge\gamma(n-1)}{\nu^N_\Lambda[f|\eta_{\Lambda_1}=n]\vee\nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]} \left(\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] - \nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]\right)^2, \tag{3.5}
$$

where we have used the inequality $(\sqrt{x}-\sqrt{y})^2 \le \frac{(x-y)^2}{x\vee y}$, $x,y > 0$.

3.3 Step 3: study of the term $\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] - \nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]$

One of the key points in the proof of Theorem 2.1 consists in finding the "right" representation for the discrete gradient $\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] - \nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]$ that appears in the r.h.s. of (3.5).

Proposition 3.2 For every $f$ and every $n = 1,2,\dots,N$ we have

$$
\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] - \nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1] = \frac{\gamma(n-1)}{\gamma(n)}\,\frac{1}{nL} \left(\nu^N_\Lambda\Biggl[\sum_{\substack{x\in\Lambda_1\\ y\in\Lambda_2}} h(\eta_x)c(\eta_y)\,\partial_{yx}f \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] + \nu^N_\Lambda\Biggl[f, \sum_{\substack{x\in\Lambda_1\\ y\in\Lambda_2}} h(\eta_x)c(\eta_y) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]\right), \tag{3.6}
$$

where

$$
h(n) := \frac{n+1}{c(n+1)}.
$$

Moreover, by exchanging the roles of $\Lambda_1$ and $\Lambda_2$, the r.h.s. of (3.6) can be equivalently written, for every $n = 1,\dots,N$, as

$$
-\frac{\gamma(N-n)}{\gamma(N-n+1)}\,\frac{1}{(N-n+1)L} \left(\nu^N_\Lambda\Biggl[\sum_{\substack{x\in\Lambda_1\\ y\in\Lambda_2}} h(\eta_y)c(\eta_x)\,\partial_{xy}f \,\Bigg|\, \eta_{\Lambda_1}=n\Biggr] + \nu^N_\Lambda\Biggl[f, \sum_{\substack{x\in\Lambda_1\\ y\in\Lambda_2}} h(\eta_y)c(\eta_x) \,\Bigg|\, \eta_{\Lambda_1}=n\Biggr]\right). \tag{3.7}
$$

The representations (3.6) and (3.7) will be used for $n \ge \frac{N}{2}$ and $n < \frac{N}{2}$ respectively. For convenience, we rewrite (3.6) and (3.7) as

$$
\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] - \nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1] =: A(n) + B(n), \tag{3.8}
$$

where

$$
A(n) := \begin{cases} \dfrac{\gamma(n-1)}{\gamma(n)}\,\dfrac{1}{nL}\, \nu^N_\Lambda\Biggl[\displaystyle\sum_{\substack{x\in\Lambda_1\\ y\in\Lambda_2}} h(\eta_x)c(\eta_y)\,\partial_{yx}f \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] & \text{for } n \ge \frac{N}{2} \\[3ex] -\dfrac{\gamma(N-n)}{\gamma(N-n+1)}\,\dfrac{1}{(N-n+1)L}\, \nu^N_\Lambda\Biggl[\displaystyle\sum_{\substack{x\in\Lambda_1\\ y\in\Lambda_2}} h(\eta_y)c(\eta_x)\,\partial_{xy}f \,\Bigg|\, \eta_{\Lambda_1}=n\Biggr] & \text{for } n < \frac{N}{2}, \end{cases} \tag{3.9}
$$

and

$$
B(n) := \begin{cases} \dfrac{\gamma(n-1)}{\gamma(n)}\,\dfrac{1}{nL}\, \nu^N_\Lambda\Biggl[f, \displaystyle\sum_{\substack{x\in\Lambda_1\\ y\in\Lambda_2}} h(\eta_x)c(\eta_y) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] & \text{for } n \ge \frac{N}{2} \\[3ex] -\dfrac{\gamma(N-n)}{\gamma(N-n+1)}\,\dfrac{1}{(N-n+1)L}\, \nu^N_\Lambda\Biggl[f, \displaystyle\sum_{\substack{x\in\Lambda_1\\ y\in\Lambda_2}} h(\eta_y)c(\eta_x) \,\Bigg|\, \eta_{\Lambda_1}=n\Biggr] & \text{for } n < \frac{N}{2}. \end{cases} \tag{3.10}
$$

Thus, our next aim is to get estimates on the two terms in the r.h.s. of (3.8). It is useful to stress that the two terms are qualitatively different. Estimates on $A(n)$ are essentially insensitive to the precise form of $c(\cdot)$. Indeed, the dependence of $A(n)$ on $L$ and $N$ is of the same order as in the case $c(n) = \lambda n$, i.e. the case of independent particles. Quite differently, the term $B(n)$ vanishes in the case of independent particles, since, in that case, the term $\sum_{x\in\Lambda_1,\,y\in\Lambda_2} h(\eta_x)c(\eta_y)$ is a.s. constant with respect to $\nu^N_\Lambda[\,\cdot\,|\eta_{\Lambda_1}=n-1]$. Thus, $B(n)$ somewhat depends on the interaction between particles. Note that our model is not necessarily a "small perturbation" of a system of independent particles; there is no small parameter in the model that guarantees that $B(n)$ is small enough. Essentially all technical results of this paper are concerned with estimating $B(n)$.

3.4 Step 4: estimates on $A(n)$

The following proposition gives the key estimate on $A(n)$.

Proposition 3.3 There is a constant $C > 0$ such that

$$
A^2(n) \le \frac{CL^2}{N} \left(\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] \vee \nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]\right) \times \left[\frac{\gamma(n-1)}{\gamma(n)}\,\mathcal{E}_{\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}=n-1]}\left(\sqrt{f},\sqrt{f}\right) + \mathcal{E}_{\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}=n]}\left(\sqrt{f},\sqrt{f}\right)\right].
$$

Remark 3.4 Let us try to see where we are now. Let us ignore, for the moment, the term $B(n)$, so that, by (3.8) and Proposition 3.3,

$$
\left(\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] - \nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]\right)^2 \le \frac{CL^2}{N} \left(\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] \vee \nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]\right) \times \left[\frac{\gamma(n-1)}{\gamma(n)}\,\mathcal{E}_{\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}=n-1]}\left(\sqrt{f},\sqrt{f}\right) + \mathcal{E}_{\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}=n]}\left(\sqrt{f},\sqrt{f}\right)\right]. \tag{3.11}
$$

Inserting (3.11) into (3.5) we get, for some possibly different constant $C$,

$$
\operatorname{Ent}_{\nu^N_\Lambda}\left(\nu^N_\Lambda[f|\eta_{\Lambda_1}]\right) \le CL^2\,\mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right), \tag{3.12}
$$

where we have used the obvious identity

$$
\nu^N_\Lambda\left[\mathcal{E}_{\nu^N_\Lambda[\cdot|\mathcal{G}]}\left(\sqrt{f},\sqrt{f}\right)\right] = \mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right), \tag{3.13}
$$

which holds for any $\sigma$-field $\mathcal{G}$. Inequality (3.12), together with (3.1) and (3.3), yields

$$
s(2L,N) \le s(L,N) + CL^2. \tag{3.14}
$$

Thus, if we can show that

$$
\sup_N s(2,N) < +\infty \tag{3.15}
$$

(see Proposition 3.7 below), then we would have

$$
\sup_N s(L,N) \le CL^2
$$

for some $C > 0$, i.e. we get the exact order of growth of $s(L,N)$. In all this, however, we have totally ignored the contribution of $B(n)$.

3.5 Step 5: preliminary analysis of $B(n)$

We confine ourselves to the analysis of $B(n)$ for $n \ge \frac{N}{2}$, since the case $n < \frac{N}{2}$ is identical. Consider the covariance that appears in the r.h.s. of the first formula of (3.10). By elementary properties of the covariance and the fact that $\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}] = \nu^{\eta_{\Lambda_1}}_{\Lambda_1}\otimes\nu^{N-\eta_{\Lambda_1}}_{\Lambda_2}$, we get

$$
\nu^N_\Lambda\Biggl[f, \sum_{\substack{x\in\Lambda_1\\ y\in\Lambda_2}} h(\eta_x)c(\eta_y) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] = \nu^{n-1}_{\Lambda_1}\Biggl[\sum_{x\in\Lambda_1} h(\eta_x)\,\nu^{N-n+1}_{\Lambda_2}\Biggl[f, \sum_{y\in\Lambda_2} c(\eta_y)\Biggr]\Biggr] + \nu^{n-1}_{\Lambda_1}\Biggl[\nu^{N-n+1}_{\Lambda_2}[f], \sum_{x\in\Lambda_1} h(\eta_x)\Biggr]\,\nu^{N-n+1}_{\Lambda_2}\Biggl[\sum_{y\in\Lambda_2} c(\eta_y)\Biggr]. \tag{3.16}
$$

It follows from Conditions (LG) and (M) (see (2.4)) that, for some constant $C > 0$, $h(\eta_x) \le C$, and hence

$$
B^2(n) \le \frac{\gamma^2(n-1)}{\gamma^2(n)} \left(\frac{C}{n^2}\, \nu^{N-n+1}_{\Lambda_2}\Biggl[\nu^{n-1}_{\Lambda_1}[f], \sum_{y\in\Lambda_2} c(\eta_y)\Biggr]^2 + \frac{C(N-n+1)^2}{n^2L^2}\, \nu^{n-1}_{\Lambda_1}\Biggl[\nu^{N-n+1}_{\Lambda_2}[f], \sum_{x\in\Lambda_1} h(\eta_x)\Biggr]^2\right). \tag{3.17}
$$

Thus, our next aim is to estimate the two covariances in (3.17).

Thus, our next aim is to estimate the two covariances in (3.17).

3.6

Estimates on

B

(

n

): entropy inequality and estimates on

mo-ment generating functions

By (3.17), estimating B2(n) consists in estimating two covariances. In general, covariances can be estimated by the followingentropy inequality, that holds for every probability measure

ν (see [1], Section 1.2.2):

ν[f, g] =ν[f(g−ν[g])]≤ ν[f]

t logν

£

et(g−ν[g])¤+ 1

t Entν(f), (3.18)

where f ≥ 0 and t > 0 is arbitrary. Since in (3.17) we need to estimate the square of a covariance, we write (3.18) with g in place of g, and obtain

|ν[f, g]| ≤ ν[f]

t log

¡

ν£et(g−ν[g])¤∨ν£e−t(g−ν[g])¤¢+1

t Entν(f). (3.19)

Therefore, we first get estimates on the moment generating functionsν£e±t(g−ν[g])¤, and then optimize (3.19) overt >0.

Note that the covariances in (3.17) involve functions of $\eta_{\Lambda_1}$ or $\eta_{\Lambda_2}$. In the next two propositions we write $\Lambda$ for $\Lambda_1$ and $\Lambda_2$, and denote by $N$ the number of particles in $\Lambda$. Their proofs can be found in Section 6.

Proposition 3.5 Let $x \in \Lambda$. Then there is a constant $A_L$ depending on $L = |\Lambda|$ such that for every $N > 0$ and $t \in [-1,1]$

$$
\nu^N_\Lambda\left[e^{t(c(\eta_x)-\nu^N_\Lambda[c(\eta_x)])}\right] \le e^{A_L N t^2}, \tag{3.20}
$$

and

$$
\nu^N_\Lambda\left[e^{tN(h(\eta_x)-\nu^N_\Lambda[h(\eta_x)])}\right] \le A_L\, e^{A_L(N t^2+\sqrt{N}|t|)}. \tag{3.21}
$$

Using (3.18), (3.19) and (3.21), and optimizing over $t > 0$, we get estimates on the covariances appearing in (3.17).

Proposition 3.6 There exists a constant $C_L$ depending on $L = |\Lambda|$ such that the following inequalities hold:

$$
\nu^N_\Lambda\Biggl[f, \sum_{x\in\Lambda} c(\eta_x)\Biggr]^2 \le C_L N\, \nu^N_\Lambda[f]\, \operatorname{Ent}_{\nu^N_\Lambda}(f), \tag{3.22}
$$

$$
\nu^N_\Lambda\Biggl[f, \sum_{x\in\Lambda} h(\eta_x)\Biggr]^2 \le C_L N^{-1}\, \nu^N_\Lambda[f] \left[\nu^N_\Lambda[f] + \operatorname{Ent}_{\nu^N_\Lambda}(f)\right]. \tag{3.23}
$$

Inserting these new estimates in (3.17) we obtain, for some possibly different $C_L$,

$$
B^2(n) \le \frac{C_L\,\gamma^2(n-1)}{\gamma^2(n)}\, \nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1] \left(\frac{N-n+1}{n^2}\, \nu^{n-1}_{\Lambda_1}\left[\operatorname{Ent}_{\nu^{N-n+1}_{\Lambda_2}}(f)\right] + \frac{(N-n+1)^2}{n^3} \left(\nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1] + \nu^{N-n+1}_{\Lambda_2}\left[\operatorname{Ent}_{\nu^{n-1}_{\Lambda_1}}(f)\right]\right)\right). \tag{3.24}
$$

In order to simplify (3.24), we use Proposition 7.5, which gives

$$
\frac{n}{C(N-n+1)} \le \frac{\gamma(n-1)}{\gamma(n)} \le \frac{Cn}{N-n+1} \tag{3.25}
$$

for some $C > 0$. It follows that

$$
\frac{\gamma^2(n-1)}{\gamma^2(n)}\,\frac{N-n+1}{n^2} \le \frac{C^2}{N-n+1} \le \frac{\gamma(n-1)}{\gamma(n)}\,\frac{C^3}{n},
$$

and

$$
\frac{\gamma^2(n-1)}{\gamma^2(n)}\,\frac{(N-n+1)^2}{n^3} \le \frac{C^2}{n}.
$$

Thus, recalling also that $n \ge \frac{N}{2}$, (3.24) implies, for some $C_L > 0$ depending on $L$,

$$
\frac{N\,[\gamma(n)\wedge\gamma(n-1)]}{\nu^N_\Lambda[f|\eta_{\Lambda_1}=n]\vee\nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]}\, B^2(n) \le C_L\,\gamma(n-1) \left(\nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1] + \nu^{n-1}_{\Lambda_1}\left[\operatorname{Ent}_{\nu^{N-n+1}_{\Lambda_2}}(f)\right] + \nu^{N-n+1}_{\Lambda_2}\left[\operatorname{Ent}_{\nu^{n-1}_{\Lambda_1}}(f)\right]\right). \tag{3.26}
$$

Now, we bound the two terms $\operatorname{Ent}_{\nu^{n-1}_{\Lambda_1}}(f)$ and $\operatorname{Ent}_{\nu^{N-n+1}_{\Lambda_2}}(f)$ by, respectively,

$$
s(L,N)\,\mathcal{E}_{\nu^{n-1}_{\Lambda_1}}\left(\sqrt{f},\sqrt{f}\right) \quad \text{and} \quad s(L,N)\,\mathcal{E}_{\nu^{N-n+1}_{\Lambda_2}}\left(\sqrt{f},\sqrt{f}\right),
$$

and insert these estimates in (3.26). What comes out is then used to estimate (3.5), after having obtained the corresponding estimates for $n < \frac{N}{2}$. Recalling the estimates for $A(n)$, straightforward computations yield

$$
\operatorname{Ent}_{\nu^N_\Lambda}\left(\nu^N_\Lambda[f|\eta_{\Lambda_1}]\right) \le C_L\,\mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right) + C_L\,\nu^N_\Lambda[f] + C_L\,s(L,N)\,\mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right), \tag{3.27}
$$

for some $C_L > 0$ depending on $L$. Inequality (3.27), together with (3.3), gives, for a possibly different constant $C_L > 0$,

$$
\operatorname{Ent}_{\nu^N_\Lambda}(f) \le C_L\,\mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right) + C_L\,\nu^N_\Lambda[f] + C_L\,s(L,N)\,\mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right). \tag{3.28}
$$

To deal with the term $\nu^N_\Lambda[f]$ in (3.28) we use the following well-known argument. Set $\tilde f = \left(\sqrt{f} - \nu^N_\Lambda[\sqrt{f}]\right)^2$. By the Rothaus inequality (see [1], Lemma 4.3.8)

$$
\operatorname{Ent}_{\nu^N_\Lambda}(f) \le \operatorname{Ent}_{\nu^N_\Lambda}(\tilde f) + 2\,\nu^N_\Lambda\left[\sqrt{f},\sqrt{f}\right].
$$

Using this inequality and replacing $f$ by $\tilde f$ in (3.28) we get, for a different $C_L$,

$$
\operatorname{Ent}_{\nu^N_\Lambda}(f) \le C_L\,\mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right) + C_L\,\nu^N_\Lambda\left[\sqrt{f},\sqrt{f}\right] + C_L\,s(L,N)\,\mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right) \le D_L\,\mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right) + D_L\,s(L,N)\,\mathcal{E}_{\nu^N_\Lambda}\left(\sqrt{f},\sqrt{f}\right), \tag{3.29}
$$

where, in the last line, we have used the Poincaré inequality (1.3). Therefore

$$
s(2L,N) \le D_L[s(L,N) + 1], \tag{3.30}
$$

which implies (2.6), provided we prove the following "basis step" for the induction.

Proposition 3.7

$$
\sup_N s(2,N) < +\infty.
$$

The proof of Proposition 3.7 is also given in Section 8. As we pointed out above, (3.30) gives no indication of how $s(L) = \sup_N s(L,N)$ grows with $L$. The proof of the actual $L^2$-growth is given in [3].
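The Rothaus inequality used in the argument above can also be probed numerically; the sketch below is an illustrative check of ours, with $\tilde f$ as in the text:

```python
import math

def rothaus_gap(mu, f):
    """r.h.s. minus l.h.s. of the Rothaus inequality
    Ent(f) <= Ent(f~) + 2 Var(sqrt f), where f~ = (sqrt f - mu[sqrt f])^2;
    nonnegative whenever sqrt f is not constant."""
    def mean(v):
        return sum(m * x for m, x in zip(mu, v))
    def ent(v):
        mv = mean(v)
        # convention 0 log 0 = 0, as in the paper
        return sum(m * x * math.log(x) for m, x in zip(mu, v) if x > 0) - mv * math.log(mv)
    s = [math.sqrt(x) for x in f]
    ms = mean(s)
    f_tilde = [(x - ms) ** 2 for x in s]
    var = mean(f_tilde)               # Var(sqrt f) = nu[sqrt f, sqrt f]
    return ent(f_tilde) + 2 * var - ent(f)
```

A nonnegative gap on concrete choices of `mu` and `f` is consistent with the lemma; of course this is evidence, not a proof.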

4 Representation of $\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] - \nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]$: proof of Proposition 3.2

We begin with a simple consequence of reversibility.

Lemma 4.1 Let $\Lambda = \Lambda_1\cup\Lambda_2$ be a partition of $\Lambda$, $N \in \mathbb{N}$. Define $\gamma(n) := \nu^N_\Lambda[\eta_{\Lambda_1}=n]$ and $\tau_{xy}f := \partial_{xy}f + f$. Then:

$$
\nu^N_\Lambda\left[f\,\mathbf{1}(\eta_x>0)\,\middle|\,\eta_{\Lambda_1}=n\right] = \begin{cases} \dfrac{\gamma(n-1)}{\gamma(n)}\,\nu^N_\Lambda\left[\dfrac{c(\eta_y)}{c(\eta_x+1)}\,\tau_{yx}f \,\middle|\, \eta_{\Lambda_1}=n-1\right] & \text{for any } x\in\Lambda_1 \text{ and } y\in\Lambda_2 \\[2ex] \nu^N_\Lambda\left[\dfrac{c(\eta_y)}{c(\eta_x+1)}\,\tau_{yx}f \,\middle|\, \eta_{\Lambda_1}=n\right] & \text{for any } x,y\in\Lambda_2 \end{cases}
$$

for any $f \in L^1(\nu^N_\Lambda)$.

Proof. Assume that $y \in \Lambda_2$. We use first the detailed balance condition (2.3), then the change of variables $\xi \mapsto \xi+\delta_y-\delta_x$, to obtain

$$
\nu^N_\Lambda\left[f\,\mathbf{1}(\eta_x>0)\,\middle|\,\eta_{\Lambda_1}=n\right] = \frac{\sum_{\xi:\xi_x>0}\nu^N_\Lambda[\eta=\xi]\,f(\xi)\,\mathbf{1}(\xi_{\Lambda_1}=n)}{\nu^N_\Lambda[\eta_{\Lambda_1}=n]} = \frac{\sum_{\xi:\xi_x>0}\nu^N_\Lambda[\eta=\xi-\delta_x+\delta_y]\,\frac{c(\xi_y+1)}{c(\xi_x)}\,f(\xi)\,\mathbf{1}(\xi_{\Lambda_1}=n)}{\nu^N_\Lambda[\eta_{\Lambda_1}=n]}
$$
$$
= \begin{cases} \dfrac{\gamma(n-1)}{\gamma(n)}\,\nu^N_\Lambda\left[\dfrac{c(\eta_y)}{c(\eta_x+1)}\,\tau_{yx}f \,\middle|\, \eta_{\Lambda_1}=n-1\right] & \text{if } x\in\Lambda_1 \\[2ex] \nu^N_\Lambda\left[\dfrac{c(\eta_y)}{c(\eta_x+1)}\,\tau_{yx}f \,\middle|\, \eta_{\Lambda_1}=n\right] & \text{if } x\in\Lambda_2. \end{cases}
$$

Lemma 4.2 Let $\Lambda = \Lambda_1\cup\Lambda_2$ be a partition of $\Lambda$, $N \in \mathbb{N}$. Define $\gamma(n) := \nu^N_\Lambda[\eta_{\Lambda_1}=n]$ and $h : \mathbb{N}\to\mathbb{R}$ by $h(n) := (n+1)/c(n+1)$. Then:

$$
\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] - \nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1] = \frac{\gamma(n-1)}{n\gamma(n)} \left(\nu^N_\Lambda\Biggl[c(\eta_y)\sum_{x\in\Lambda_1} h(\eta_x)\,\partial_{yx}f \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] + \nu^N_\Lambda\Biggl[f,\, c(\eta_y)\sum_{x\in\Lambda_1} h(\eta_x) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]\right), \tag{4.1}
$$

for any $f \in L^1(\nu^N_\Lambda)$, $y\in\Lambda_2$ and $n\in\{1,\dots,N\}$.

Proof. We begin by writing

$$
\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] = -\frac{1}{n}\sum_{x\in\Lambda_1}\nu^N_\Lambda\left[\eta_x\,\partial_{xy}f\,\middle|\,\eta_{\Lambda_1}=n\right] + \frac{1}{n}\sum_{x\in\Lambda_1}\nu^N_\Lambda\left[\eta_x\,\tau_{xy}f\,\middle|\,\eta_{\Lambda_1}=n\right].
$$

By Lemma 4.1 we get

$$
-\nu^N_\Lambda\left[\eta_x\,\partial_{xy}f\,\middle|\,\eta_{\Lambda_1}=n\right] = \frac{\gamma(n-1)}{\gamma(n)}\,\nu^N_\Lambda\Biggl[\frac{c(\eta_y)}{c(\eta_x+1)}\,(\eta_x+1)\,\partial_{yx}f \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] = \frac{\gamma(n-1)}{\gamma(n)}\,\nu^N_\Lambda\left[c(\eta_y)h(\eta_x)\,\partial_{yx}f\,\middle|\,\eta_{\Lambda_1}=n-1\right],
$$

and similarly

$$
\nu^N_\Lambda\left[\eta_x\,\tau_{xy}f\,\middle|\,\eta_{\Lambda_1}=n\right] = \frac{\gamma(n-1)}{\gamma(n)}\,\nu^N_\Lambda\left[c(\eta_y)h(\eta_x)\,f\,\middle|\,\eta_{\Lambda_1}=n-1\right].
$$

Thus

$$
\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] = \frac{\gamma(n-1)}{n\gamma(n)} \left(\nu^N_\Lambda\Biggl[c(\eta_y)\sum_{x\in\Lambda_1} h(\eta_x)\,\partial_{yx}f \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] + \nu^N_\Lambda\Biggl[c(\eta_y)\,f\sum_{x\in\Lambda_1} h(\eta_x) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]\right).
$$

Letting $f \equiv 1$ in this last formula we obtain

$$
\frac{\gamma(n-1)}{n\gamma(n)}\,\nu^N_\Lambda\Biggl[c(\eta_y)\sum_{x\in\Lambda_1} h(\eta_x) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] = 1,
$$

which implies

$$
\frac{\gamma(n-1)}{n\gamma(n)}\,\nu^N_\Lambda\Biggl[c(\eta_y)\,f\sum_{x\in\Lambda_1} h(\eta_x) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] = \nu^N_\Lambda\left[f\,\middle|\,\eta_{\Lambda_1}=n-1\right] + \frac{\gamma(n-1)}{n\gamma(n)}\,\nu^N_\Lambda\Biggl[f,\, c(\eta_y)\sum_{x\in\Lambda_1} h(\eta_x) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]
$$

and, therefore,

$$
\nu^N_\Lambda[f|\eta_{\Lambda_1}=n] - \nu^N_\Lambda\left[f\,\middle|\,\eta_{\Lambda_1}=n-1\right] = \frac{\gamma(n-1)}{n\gamma(n)} \left(\nu^N_\Lambda\Biggl[c(\eta_y)\sum_{x\in\Lambda_1} h(\eta_x)\,\partial_{yx}f \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] + \nu^N_\Lambda\Biggl[f,\, c(\eta_y)\sum_{x\in\Lambda_1} h(\eta_x) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]\right).
$$

Proof of Proposition 3.2. Equation (3.6) is obtained from (4.1) by averaging over $y \in \Lambda_2$.

5 Bounds on $A(n)$: proof of Proposition 3.3

Proof of Proposition 3.3. The Proposition is proved if we can show that for $1 \le n < N/2$

$$
A^2(n) \le \frac{CL^2}{N-n} \left(\nu^N_\Lambda[f|\eta_{\Lambda_1}=n]\vee\nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]\right) \times \left[\frac{\gamma(n-1)}{\gamma(n)}\,\mathcal{E}_{\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}=n-1]}\left(\sqrt{f},\sqrt{f}\right) + \mathcal{E}_{\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}=n]}\left(\sqrt{f},\sqrt{f}\right)\right],
$$

while for $n \ge N/2$

$$
A^2(n) \le \frac{CL^2}{n} \left(\nu^N_\Lambda[f|\eta_{\Lambda_1}=n]\vee\nu^N_\Lambda[f|\eta_{\Lambda_1}=n-1]\right) \times \left[\frac{\gamma(n-1)}{\gamma(n)}\,\mathcal{E}_{\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}=n-1]}\left(\sqrt{f},\sqrt{f}\right) + \mathcal{E}_{\nu^N_\Lambda[\cdot|\eta_{\Lambda_1}=n]}\left(\sqrt{f},\sqrt{f}\right)\right]. \tag{5.1}
$$

We will prove only this last bound, the proof of the previous one being identical. So assume $n \ge N/2$ and notice that $\partial_{yx}f = (\partial_{yx}\sqrt{f})(\sqrt{f}+\tau_{yx}\sqrt{f})$. By the Cauchy-Schwarz inequality

$$
\left\{\frac{\gamma(n-1)}{nL\gamma(n)}\,\nu^N_\Lambda\Biggl[\sum_{x\in\Lambda_1,\,y\in\Lambda_2} c(\eta_y)h(\eta_x)\,\partial_{yx}f \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]\right\}^2
$$
$$
\le \left[\frac{\gamma(n-1)}{nL\gamma(n)}\right]^2 \nu^N_\Lambda\Biggl[\sum_{x\in\Lambda_1,\,y\in\Lambda_2} c(\eta_y)h(\eta_x)\left(\partial_{yx}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] \times \nu^N_\Lambda\Biggl[\sum_{x\in\Lambda_1,\,y\in\Lambda_2} c(\eta_y)h(\eta_x)\left(\sqrt{f}+\tau_{yx}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]
$$
$$
\le 2a^2 \left[\frac{\gamma(n-1)}{nL\gamma(n)}\right]^2 \nu^N_\Lambda\Biggl[\sum_{x\in\Lambda_1,\,y\in\Lambda_2} c(\eta_y)\left(\partial_{yx}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] \times \nu^N_\Lambda\Biggl[\sum_{x\in\Lambda_1,\,y\in\Lambda_2} c(\eta_y)\left(f+\tau_{yx}f\right) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr], \tag{5.2}
$$

where $a := \|h\|_\infty$ ((2.4) implies boundedness of $h$). In order to bound the last factor in (5.2) we observe that by Lemma 4.1

$$
\nu^N_\Lambda\left[c(\eta_y)\,\tau_{yx}f\,\middle|\,\eta_{\Lambda_1}=n-1\right] = \frac{\gamma(n)}{\gamma(n-1)}\,\nu^N_\Lambda\left[c(\eta_x)\,f\,\middle|\,\eta_{\Lambda_1}=n\right],
$$

so that

$$
\nu^N_\Lambda\Biggl[\sum_{x\in\Lambda_1,\,y\in\Lambda_2} c(\eta_y)(f+\tau_{yx}f) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] = \sum_{x\in\Lambda_1,\,y\in\Lambda_2} \left(\nu^N_\Lambda\left[c(\eta_y)f\,\middle|\,\eta_{\Lambda_1}=n-1\right] + \frac{\gamma(n)}{\gamma(n-1)}\,\nu^N_\Lambda\left[c(\eta_x)f\,\middle|\,\eta_{\Lambda_1}=n\right]\right)
$$
$$
\le a\left\{(N-n+1)L\,\nu^N_\Lambda\left[f\,\middle|\,\eta_{\Lambda_1}=n-1\right] + \frac{\gamma(n)\,nL}{\gamma(n-1)}\,\nu^N_\Lambda\left[f\,\middle|\,\eta_{\Lambda_1}=n\right]\right\},
$$

where the fact $c(k) \le ak$ (see (2.4)) has been used to obtain the last line. This implies that

$$
\frac{\gamma(n-1)}{\gamma(n)nL}\,\nu^N_\Lambda\Biggl[\sum_{x\in\Lambda_1,\,y\in\Lambda_2} c(\eta_y)(f+\tau_{yx}f) \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] \le a\left\{\frac{\gamma(n-1)(N-n+1)}{\gamma(n)\,n}\,\nu^N_\Lambda\left[f\,\middle|\,\eta_{\Lambda_1}=n-1\right] + \nu^N_\Lambda\left[f\,\middle|\,\eta_{\Lambda_1}=n\right]\right\} \le B_1\left(\nu^N_\Lambda\left[f\,\middle|\,\eta_{\Lambda_1}=n\right]\vee\nu^N_\Lambda\left[f\,\middle|\,\eta_{\Lambda_1}=n-1\right]\right),
$$

where in the last step we used Proposition 7.5. By plugging this bound into (5.2) we get

$$
\left\{\frac{\gamma(n-1)}{nL\gamma(n)}\,\nu^N_\Lambda\Biggl[\sum_{x\in\Lambda_1,\,y\in\Lambda_2} c(\eta_y)h(\eta_x)\,\partial_{yx}f \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]\right\}^2 \le B_2\left(\nu^N_\Lambda\left[f\,\middle|\,\eta_{\Lambda_1}=n\right]\vee\nu^N_\Lambda\left[f\,\middle|\,\eta_{\Lambda_1}=n-1\right]\right) \times \frac{\gamma(n-1)}{nL\gamma(n)}\,\nu^N_\Lambda\Biggl[\sum_{x\in\Lambda_1,\,y\in\Lambda_2} c(\eta_y)h(\eta_x)\left(\partial_{yx}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]. \tag{5.3}
$$

Now we have to bound the last factor in the right hand side of (5.3). Forx, y Z the path

between x and y will be the sequence of nearest-neighbor integers γ(x, y) ={z0, z1, . . . , zr}

ofZ such that fori= 1, . . . , r |zi1zi|= 1, fori6=j zi 6=j, z0 =xand zr =y. Obviously,

for x, y Λ, we have r = |γ(x, y)| ≤ 2L. Now let such a γ(x, y) be given. Notice that if

ηy ≥1, we can write

(∂yx

p

f)(η) =

r−1 X

k=0 ³

∂zk+1zk p

f´(ηδzr +δzk+1).

By Jensen inequality, we obtain

c(ηy)

³

∂yx

p

f´2(η)≤2Lc(ηy) r−1 X

k=0 ³

∂zk+1zk p

f´2(η−δzr +δzk+1),

so

$$
\frac{\gamma(n-1)}{\gamma(n)nL}\,\nu^N_\Lambda\Biggl[\sum_{x\in\Lambda_1,\,y\in\Lambda_2} c(\eta_y)h(\eta_x)\left(\partial_{yx}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] \le \frac{2\gamma(n-1)}{\gamma(n)\,n}\sum_{x\in\Lambda_1,\,y\in\Lambda_2}\sum_{k=0}^{r-1} \nu^N_\Lambda\Biggl[c(\eta_y)h(\eta_x)\left(\tau_{z_r z_{k+1}}\partial_{z_{k+1}z_k}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]
$$
$$
\le \frac{2a\,\gamma(n-1)}{\gamma(n)\,n}\sum_{x\in\Lambda_1,\,y\in\Lambda_2}\sum_{k=0}^{r-1} \nu^N_\Lambda\Biggl[c(\eta_y)\left(\tau_{yz_{k+1}}\partial_{z_{k+1}z_k}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr],
$$

where again $a = \|h\|_\infty$.

By Lemma 4.1

$$
\nu^N_\Lambda\Biggl[c(\eta_y)\left(\tau_{yz_{k+1}}\partial_{z_{k+1}z_k}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr] = \begin{cases} \dfrac{\gamma(n)}{\gamma(n-1)}\,\nu^N_\Lambda\left[c(\eta_{z_{k+1}})\left(\partial_{z_{k+1}z_k}\sqrt{f}\right)^2\,\middle|\,\eta_{\Lambda_1}=n\right] & \text{if } z_{k+1}\in\Lambda_1 \\[2ex] \nu^N_\Lambda\left[c(\eta_{z_{k+1}})\left(\partial_{z_{k+1}z_k}\sqrt{f}\right)^2\,\middle|\,\eta_{\Lambda_1}=n-1\right] & \text{if } z_{k+1}\in\Lambda_2, \end{cases}
$$

which implies

$$
\frac{\gamma(n-1)}{nL\gamma(n)}\,\nu^N_\Lambda\Biggl[\sum_{x\in\Lambda_1,\,y\in\Lambda_2} c(\eta_y)h(\eta_x)\left(\partial_{yx}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]
$$
$$
\le \frac{2a}{n}\sum_{x\in\Lambda_1,\,y\in\Lambda_2}\left\{\sum_{k:z_{k+1}\in\Lambda_1} \nu^N_\Lambda\Biggl[c(\eta_{z_{k+1}})\left(\partial_{z_{k+1}z_k}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n\Biggr] + \frac{\gamma(n-1)}{\gamma(n)}\sum_{k:z_{k+1}\in\Lambda_2} \nu^N_\Lambda\Biggl[c(\eta_{z_{k+1}})\left(\partial_{z_{k+1}z_k}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]\right\}
$$
$$
\le \frac{2aL^2}{n}\sum_{\substack{x,y\in\Lambda\\|x-y|=1}}\left\{\nu^N_\Lambda\Biggl[c(\eta_x)\left(\partial_{xy}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n\Biggr] + \frac{\gamma(n-1)}{\gamma(n)}\,\nu^N_\Lambda\Biggl[c(\eta_x)\left(\partial_{xy}\sqrt{f}\right)^2 \,\Bigg|\, \eta_{\Lambda_1}=n-1\Biggr]\right\}.
$$

By plugging this bound in (5.3) we get (5.1).
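As a sanity check on the path decomposition used in this proof: the expansion of $\partial_{yx}\sqrt{f}$ along a path is a telescoping identity, and it can be verified mechanically. The sketch below (an illustration of ours, with an arbitrary function $g$ standing in for $\sqrt{f}$) follows the definition $\partial_{xy}f(\eta) = f(\eta-\delta_x+\delta_y)-f(\eta)$:

```python
def check_path_decomposition(g, eta, path):
    """Check (d_yx g)(eta) = sum_k (d_{z_{k+1} z_k} g)(eta - d_{z_r} + d_{z_{k+1}}),
    where path = (z_0, ..., z_r), z_0 = x, z_r = y, and eta_y >= 1."""
    def move(cfg, frm, to):
        cfg = list(cfg); cfg[frm] -= 1; cfg[to] += 1
        return tuple(cfg)
    x, y = path[0], path[-1]
    lhs = g(move(eta, y, x)) - g(eta)          # (d_yx g)(eta)
    rhs = 0.0
    for k in range(len(path) - 1):
        shifted = move(eta, y, path[k + 1])    # eta - d_{z_r} + d_{z_{k+1}}
        rhs += g(move(shifted, path[k + 1], path[k])) - g(shifted)
    return lhs, rhs

# example: a path 0-1-2-3 with eta_y = eta[3] = 3 >= 1
g = lambda cfg: sum((i + 1) * v ** 2 for i, v in enumerate(cfg))
lhs, rhs = check_path_decomposition(g, (2, 0, 1, 3), (0, 1, 2, 3))
```

The sum telescopes because the $k$-th term equals $g(\eta-\delta_y+\delta_{z_k}) - g(\eta-\delta_y+\delta_{z_{k+1}})$.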

6 Rough estimates: proofs of Propositions 3.5 and 3.6

Proof of Proposition 3.5. We begin with the proof of (3.20). Denote by $\Omega^N_\Lambda$ the set of configurations having $N$ particles in $\Lambda$. Then

$$
\nu^N_\Lambda[\{\eta\}] = \frac{\mathbf{1}(\eta\in\Omega^N_\Lambda)}{Z^N_\Lambda}\prod_{x\in\Lambda}\frac{1}{c(\eta_x)!},
$$

where $Z^N_\Lambda$ is a normalization factor. It is therefore easily checked that

$$
c(\eta_x)\,\nu^N_\Lambda[\{\eta\}] = \frac{Z^{N-1}_\Lambda}{Z^N_\Lambda}\,\nu^{N-1}_\Lambda[\{\eta-\delta_x\}] \tag{6.1}
$$

for any $x\in\Lambda$. Thus, for every function $f$,

$$
\nu^N_\Lambda[c(\eta_x)f] = \frac{Z^{N-1}_\Lambda}{Z^N_\Lambda}\,\nu^{N-1}_\Lambda[\sigma_x f], \tag{6.2}
$$

where $\sigma_x f(\eta) = f(\eta+\delta_x)$. Letting $f \equiv 1$ in (6.2) we have $\frac{Z^{N-1}_\Lambda}{Z^N_\Lambda} = \nu^N_\Lambda[c(\eta_x)]$, and so we rewrite (6.2) as

$$
\nu^N_\Lambda[c(\eta_x)f] = \nu^N_\Lambda[c(\eta_x)]\,\nu^{N-1}_\Lambda[\sigma_x f]. \tag{6.3}
$$
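Identity (6.3) can be verified by brute force on a small volume; the following sketch is an illustrative check of ours (the rate function and the test function are arbitrary example choices):

```python
from math import prod
from itertools import product

def canonical(c, L, N):
    """nu^N_Lambda on {eta in N^L : sum eta = N}, weights prod_x 1/c(eta_x)!."""
    cf = lambda n: prod(c(k) for k in range(1, n + 1))   # c(n)!
    confs = [e for e in product(range(N + 1), repeat=L) if sum(e) == N]
    w = {e: 1.0 / prod(cf(n) for n in e) for e in confs}
    Z = sum(w.values())
    return {e: v / Z for e, v in w.items()}

c = lambda n: 0.0 if n == 0 else 2.0 + n     # any rate with c(0) = 0, c(n) > 0
f = lambda e: 1.0 + e[0] * e[1]              # an arbitrary test function
nuN, nuNm1 = canonical(c, 3, 5), canonical(c, 3, 4)
x = 0
lhs = sum(p * c(e[x]) * f(e) for e, p in nuN.items())          # nu^N[c(eta_x) f]
mean_c = sum(p * c(e[x]) for e, p in nuN.items())              # nu^N[c(eta_x)]
# sigma_x f(eta) = f(eta + delta_x)
rhs = mean_c * sum(p * f(e[:x] + (e[x] + 1,) + e[x + 1:]) for e, p in nuNm1.items())
```

The agreement of `lhs` and `rhs` is exactly the change-of-measure identity (6.3) relating $\nu^N_\Lambda$ and $\nu^{N-1}_\Lambda$.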

Now, let $t > 0$, and define $\varphi(t) := \nu^N_\Lambda\left[e^{tc(\eta_x)}\right]$.

We now estimate $\varphi(t)$ by means of the so-called Herbst argument (see [1], Section 7.4.1). By direct computation, Jensen's inequality and (6.3), we have

$$
t\varphi'(t) - \varphi(t)\log\varphi(t) = t\,\nu^N_\Lambda\left[c(\eta_x)e^{tc(\eta_x)}\right] - \nu^N_\Lambda\left[e^{tc(\eta_x)}\right]\log\nu^N_\Lambda\left[e^{tc(\eta_x)}\right] \le t\,\nu^N_\Lambda\left[c(\eta_x)e^{tc(\eta_x)}\right] - t\,\nu^N_\Lambda[c(\eta_x)]\,\nu^N_\Lambda\left[e^{tc(\eta_x)}\right] = t\,\nu^N_\Lambda[c(\eta_x)]\left(\nu^{N-1}_\Lambda\left[e^{tc(\eta_x+1)}\right] - \nu^N_\Lambda\left[e^{tc(\eta_x)}\right]\right). \tag{6.4}
$$

We now claim that, for every $x\in\Lambda$ and $0 \le t \le 1$,

$$
\left|\nu^{N-1}_\Lambda\left[e^{tc(\eta_x+1)}\right] - \nu^N_\Lambda\left[e^{tc(\eta_x)}\right]\right| \le C_L\,t\,\nu^N_\Lambda\left[e^{tc(\eta_x)}\right], \tag{6.5}
$$

for some constant $C_L$ depending on $L$ but not on $N$. For the moment, let us accept (6.5), and show how it is used to complete the proof. By (6.4), (6.5) and the fact that $\nu^N_\Lambda[c(\eta_x)] \le CN/L$ we get

$$
t\varphi'(t) - \varphi(t)\log\varphi(t) \le C_L N t^2 \varphi(t),
$$

for $0 \le t \le 1$ and some possibly different $C_L$. Equivalently, letting $\psi(t) = \frac{\log\varphi(t)}{t}$,

$$
\psi'(t) \le C_L N. \tag{6.6}
$$

Observing that $\lim_{t\downarrow0}\psi(t) = \nu^N_\Lambda[c(\eta_x)]$, by (6.6) and the Gronwall lemma we have

$$
\psi(t) \le \nu^N_\Lambda[c(\eta_x)] + C_L N t,
$$

from which (3.20) easily follows, for $t \ge 0$.

We now prove (6.5). We shall use repeatedly the inequality

$$
|e^x - e^y| \le |x-y|\,e^{|x-y|}\,e^x, \tag{6.7}
$$

which holds for $x,y \in \mathbb{R}$. Using (6.7) and the fact that, by condition (LG), $\|c(\eta_x+1) - c(\eta_x)\|_\infty \le a_1$, we have

$$
\left|\nu^{N-1}_\Lambda\left[e^{tc(\eta_x+1)}\right] - \nu^{N-1}_\Lambda\left[e^{tc(\eta_x)}\right]\right| \le a_1\,t\,e^{a_1 t}\,\nu^{N-1}_\Lambda\left[e^{tc(\eta_x)}\right],
$$

from which we see that (6.5) follows if we show

$$
\left|\nu^{N-1}_\Lambda\left[e^{tc(\eta_x)}\right] - \nu^N_\Lambda\left[e^{tc(\eta_x)}\right]\right| \le C_L\,t\,\nu^N_\Lambda\left[e^{tc(\eta_x)}\right]. \tag{6.8}
$$

At this point we use the notion of stochastic order between probability measures. For two probabilities $\mu$ and $\nu$ on a partially ordered space $X$, we say that $\nu \preceq \mu$ if $\int f\,d\nu \le \int f\,d\mu$ for every integrable, increasing $f$. This is equivalent to the existence of a monotone coupling of $\nu$ and $\mu$, i.e. a probability $P$ on $X\times X$ supported on $\{(x,y)\in X\times X : x \le y\}$, having marginals $\nu$ and $\mu$ respectively (see e.g. [6], Chapter 2).

As we will see, (6.8) would not be hard to prove if we had $\nu^{N-1}_\Lambda \preceq \nu^N_\Lambda$. Under assumptions (LG) and (M) a slightly weaker fact holds, namely that there is a constant $B > 0$, independent of $N$ and $L$, such that if $N \ge N' + BL$ then $\nu^{N'}_\Lambda \preceq \nu^N_\Lambda$. Using this fact,

we may assume that $N > BL$. Indeed, in the case $N \le BL$ there is no real dependence on $N$, and (3.20) can be proved by observing that

$$
\nu^N_\Lambda\left[e^{t(c(\eta_x)-\nu^N_\Lambda[c(\eta_x)])}\right] \le 1 + \frac{t^2}{2}\,\nu^N_\Lambda[c(\eta_x),c(\eta_x)]\,e^{2t\sup\{c(\eta_x):\eta\in\Omega^N_\Lambda\}} \le 1 + C_L t^2 \le e^{C_L t^2} \le e^{C_L N t^2}
$$

as $1 \le N \le BL$ ((3.20) is trivial for $N = 0$). For $N \ge BL$ we use the rough inequality

$$
\left|\nu^{N-1}_\Lambda\left[e^{tc(\eta_x)}\right]-\nu^N_\Lambda\left[e^{tc(\eta_x)}\right]\right| \le \left|\nu^{N-1}_\Lambda\left[e^{tc(\eta_x)}\right]-\nu^{N-1-BL}_\Lambda\left[e^{tc(\eta_x)}\right]\right| + \left|\nu^N_\Lambda\left[e^{tc(\eta_x)}\right]-\nu^{N-1-BL}_\Lambda\left[e^{tc(\eta_x)}\right]\right|. \tag{6.9}
$$

Denote by $Q$ the probability measure on $\mathbb{N}^\Lambda\times\mathbb{N}^\Lambda$ that realizes a monotone coupling of $\nu^{N-1}_\Lambda$ and $\nu^{N-1-BL}_\Lambda$. In other words, $Q$ has $\nu^{N-1}_\Lambda$ and $\nu^{N-1-BL}_\Lambda$ as marginals, and

$$
Q[\{(\eta,\xi) : \eta_x \ge \xi_x \ \forall x\in\Lambda\}] = 1. \tag{6.10}
$$

Using again (6.7),

$$
\left|\nu^{N-1}_\Lambda\left[e^{tc(\eta_x)}\right]-\nu^{N-1-BL}_\Lambda\left[e^{tc(\eta_x)}\right]\right| = \left|Q\left[e^{tc(\eta_x)}-e^{tc(\xi_x)}\right]\right| \le t\,Q\left[|c(\eta_x)-c(\xi_x)|\,e^{t|c(\eta_x)-c(\xi_x)|}\,e^{tc(\eta_x)}\right]. \tag{6.11}
$$

Since, $Q$-a.s., $\eta_x \ge \xi_x$ for all $x\in\Lambda$ and $\eta_\Lambda = \xi_\Lambda + BL$, necessarily $\eta_x \le \xi_x + BL$ for all $x\in\Lambda$. Thus, it follows that, for some $C > 0$,

$$
|c(\eta_x)-c(\xi_x)| \le CL \quad Q\text{-a.s.}
$$

Inserting this inequality in (6.11), we get, for $t \le 1$ and some $C_L > 0$,

$$
\left|\nu^{N-1}_\Lambda\left[e^{tc(\eta_x)}\right]-\nu^{N-1-BL}_\Lambda\left[e^{tc(\eta_x)}\right]\right| \le C_L\,t\,\nu^{N-1}_\Lambda\left[e^{tc(\eta_x)}\right]. \tag{6.12}
$$

With the same arguments, it is shown that

$$
\left|\nu^N_\Lambda\left[e^{tc(\eta_x)}\right]-\nu^{N-1-BL}_\Lambda\left[e^{tc(\eta_x)}\right]\right| \le C_L\,t\,\nu^N_\Lambda\left[e^{tc(\eta_x)}\right]. \tag{6.13}
$$

In order to put together (6.12) and (6.13), we need to show that, for $0 \le t \le 1$,

$$
\nu^{N-1}_\Lambda\left[e^{tc(\eta_x)}\right] \le C_L\,\nu^N_\Lambda\left[e^{tc(\eta_x)}\right], \tag{6.14}
$$

for some $L$-dependent $C_L$. By condition (LG) and (6.3) we have

$$
\nu^{N-1}_\Lambda\left[e^{tc(\eta_x)}\right] \le e^{ta_1}\,\nu^{N-1}_\Lambda\left[e^{tc(\eta_x+1)}\right] = \frac{e^{ta_1}}{\nu^N_\Lambda[c(\eta_x)]}\,\nu^N_\Lambda\left[c(\eta_x)e^{tc(\eta_x)}\right] \le C_L\,\nu^N_\Lambda\left[e^{tc(\eta_x)}\right],
$$

where, for the last step, we have used the facts that, for some $\epsilon > 0$, $\nu^N_\Lambda[c(\eta_x)] \ge \epsilon\frac{N}{L}$ and $c(\eta_x) \le \epsilon^{-1}N$ $\nu^N_\Lambda$-a.s. (for both inequalities we use (2.4)).

We now prove (3.21). The idea is to use the fact that the tails of $N(h(\eta_x)-\nu^N_\Lambda[h(\eta_x)])$ are not thicker than those of $c(\eta_x)-\nu^N_\Lambda[c(\eta_x)]$. Similarly to (6.1), note that

$$
h(\eta_x)\,\nu^N_\Lambda[\{\eta\}] = \frac{Z^{N+1}_\Lambda}{Z^N_\Lambda}\,(\eta_x+1)\,\nu^{N+1}_\Lambda[\{\eta+\delta_x\}], \tag{6.15}
$$

which implies

$$
\nu^N_\Lambda[h(\eta_x)f] = \frac{Z^{N+1}_\Lambda}{Z^N_\Lambda}\,\nu^{N+1}_\Lambda[\eta_x f(\eta-\delta_x)]. \tag{6.16}
$$

Letting $f \equiv 1$ in (6.16) we obtain

$$
\frac{Z^{N+1}_\Lambda}{Z^N_\Lambda} = \nu^N_\Lambda[h(\eta_x)]\,\frac{L}{N+1},
$$

which, together with the previously obtained identity $\frac{Z^{N-1}_\Lambda}{Z^N_\Lambda} = \nu^N_\Lambda[c(\eta_x)]$, yields

$$
\nu^N_\Lambda[h(\eta_x)] = \frac{N+1}{L\,\nu^{N+1}_\Lambda[c(\eta_x)]}. \tag{6.17}
$$

Thus

$$
\left|h(\eta_x)-\nu^N_\Lambda[h(\eta_x)]\right| = \left|\frac{\eta_x+1}{c(\eta_x+1)} - \frac{N+1}{L\,\nu^{N+1}_\Lambda[c(\eta_x)]}\right| \le \frac{1}{c(\eta_x+1)\,\nu^{N+1}_\Lambda[c(\eta_x)]}\left[(\eta_x+1)\left|c(\eta_x+1)-\nu^{N+1}_\Lambda[c(\eta_x)]\right| + c(\eta_x+1)\left|\eta_x+1-\frac{N+1}{L}\right|\right]
$$
$$
\le \frac{C}{N}\left[\left|c(\eta_x+1)-\nu^{N+1}_\Lambda[c(\eta_x)]\right| + \left|\eta_x+1-\frac{N+1}{L}\right|\right], \tag{6.18}
$$

where, in the last step, we use the fact that $\nu^{N+1}_\Lambda[c(\eta_x)] \ge \epsilon N/L$ for some $\epsilon > 0$. In (6.18) and in what follows, the $L$-dependence of constants is omitted.

For $\rho > 0$, let $c(\rho)$ be obtained by linear interpolation of $c(n)$, $n\in\mathbb{N}$. Observe that, by (1.3) with $f(\eta) = \eta_x$,

$$
\nu^N_\Lambda[\eta_x,\eta_x] \le CL^2\,\nu^N_\Lambda[c(\eta_x)] \le C_L N
$$

for some $C_L > 0$ that depends on $L$. Thus, by Condition (LG),

$$
\left|\nu^N_\Lambda[c(\eta_x)] - c(N/L)\right| \le c_1\,\nu^N_\Lambda\left[\left|\eta_x-\frac{N}{L}\right|\right] \le c_1\sqrt{\nu^N_\Lambda[\eta_x,\eta_x]} \le C\sqrt{N}, \tag{6.19}
$$

for some $C > 0$, possibly depending on $L$. From (6.18) and (6.19) it follows that, for some $C > 0$,

$$
N\,\left|h(\eta_x)-\nu^N_\Lambda[h(\eta_x)]\right| \le C\left|\eta_x-\frac{N}{L}\right| + C\sqrt{N}. \tag{6.20}
$$

Moreover, from Condition (M) it follows that there is a constant $C > 0$ such that

$$
\left|\eta_x-\frac{N}{L}\right| \le C\left(\left|c(\eta_x)-\nu^N_\Lambda[c(\eta_x)]\right| + \sqrt{N}\right).
$$

Thus, for every $M > 0$ there exists $C > 0$ such that

$$
\left|c(\eta_x)-\nu^N_\Lambda[c(\eta_x)]\right| \le M \ \Longrightarrow\ N\,\left|h(\eta_x)-\nu^N_\Lambda[h(\eta_x)]\right| \le CM + C\sqrt{N}. \tag{6.21}
$$

It follows that, for all $M > 0$,

$$
\nu^N_\Lambda\left[N(h(\eta_x)-\nu^N_\Lambda[h(\eta_x)]) > CM + C\sqrt{N}\right] \le \nu^N_\Lambda\left[\left|c(\eta_x)-\nu^N_\Lambda[c(\eta_x)]\right| > M\right] \le \nu^N_\Lambda\left[c(\eta_x)-\nu^N_\Lambda[c(\eta_x)] > M\right] + \nu^N_\Lambda\left[\nu^N_\Lambda[c(\eta_x)]-c(\eta_x) > M\right]. \tag{6.22}
$$

Note that (6.22) is trivially true for $M \le 0$, so it holds for all $M\in\mathbb{R}$.

Now, take $t\in(0,1]$. We have
\[
\begin{split}
\nu_\Lambda^N\Big[ e^{tN(h(\eta_x)-\nu_\Lambda^N[h(\eta_x)])} \Big] &= t\int_{-\infty}^{+\infty} e^{tz}\, \nu_\Lambda^N\Big[ N\big(h(\eta_x)-\nu_\Lambda^N[h(\eta_x)]\big) > z \Big]\, dz \\
&\le t\int_{-\infty}^{+\infty} e^{tz}\, \nu_\Lambda^N\Big[ c(\eta_x)-\nu_\Lambda^N[c(\eta_x)] > \frac{z}{C}-\sqrt{N} \Big]\, dz 
+ t\int_{-\infty}^{+\infty} e^{tz}\, \nu_\Lambda^N\Big[ \nu_\Lambda^N[c(\eta_x)]-c(\eta_x) > \frac{z}{C}-\sqrt{N} \Big]\, dz \\
&= Ct\int_{-\infty}^{+\infty} e^{t(CM+C\sqrt{N})}\, \nu_\Lambda^N\big[ c(\eta_x)-\nu_\Lambda^N[c(\eta_x)] > M \big]\, dM 
+ Ct\int_{-\infty}^{+\infty} e^{t(CM+C\sqrt{N})}\, \nu_\Lambda^N\big[ \nu_\Lambda^N[c(\eta_x)]-c(\eta_x) > M \big]\, dM \\
&\le C e^{Ct\sqrt{N}} \Big( \nu_\Lambda^N\Big[ e^{Ct(c(\eta_x)-\nu_\Lambda^N[c(\eta_x)])} \Big] + \nu_\Lambda^N\Big[ e^{-Ct(c(\eta_x)-\nu_\Lambda^N[c(\eta_x)])} \Big] \Big) \\
&\le 2C e^{Ct\sqrt{N}} e^{ANt^2},
\end{split}
\]
where, in the last step, we have used (3.20). This completes the proof for the case $t\in(0,1]$. For $t\in[-1,0)$ we proceed similarly, after having replaced, in (6.22), $N(h(\eta_x)-\nu_\Lambda^N[h(\eta_x)])$ with its opposite.
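The first equality above is the layer-cake identity $\nu[e^{tX}] = t\int_{-\infty}^{+\infty} e^{tz}\,\nu[X>z]\,dz$, valid for $t>0$. As a numerical sanity check (not part of the proof), the following sketch verifies it exactly for a toy discrete random variable; the values, probabilities and $t$ are arbitrary illustrative choices:

```python
import math

# Toy discrete random variable X (values and probabilities are illustrative).
values = [-1.0, 0.5, 2.0]   # sorted increasingly
probs = [0.2, 0.5, 0.3]
t = 0.7

# Left side: E[exp(tX)].
lhs = sum(p * math.exp(t * v) for p, v in zip(probs, values))

# Right side: t * integral of e^{tz} P[X > z] dz. The tail P[X > z] is a step
# function, so the integral can be computed exactly piece by piece:
# on (-inf, v_0] the tail is 1 and t * integral of e^{tz} dz equals e^{t v_0};
# on (v_i, v_{i+1}] the tail is constant, equal to P[X > v_i].
def tail(z):
    return sum(p for p, v in zip(probs, values) if v > z)

rhs = math.exp(t * values[0])
for a, b in zip(values[:-1], values[1:]):
    rhs += tail(a) * (math.exp(t * b) - math.exp(t * a))

assert abs(lhs - rhs) < 1e-12
```

The same identity, applied with parameter $Ct$ after the change of variable $z = CM + C\sqrt{N}$, is what turns the two integrals into the two exponential moments.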

Proof of Proposition 3.6. We first prove inequality (3.22). Clearly, it is enough to show that, for all $x\in\Lambda$,
\[
\nu_\Lambda^N[f, c(\eta_x)]^2 \le C_L N\, \nu_\Lambda^N[f]\, \mathrm{Ent}_{\nu_\Lambda^N}(f)
\]
for some $C_L>0$. By the entropy inequality (3.19) and (3.20) we have, for $0<t\le 1$:
\[
\nu_\Lambda^N[f, c(\eta_x)]^2 \le 2\,\nu_\Lambda^N[f]^2 A_L^2 N^2 t^2 + \frac{2}{t^2}\,\mathrm{Ent}^2_{\nu_\Lambda^N}(f). \tag{6.23}
\]
Set
\[
t_*^2 = \frac{\mathrm{Ent}_{\nu_\Lambda^N}(f)}{N\,\nu_\Lambda^N[f]}. \tag{6.24}
\]
Suppose, first, that $t_*\le 1$. Then if we insert $t_*$ in (6.23) we get
\[
\nu_\Lambda^N[f, c(\eta_x)]^2 \le 2\,\nu_\Lambda^N[f] A_L^2 N\, \mathrm{Ent}_{\nu_\Lambda^N}(f) + 2N\,\nu_\Lambda^N[f]\, \mathrm{Ent}_{\nu_\Lambda^N}(f) =: C_L N\, \nu_\Lambda^N[f]\, \mathrm{Ent}_{\nu_\Lambda^N}(f). \tag{6.25}
\]
In the case $t_*>1$, we easily have
\[
\nu_\Lambda^N[f, c(\eta_x)]^2 \le C\,\nu_\Lambda^N[f]^2 N^2 \le C\,\nu_\Lambda^N[f]^2 N^2 t_*^2 = C N\, \nu_\Lambda^N[f]\, \mathrm{Ent}_{\nu_\Lambda^N}(f). \tag{6.26}
\]
By (6.25) and (6.26) the conclusion follows.
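The substitution of $t_*$ into (6.23) is elementary algebra; as a sanity check (not part of the argument), the sketch below, with arbitrary illustrative values for $\nu_\Lambda^N[f]$, the entropy, $A_L$ and $N$, confirms that the right-hand side of (6.23) evaluated at $t_*$ collapses to $(2A_L^2+2)\,N\,\nu_\Lambda^N[f]\,\mathrm{Ent}(f)$, which is the constant $C_L$ of (6.25):

```python
# Numerical check of the substitution leading to (6.25). The names below are
# illustrative stand-ins: nu_f plays the role of nu_Lambda^N[f], ent of the
# entropy, A of A_L. All values are arbitrary positive numbers.
nu_f, ent, A, N = 1.3, 0.7, 2.5, 50

t_star_sq = ent / (N * nu_f)           # (6.24); here t_star <= 1
bound = 2 * nu_f**2 * A**2 * N**2 * t_star_sq + 2 * ent**2 / t_star_sq
expected = 2 * A**2 * N * nu_f * ent + 2 * N * nu_f * ent  # C_L = 2 A^2 + 2

assert t_star_sq <= 1.0
assert abs(bound - expected) < 1e-9 * expected
```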

Let us now prove (3.23). As before, by (3.19) and (3.21), for $t\in(0,1]$:
\[
N^2\, \nu_\Lambda^N[f, h(\eta_x)]^2 \le \frac{2\,\nu_\Lambda^N[f]^2}{t^2}\Big[ \log^2 A_L + A_L^2 t^2 N + A_L^2 N^2 t^4 \Big] + \frac{2}{t^2}\,\mathrm{Ent}^2_{\nu_\Lambda^N}(f). \tag{6.27}
\]
Here we set
\[
t_*^2 = \frac{\mathrm{Ent}_{\nu_\Lambda^N}(f) \vee \nu_\Lambda^N[f]}{N\,\nu_\Lambda^N[f]},
\]
and proceed as in (6.25) and (6.26).

7 Local limit theorems

The rest of the paper is devoted to the proofs of all technical results that have been used in the previous sections. For the sake of generality, we state and prove all results in dimension $d\ge 1$.

Using the language of statistical mechanics, having defined the canonical measure $\nu_\Lambda^N$, for $\rho>0$ we consider the corresponding grand canonical measure
\[
\mu_\rho[\{\eta\}] := \frac{\alpha(\rho)^{\eta_\Lambda}\, \nu_\Lambda^{\eta_\Lambda}[\{\eta\}]}{\sum_{\xi\in\Omega_\Lambda} \alpha(\rho)^{\xi_\Lambda}\, \nu_\Lambda^{\xi_\Lambda}[\{\xi\}]} = \frac{1}{Z(\alpha(\rho))} \prod_{x\in\Lambda} \frac{\alpha(\rho)^{\eta_x}}{c(\eta_x)!}, \tag{7.1}
\]
where $\alpha(\rho)$ is chosen so that $\mu_\rho(\eta_x)=\rho$, $x\in\Lambda$, and $Z(\alpha(\rho))$ is the corresponding normalization. Clearly $\mu_\rho$ is a product measure with marginals given by (1.2). Monotonicity and the Inverse Function Theorem for analytic functions guarantee that $\alpha(\rho)$ is well defined and is an analytic function of $\rho\in[0,+\infty)$. We state here without proof some direct consequences of Conditions 2.1 and 2.2. The proofs of some of them can be found in [5].

Proposition 7.1 1. Let $\sigma^2(\rho) := \mu_\rho[\eta_x,\eta_x]$; then
\[
0 < \inf_{\rho>0} \frac{\sigma^2(\rho)}{\rho} \le \sup_{\rho>0} \frac{\sigma^2(\rho)}{\rho} < +\infty. \tag{7.2}
\]
2. Let $\alpha(\rho)$ be the function appearing in (7.1); then
\[
0 < \inf_{\rho>0} \frac{\alpha(\rho)}{\rho} \le \sup_{\rho>0} \frac{\alpha(\rho)}{\rho} < +\infty. \tag{7.3}
\]
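The definition of $\alpha(\rho)$ can be made concrete numerically. The sketch below is illustrative only (the truncation level, the bisection bracket and the sample rate functions are assumptions, not part of the paper): it computes the single-site partition function $Z(\alpha)=\sum_k \alpha^k/c(k)!$, where $c(k)! = \prod_{j=1}^k c(j)$, and inverts $\alpha\mapsto\mu_\alpha[\eta_x]$ by bisection, using monotonicity of the mean in the fugacity. For $c(n)=n$ the marginal is Poisson, so $\alpha(\rho)=\rho$ exactly:

```python
import math

def partition_and_mean(alpha, c, kmax=200):
    """Single-site partition function Z(alpha) = sum_k alpha^k / c(k)!
    and the corresponding mean, truncating the series at kmax."""
    w, Z, mean = 1.0, 1.0, 0.0
    for k in range(1, kmax):
        w *= alpha / c(k)      # w = alpha^k / c(k)!
        Z += w
        mean += k * w
    return Z, mean / Z

def alpha_of_rho(rho, c, hi=1e3):
    """Solve mu_rho[eta_x] = rho for the fugacity alpha by bisection
    (the mean is increasing in alpha)."""
    lo_a, hi_a = 0.0, hi
    for _ in range(200):
        mid = 0.5 * (lo_a + hi_a)
        _, m = partition_and_mean(mid, c)
        if m < rho:
            lo_a = mid
        else:
            hi_a = mid
    return 0.5 * (lo_a + hi_a)

# For c(n) = n the marginal is Poisson(alpha), so alpha(rho) = rho.
assert abs(alpha_of_rho(2.0, lambda n: n) - 2.0) < 1e-6
# A perturbed linear-growth rate (hypothetical example) still gives a finite,
# positive fugacity, in line with (7.3).
a = alpha_of_rho(2.0, lambda n: n + 0.5)
assert 0.5 < a < 10.0
```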

7.1 Local limit theorems for the grand canonical measure

The next two results are a form of local limit theorem for the density of $\eta_\Lambda$ under $\mu_\rho$. Define $p_\Lambda^\rho(n) := \mu_\rho[\eta_\Lambda = n]$ for $\rho>0$, $n\in\mathbb{N}$ and $\Lambda\subset\subset\mathbb{Z}^d$. The idea is to get a Poisson approximation of $p_\Lambda^{N/|\Lambda|}(n)$ for very small values of $N/|\Lambda|$, and to use the uniform local limit theorem (see Theorem 6.1 in [5]) for the other cases.

Lemma 7.2 For every $N_0\in\mathbb{N}\setminus\{0\}$ there exists $A_0>0$ such that
\[
\sup_{\substack{N\in\mathbb{N}\setminus\{0\},\; n\in\mathbb{N} \\ n\le N\le N_0}} \left| p_\Lambda^{N/|\Lambda|}(n) - \frac{N^n}{n!}\, e^{-N} \right| \le \frac{A_0}{|\Lambda|}
\]
for any $\Lambda\subset\subset\mathbb{Z}^d$.

Proof. Let $\rho := N/|\Lambda|$ and assume, without loss of generality, that $0\in\Lambda$. Notice that
\[
\mu_\rho[\eta_\Lambda=n] = \mu_\rho\Big[\eta_\Lambda=n \,\Big|\, \max_{x\in\Lambda}\eta_x \le 1\Big]\, \mu_\rho\Big[\max_{x\in\Lambda}\eta_x \le 1\Big] + \mu_\rho\Big[\eta_\Lambda=n \,\Big|\, \max_{x\in\Lambda}\eta_x > 1\Big]\, \mu_\rho\Big[\max_{x\in\Lambda}\eta_x > 1\Big].
\]
We begin by proving that
\[
\mu_\rho\Big[\max_{x\in\Lambda}\eta_x > 1\Big] = O(|\Lambda|^{-1}), \tag{7.4}
\]
uniformly in $0<N\le N_0$. Indeed
\[
\mu_\rho\Big[\max_{x\in\Lambda}\eta_x > 1\Big] = 1-\big(1-\mu_\rho[\eta_0>1]\big)^{|\Lambda|},
\]
and
\[
\mu_\rho[\eta_0>1] = \frac{1}{Z(\rho)} \sum_{k=2}^{+\infty} \frac{\alpha(\rho)^k}{c(k)!} = \frac{\alpha(\rho)^2}{Z(\rho)} \sum_{k=0}^{+\infty} \frac{\alpha(\rho)^k}{c(k+2)!} = \frac{\alpha(\rho)^2}{Z(\rho)} \sum_{k=0}^{+\infty} \frac{c(k)!}{c(k+2)!}\, \frac{\alpha(\rho)^k}{c(k)!}.
\]
Since $c(k)!/c(k+2)!$ is uniformly bounded, we have $\mu_\rho[\eta_0>1] \le B_1\alpha(\rho)^2 = O(|\Lambda|^{-2})$, uniformly in $0<N\le N_0$. Thus
\[
\big(1-\mu_\rho[\eta_0>1]\big)^{|\Lambda|} = \big(1-O(|\Lambda|^{-2})\big)^{|\Lambda|} = 1-O(|\Lambda|^{-1}),
\]
which establishes (7.4).

Now let
\[
\tilde\rho := \sum_{k=0}^{+\infty} k\, \mu_\rho\big[\eta_0=k \,\big|\, \eta_0\le 1\big].
\]
A trivial calculation shows that
\[
\tilde\rho = \frac{\mu_\rho[\eta_0 \mathbf{1}(\eta_0\le 1)]}{\mu_\rho[\eta_0\le 1]} = \frac{\mu_\rho[\eta_0]-\mu_\rho[\eta_0 \mathbf{1}(\eta_0>1)]}{\mu_\rho[\eta_0\le 1]} = \frac{\rho-\mu_\rho[\eta_0 \mathbf{1}(\eta_0>1)]}{\mu_\rho[\eta_0\le 1]}.
\]

Moreover
\[
\mu_\rho[\eta_0 \mathbf{1}(\eta_0>1)] = \frac{1}{Z(\rho)} \sum_{k=2}^{+\infty} \frac{k\,\alpha(\rho)^k}{c(k)!} = \frac{\alpha(\rho)^2}{Z(\rho)} \sum_{k=0}^{+\infty} \frac{(k+2)\,\alpha(\rho)^k}{c(k+2)!} \le \frac{B_2\,\alpha(\rho)^2}{Z(\rho)} \sum_{k=0}^{+\infty} \frac{(k+2)\,\alpha(\rho)^k}{c(k)!} = B_2\,\alpha(\rho)^2(\rho+2) = O(|\Lambda|^{-2}),
\]
and finally
\[
\tilde\rho = \frac{\rho+O(|\Lambda|^{-2})}{1+O(|\Lambda|^{-2})} = \rho+O(|\Lambda|^{-2}). \tag{7.5}
\]

Observe that for any $n\in\{0,\dots,|\Lambda|\}$ we have
\[
\mu_\rho\Big[\eta_\Lambda=n \,\Big|\, \max_{x\in\Lambda}\eta_x \le 1\Big] = \binom{|\Lambda|}{n}\, \tilde\rho^{\,n}\, (1-\tilde\rho)^{|\Lambda|-n}. \tag{7.6}
\]
This comes from the fact that the random variables $\{\eta_x : x\in\Lambda\}$, under the probability measure $\mu_\rho[\,\cdot\,|\max_{x\in\Lambda}\eta_x\le 1]$, are independent Bernoulli random variables with mean $\tilde\rho$.

The remaining part of the proof follows the classical argument of approximation of the binomial distribution with the Poisson distribution. Using (7.5) and (7.6), after some simple calculations we get
\[
\begin{split}
\mu_\rho\Big[\eta_\Lambda=n \,\Big|\, \max_{x\in\Lambda}\eta_x \le 1\Big] &= \binom{|\Lambda|}{n}\, \tilde\rho^{\,n}(1-\tilde\rho)^{|\Lambda|-n} \\
&= \frac{|\Lambda|!}{n!\,(|\Lambda|-n)!} \left[ \frac{N}{|\Lambda|}+O(|\Lambda|^{-2}) \right]^n \left[ 1-\frac{N}{|\Lambda|}+O(|\Lambda|^{-2}) \right]^{|\Lambda|-n} \\
&= \frac{1}{n!} \big[ N+O(|\Lambda|^{-1}) \big]^n \left[ 1-\frac{N}{|\Lambda|}+O(|\Lambda|^{-2}) \right]^{|\Lambda|} \times \frac{|\Lambda|(|\Lambda|-1)\cdots(|\Lambda|-n+1)}{|\Lambda|^n} \left[ 1-\frac{N}{|\Lambda|}+O(|\Lambda|^{-2}) \right]^{-n} \\
&= \frac{N^n}{n!}\, e^{-N} + O(|\Lambda|^{-1})
\end{split}
\]
uniformly in $0<N\le N_0$ and $0\le n\le N$. This proves that there exist positive constants $v_1$ and $B_3$ such that if $\Lambda\subset\subset\mathbb{Z}^d$ is such that $|\Lambda|>v_1$, then
\[
\left| p_\Lambda^{N/|\Lambda|}(n) - \frac{N^n}{n!}\, e^{-N} \right| \le \frac{B_3}{|\Lambda|}
\]
uniformly in $N\in\mathbb{N}\setminus\{0\}$ with $N\le N_0$ and $n\in\mathbb{N}$ with $n\le N$. The general case follows easily, because the set of $n\in\mathbb{N}$, $N\in\mathbb{N}\setminus\{0\}$ and $\Lambda\subset\subset\mathbb{Z}^d$ such that $n\le N\le N_0$, $|\Lambda|\le v_1$ and $0\in\Lambda$ is finite.
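The binomial-to-Poisson step can be checked numerically. In this sketch (illustrative; we take the idealized density $\tilde\rho = N/|\Lambda|$, ignoring the $O(|\Lambda|^{-2})$ correction of (7.5)), the distance between the $\mathrm{Bin}(|\Lambda|, N/|\Lambda|)$ and $\mathrm{Poisson}(N)$ weights indeed shrinks like $1/|\Lambda|$:

```python
import math

def binom_pmf(V, p, n):
    return math.comb(V, n) * p**n * (1 - p) ** (V - n)

def poisson_pmf(lam, n):
    return lam**n * math.exp(-lam) / math.factorial(n)

N = 3  # total particle number, playing the role of N <= N0
err = {}
for V in (50, 100, 200, 400):  # V plays the role of |Lambda|
    p = N / V                  # density rho = N/|Lambda|
    err[V] = max(abs(binom_pmf(V, p, n) - poisson_pmf(N, n)) for n in range(N + 1))

# Doubling |Lambda| roughly halves the error, consistent with O(1/|Lambda|).
for V in (50, 100, 200):
    assert err[2 * V] < 0.75 * err[V]
```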

Proposition 7.3 There exist $\rho_0>0$, $n_0>0$, $v_0>0$ and $A_0>0$ such that:

1. \[
\sup_{n\in\mathbb{N}} \left| \sqrt{\sigma^2(\rho)|\Lambda|}\; p_\Lambda^\rho(n) - \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(n-\rho|\Lambda|)^2}{2\sigma^2(\rho)|\Lambda|}} \right| \le \frac{A_0}{\sqrt{\sigma^2(\rho)|\Lambda|}} \tag{7.7}
\]
for any $\rho\le\rho_0$ and any $\Lambda\subset\subset\mathbb{Z}^d$ such that $\sigma^2(\rho)|\Lambda| \ge n_0$;

2. \[
\sup_{\rho>\rho_0,\; n\in\mathbb{N}} \left| \sqrt{\sigma^2(\rho)|\Lambda|}\; p_\Lambda^\rho(n) - \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(n-\rho|\Lambda|)^2}{2\sigma^2(\rho)|\Lambda|}} \right| \le \frac{A_0}{\sqrt{|\Lambda|}} \tag{7.8}
\]
for any $\Lambda\subset\subset\mathbb{Z}^d$ such that $|\Lambda| \ge v_0$.

Proof. This is a special case of the Local Limit Theorem for $\mu_\rho$ (see Theorem 6.1 in [5]).
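In the special case $c(n)=n$ the grand canonical measure is a product of $\mathrm{Poisson}(\rho)$ laws, so $\eta_\Lambda\sim\mathrm{Poisson}(\rho|\Lambda|)$ and $\sigma^2(\rho)=\rho$. The following sketch (with illustrative values of $\rho$ and $|\Lambda|$; not a proof, just a numerical illustration of the statement) checks the local limit approximation in that case:

```python
import math

# Special case c(n) = n: eta_Lambda ~ Poisson(rho * |Lambda|), sigma^2(rho) = rho.
rho, V = 2.0, 400          # illustrative density and volume |Lambda|
lam = rho * V              # mean of eta_Lambda
s = math.sqrt(lam)         # sqrt(sigma^2(rho) * |Lambda|)

def poisson_pmf(l, n):
    # computed via logs to avoid overflow for large n
    return math.exp(n * math.log(l) - l - math.lgamma(n + 1))

# sup_n | s * p(n) - phi((n - lam)/s) | should be small, of order 1/s,
# in the spirit of (7.7)-(7.8).
sup_err = max(
    abs(s * poisson_pmf(lam, n)
        - math.exp(-((n - lam) ** 2) / (2 * lam)) / math.sqrt(2 * math.pi))
    for n in range(0, 2 * int(lam))
)
assert sup_err < 1.0 / s
```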

We conclude this section with a bound on the tail of $p_\Lambda^\rho$, which will be used in the regimes not covered by Lemma 7.2 or Proposition 7.3.

Lemma 7.4 There exists a positive constant $A_0$ such that
\[
\frac{\rho|\Lambda|}{A_0(n+1)} \le \frac{p_\Lambda^\rho(n+1)}{p_\Lambda^\rho(n)} \le \frac{A_0\,\rho|\Lambda|}{n+1}
\]
for any $n\in\mathbb{N}$, $\Lambda\subset\subset\mathbb{Z}^d$ and $\rho>0$.

Proof. Notice that
\[
p_\Lambda^\rho(n+1) = \mu_\rho[\eta_\Lambda=n+1] = \frac{1}{n+1} \sum_{x\in\Lambda} \mu_\rho\big[\eta_x \mathbf{1}(\eta_\Lambda=n+1)\big]
\]
and, by the change of variable $\xi := \sigma-\delta_x$:
\[
\begin{split}
\mu_\rho\big[\eta_x \mathbf{1}(\eta_\Lambda=n+1)\big] &= \sum_{\sigma\in\Omega_\Lambda} \mu_\rho[\eta=\sigma]\, \sigma_x\, \mathbf{1}(\sigma_\Lambda=n+1) = \sum_{\sigma\in\Omega_\Lambda} \mu_\rho[\eta=\sigma]\, \sigma_x\, \mathbf{1}(\sigma_\Lambda=n+1,\; \sigma_x>0) \\
&= \sum_{\xi\in\Omega_\Lambda} \mu_\rho[\eta=\xi+\delta_x]\, (\xi_x+1)\, \mathbf{1}(\xi_\Lambda=n) = \sum_{\xi\in\Omega_\Lambda} \mu_\rho[\eta=\xi]\, \frac{\mu_\rho[\eta=\xi+\delta_x]}{\mu_\rho[\eta=\xi]}\, (\xi_x+1)\, \mathbf{1}(\xi_\Lambda=n) \\
&= \sum_{\xi\in\Omega_\Lambda} \mu_\rho[\eta=\xi]\, \frac{\alpha(\rho)(\xi_x+1)}{c(\xi_x+1)}\, \mathbf{1}(\xi_\Lambda=n) = \alpha(\rho)\, \mu_\rho\!\left[ \frac{\eta_x+1}{c(\eta_x+1)}\, \mathbf{1}(\eta_\Lambda=n) \right].
\end{split}
\]
This means that
\[
p_\Lambda^\rho(n+1) = \frac{\alpha(\rho)}{n+1} \sum_{x\in\Lambda} \mu_\rho\!\left[ \frac{\eta_x+1}{c(\eta_x+1)}\, \mathbf{1}(\eta_\Lambda=n) \right]. \tag{7.9}
\]
By (2.4) we know that there exists a positive constant $B_0$ such that
\[
B_0^{-1} \le \frac{c(k)}{k} \le B_0 \quad \text{for any } k\in\mathbb{N}\setminus\{0\}.
\]

Thus, by plugging these bounds in (7.9), we get
\[
\frac{\alpha(\rho)|\Lambda|\, p_\Lambda^\rho(n)}{B_0(n+1)} \le p_\Lambda^\rho(n+1) \le \frac{B_0\,\alpha(\rho)|\Lambda|\, p_\Lambda^\rho(n)}{n+1},
\]
and the conclusion follows from (7.3).
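Again in the $c(n)=n$ case, where $\alpha(\rho)=\rho$ and $\eta_\Lambda\sim\mathrm{Poisson}(\rho|\Lambda|)$, the ratio in Lemma 7.4 can be computed in closed form: it equals $\rho|\Lambda|/(n+1)$ exactly, so both bounds hold with $A_0=1$. A minimal numerical sketch with illustrative parameters:

```python
import math

def poisson_pmf(l, n):
    return math.exp(n * math.log(l) - l - math.lgamma(n + 1))

# c(n) = n case: eta_Lambda ~ Poisson(rho * |Lambda|), and the ratio of
# Lemma 7.4 is exactly rho*|Lambda|/(n+1), i.e. the bounds hold with A0 = 1.
rho, V = 1.5, 20   # illustrative density and volume
lam = rho * V
for n in range(1, 60):
    ratio = poisson_pmf(lam, n + 1) / poisson_pmf(lam, n)
    assert abs(ratio - lam / (n + 1)) < 1e-9 * ratio
```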

7.2 Gaussian estimates for the canonical measure

In this section we will prove some Gaussian bounds on $\nu_\Lambda^N[\eta_{\Lambda'}=\,\cdot\,]$, when the volumes $|\Lambda|$ and $|\Lambda'|$ are of the same order (typically it will be $|\Lambda'|/|\Lambda|=1/2$). These bounds are volume dependent (see Proposition 7.8 below) and so are of limited utility. However, they will be used to prove Proposition 7.9 which, in turn, is used in Section 8 to regularize $\nu_\Lambda^N[\eta_{\Lambda'}=\,\cdot\,]$ and prove uniform Gaussian estimates on it.

Assume $\Lambda'\subset\Lambda\subset\subset\mathbb{Z}^d$, $N\in\mathbb{N}\setminus\{0\}$, and consider the probability measure on $\{0,1,\dots,N\}$
\[
\nu_\Lambda^N[\eta_{\Lambda'}=n] = \frac{p_{\Lambda'}^\rho(n)\; p_{\Lambda\setminus\Lambda'}^\rho(N-n)}{p_\Lambda^\rho(N)}.
\]
Notice that $\nu_\Lambda^N[\eta_{\Lambda'}=\,\cdot\,]$ does not depend on $\rho>0$ (and depends on $\Lambda$ and $\Lambda'$ only through $|\Lambda|$ and $|\Lambda'|$). Indeed, a simple computation shows that
\[
\nu_\Lambda^N[\eta_{\Lambda'}=n] = \frac{\displaystyle \sum_{\sigma\in\Omega_{\Lambda'}} \frac{\mathbf{1}(\sigma_{\Lambda'}=n)}{\prod_{x\in\Lambda'} c(\sigma_x)!}\; \sum_{\xi\in\Omega_{\Lambda\setminus\Lambda'}} \frac{\mathbf{1}(\xi_{\Lambda\setminus\Lambda'}=N-n)}{\prod_{y\in\Lambda\setminus\Lambda'} c(\xi_y)!}}{\displaystyle \sum_{m=0}^{N}\; \sum_{\sigma\in\Omega_{\Lambda'}} \frac{\mathbf{1}(\sigma_{\Lambda'}=m)}{\prod_{x\in\Lambda'} c(\sigma_x)!}\; \sum_{\xi\in\Omega_{\Lambda\setminus\Lambda'}} \frac{\mathbf{1}(\xi_{\Lambda\setminus\Lambda'}=N-m)}{\prod_{y\in\Lambda\setminus\Lambda'} c(\xi_y)!}}
\]
for any $n\in\{0,1,\dots,N\}$. A particular case is when $\Lambda'\subset\Lambda\subset\subset\mathbb{Z}^d$ is such that $|\Lambda'|=|\Lambda\setminus\Lambda'|$. In this case we define
\[
\gamma_\Lambda^N(n) := \nu_\Lambda^N[\eta_{\Lambda'}=n] = \frac{p_{\Lambda'}^\rho(n)\; p_{\Lambda\setminus\Lambda'}^\rho(N-n)}{p_\Lambda^\rho(N)} = \frac{p_{\Lambda'}^\rho(n)\; p_{\Lambda'}^\rho(N-n)}{p_\Lambda^\rho(N)}. \tag{7.10}
\]
We begin by proving a simple result on the decay of the tails of $\gamma_\Lambda^N$.

Proposition 7.5 There exists a positive constant $A_0$ such that, for every $\Lambda\subset\subset\mathbb{Z}^d$ such that $|\Lambda|\ge 2$ and every $N\in\mathbb{N}\setminus\{0\}$,
\[
\frac{N-n}{A_0(n+1)} \le \frac{\gamma_\Lambda^N(n+1)}{\gamma_\Lambda^N(n)} \le \frac{A_0(N-n)}{n+1} \tag{7.11}
\]
for any $n\in\{0,\dots,N-1\}$.

Proof. Let $\Lambda'\subset\Lambda$ be a non-empty lattice set. Then by (7.10)
\[
\frac{\gamma_\Lambda^N(n+1)}{\gamma_\Lambda^N(n)} = \frac{p_{\Lambda'}^\rho(n+1)\; p_{\Lambda\setminus\Lambda'}^\rho(N-n-1)}{p_{\Lambda'}^\rho(n)\; p_{\Lambda\setminus\Lambda'}^\rho(N-n)},
\]
and (7.11) follows from Lemma 7.4.
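For $|\Lambda|=2$ and $c(n)=n$, the measure (7.10) can be computed explicitly: conditioning two independent $\mathrm{Poisson}(\rho)$ variables on their sum being $N$ gives $\gamma_\Lambda^N = \mathrm{Bin}(N,1/2)$, independently of $\rho$, and the ratio in (7.11) is exactly $(N-n)/(n+1)$, i.e. $A_0=1$ in this special case. A numerical sketch with illustrative parameters:

```python
import math

def poisson_pmf(l, n):
    return math.exp(n * math.log(l) - l - math.lgamma(n + 1))

def gamma_N(N, n, rho):
    # (7.10) with |Lambda| = 2, |Lambda'| = 1 and c(k) = k,
    # where p^rho on one site is Poisson(rho) and on Lambda is Poisson(2*rho).
    return poisson_pmf(rho, n) * poisson_pmf(rho, N - n) / poisson_pmf(2 * rho, N)

N = 30
for n in range(N + 1):
    g1, g2 = gamma_N(N, n, 0.7), gamma_N(N, n, 5.0)
    b = math.comb(N, n) / 2**N        # Bin(N, 1/2) pmf
    assert abs(g1 - g2) < 1e-9 * b    # independence of rho
    assert abs(g1 - b) < 1e-9 * b     # gamma equals Bin(N, 1/2)
for n in range(N):
    r = gamma_N(N, n + 1, 0.7) / gamma_N(N, n, 0.7)
    assert abs(r - (N - n) / (n + 1)) < 1e-8 * max(r, 1.0)  # (7.11), A0 = 1
```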

Next we prove Gaussian bounds on $\gamma_\Lambda^N$. This is a very technical argument. We begin with the case $|\Lambda|=2$.

Lemma 7.6 Assume that $|\Lambda|=2$, and define $\bar N := \lceil N/2 \rceil$ for $N\in\mathbb{N}\setminus\{0\}$. There exists a positive constant $A_0$ such that
\[
\frac{1}{A_0\sqrt{\bar N}}\, e^{-\frac{A_0(n-\bar N)^2}{\bar N}} \le \gamma_\Lambda^N(n) \le \frac{A_0}{\sqrt{\bar N}}\, e^{-\frac{(n-\bar N)^2}{A_0 \bar N}} \tag{7.12}
\]
for any $n\in\{0,\dots,N\}$.

Proof. We split the proof in several steps, for the sake of clarity.

Step 1. There exists $A_1>0$ such that for any $N\in\mathbb{N}\setminus\{0\}$,
\[
\log\gamma_\Lambda^N(n-1) - \log\gamma_\Lambda^N(n) \le -\frac{\bar N-n}{A_1\bar N} + \frac{A_1}{N} \tag{7.13}
\]
for any $n\in\{1,\dots,\bar N-1\}$, and
\[
\log\gamma_\Lambda^N(n+1) - \log\gamma_\Lambda^N(n) \le -\frac{n-\bar N}{A_1\bar N} + \frac{A_1}{N} \tag{7.14}
\]
for any $n\in\{\bar N,\dots,N-1\}$.

Proof of Step 1. Assume $n\in\{\bar N,\dots,N-1\}$; the other case will follow by a symmetry argument, since $\gamma_\Lambda^N(n) = \gamma_\Lambda^N(N-n)$. By (7.10) we obtain, in the case $|\Lambda|=2$, that
\[
\frac{\gamma_\Lambda^N(n+1)}{\gamma_\Lambda^N(n)} = \frac{c(N-n)}{c(n+1)}.
\]
If $n+1-(N-n)\le k_0$, where $k_0$ is the constant which appears in Condition 2.2, then by Condition 2.1 and (2.4) there exists a constant $B_1>0$ such that
\[
\frac{c(N-n)}{c(n+1)} = 1 - \frac{c(n+1)-c(N-n)}{c(n+1)} \le 1 + \frac{|c(n+1)-c(N-n)|}{c(n+1)} \le 1 + \frac{a_1 k_0 B_1}{n+1}
\]
and, since $\log(1+x)\le x$ and $n\ge\bar N$, there exists a constant $B_2>0$ such that
\[
\log\gamma_\Lambda^N(n+1) - \log\gamma_\Lambda^N(n) \le \frac{a_1 k_0 B_1}{n+1} \le \frac{B_2}{N}.
\]
Since $n+1-(N-n)\le k_0$ we have that $n-\bar N \le (k_0-1)/2$, which implies $(n-\bar N)/\bar N \le (k_0-1)/N$. Thus
\[
\frac{B_2}{N} \le \frac{B_2}{N} + \frac{k_0-1}{N} - \frac{n-\bar N}{\bar N} \le \frac{B_3}{N} - \frac{n-\bar N}{\bar N}
\]
and so
\[
\log\gamma_\Lambda^N(n+1) - \log\gamma_\Lambda^N(n) \le \frac{B_3}{N} - \frac{n-\bar N}{\bar N}
\]
in this case.

in this case. Assume now that n+ 1−(N −n) > k0; more precisely assume that rk0 <

n+ 1−(N −n)≤(r+ 1)k0 for some r∈N\ {0}. Then

logγΛN(n+ 1)−logγΛN(n) = logc(N−n)−logc(n+ 1) =

r

X

s=1

(28)

For any $s\in\{1,2,\dots,r\}$, by Condition 2.2 and the fact that $\log(1-x)\le -x$, we obtain
\[
\begin{split}
\log c(N-n+k_0(s-1)) - \log c(N-n+k_0 s) &= \log\left[ 1 - \frac{c(N-n+k_0 s)-c(N-n+k_0(s-1))}{c(N-n+k_0 s)} \right] \\
&\le -\frac{c(N-n+k_0 s)-c(N-n+k_0(s-1))}{c(N-n+k_0 s)} \le -\frac{a_2}{c(N-n+k_0 s)}.
\end{split}
\]
By (2.4) we know that $c(N-n+k_0 s) \le a_1(N-n+k_0 s)$, and because $N-n+k_0 s \le N-n+k_0 r \le n+1$ we get
\[
\log c(N-n+k_0(s-1)) - \log c(N-n+k_0 s) \le -\frac{a_2}{a_1(n+1)}. \tag{7.16}
\]
Furthermore, by Condition 2.1 and the fact that $\log(1-x)\le -x$, we obtain
\[
\log c(N-n+k_0 r) - \log c(n+1) = \log\left[ 1 - \frac{c(n+1)-c(N-n+k_0 r)}{c(n+1)} \right] \le -\frac{c(n+1)-c(N-n+k_0 r)}{c(n+1)} \le \frac{|c(n+1)-c(N-n+k_0 r)|}{c(n+1)} \le \frac{a_1}{c(n+1)},
\]
and by (2.4) we have that there exists $B_4>0$ such that $c(n+1)\ge B_4^{-1}(n+1)$. Thus, since $n+1>\bar N$, we have
\[
\log c(N-n+k_0 r) - \log c(n+1) \le \frac{2a_1 B_4}{N}. \tag{7.17}
\]

By plugging the bounds (7.17) and (7.16) into (7.15) we obtain
\[
\log\gamma_\Lambda^N(n+1) - \log\gamma_\Lambda^N(n) \le -\frac{r a_2}{a_1(n+1)} + \frac{2a_1 B_4}{N}. \tag{7.18}
\]
Recalling that $(r+1)k_0 \ge n+1-(N-n) = 2n-N+1$, $r\ge 1$ and $k_0\ge 1$, we have $2rk_0 \ge (r+1)k_0 \ge 2n-N$, so that
\[
\frac{r}{n+1} \ge \frac{2n-N}{2k_0(n+1)} \ge \frac{2n-N}{2k_0 N} \ge \frac{n-\bar N}{2k_0\bar N},
\]
which together with (7.18) completes the proof of (7.14).

Step 2. There exists $A_2>0$ such that if $N\in\mathbb{N}\setminus\{0\}$, then
\[
\log\gamma_\Lambda^N(n-1) - \log\gamma_\Lambda^N(n) \ge -A_2\,\frac{\bar N-n}{\bar N} - \frac{A_2}{N} \tag{7.19}
\]
for any $n\in\{\lceil N/4\rceil,\dots,\bar N-1\}$, and
\[
\log\gamma_\Lambda^N(n+1) - \log\gamma_\Lambda^N(n) \ge -A_2\,\frac{n-\bar N}{\bar N} - \frac{A_2}{N} \tag{7.20}
\]
for any $n\in\{\bar N,\dots,\lfloor 3N/4\rfloor\}$.

Proof of Step 2. Assume $n\in\{\bar N,\dots,\lfloor 3N/4\rfloor\}$; the other case again will follow by symmetry, as in Step 1. Notice that
\[
\log\gamma_\Lambda^N(n+1) - \log\gamma_\Lambda^N(n) = \log c(N-n) - \log c(n+1) = \sum_{s=1}^{2n-N+1} \big[ \log c(N-n+s-1) - \log c(N-n+s) \big]. \tag{7.21}
\]
Moreover
\[
\log c(N-n+s-1) - \log c(N-n+s) = \log\left[ 1 - \frac{c(N-n+s)-c(N-n+s-1)}{c(N-n+s)} \right] \ge \log\left[ 1 - \frac{|c(N-n+s)-c(N-n+s-1)|}{c(N-n+s)} \right];
\]
by Condition 2.1 and (2.4) there exists $B_1>0$ such that
\[
\frac{|c(N-n+s)-c(N-n+s-1)|}{c(N-n+s)} \le \frac{B_1}{N-n+s} \le \frac{B_1}{N-n} \le \frac{4B_1}{N},
\]
where we used the fact that $n\le 3N/4$; thus
\[
\log c(N-n+s-1) - \log c(N-n+s) \ge \log\left[ 1 - \frac{|c(N-n+s)-c(N-n+s-1)|}{c(N-n+s)} \right] \ge \log\left( 1 - \frac{4B_1}{N} \right).
\]
Since $\log(1-x)\ge -2x$ for $x\in[0,1/2]$, then
\[
\log\left( 1 - \frac{4B_1}{N} \right) \ge -\frac{8B_1}{N}
\]
for $N\ge n_1 := 8B_1$. Thus
\[
\log c(N-n+s-1) - \log c(N-n+s) \ge -\frac{8B_1}{N}.
\]
By plugging this bound into (7.21) we obtain
\[
\log\gamma_\Lambda^N(n+1) - \log\gamma_\Lambda^N(n) \ge -8B_1\,\frac{2n-N+1}{N} \ge -16B_1\,\frac{n-\bar N}{\bar N} - \frac{16B_1}{N},
\]
which completes the proof of (7.20) in the case $N\ge n_1$. The general case is obtained by a finiteness argument.

Step 3. There exists $A_3>0$ such that for any $N\in\mathbb{N}\setminus\{0\}$,
\[
\frac{\gamma_\Lambda^N(n)}{\gamma_\Lambda^N(\bar N)} \le A_3\, e^{-\frac{(n-\bar N)^2}{A_3\bar N}} \tag{7.22}
\]
for any $n\in\{0,\dots,N\}$.

Proof of Step 3. Assume that $n\in\{\bar N+1,\dots,N\}$. By (7.14) there exists $B_1>0$ such that
\[
\log\gamma_\Lambda^N(n) - \log\gamma_\Lambda^N(\bar N) = \sum_{k=\bar N}^{n-1} \big[ \log\gamma_\Lambda^N(k+1) - \log\gamma_\Lambda^N(k) \big] \le \sum_{k=\bar N}^{n-1} \left( \frac{B_1}{N} - \frac{k-\bar N}{B_1\bar N} \right). \tag{7.23}
\]
Using the fact that $n-\bar N\le N/2$, and some elementary computation, we obtain
\[
\begin{split}
\sum_{k=\bar N}^{n-1} \left( \frac{B_1}{N} - \frac{k-\bar N}{B_1\bar N} \right) &\le \frac{B_1(n-\bar N)}{N} - \frac{1}{B_1\bar N} \sum_{k=1}^{n-\bar N-1} k \le \frac{B_1}{2} - \frac{(n-\bar N)(n-\bar N-1)}{B_1 N} \\
&= \frac{B_1}{2} - \frac{(n-\bar N)^2}{B_1 N} + \frac{n-\bar N}{B_1 N} \le B_1 - \frac{(n-\bar N)^2}{B_1 N},
\end{split}
\]
which plugged into (7.23) implies (7.22) for $n\in\{\bar N+1,\dots,N\}$. The case $n\in\{0,\dots,\bar N-1\}$ is obtained by symmetry, since $\gamma_\Lambda^N(n)=\gamma_\Lambda^N(N-n)$.
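In the explicit case $|\Lambda|=2$, $c(n)=n$, where $\gamma_\Lambda^N(n)=\binom{N}{n}2^{-N}$, the two-sided Gaussian bound (7.12) can be verified numerically; the choice $A_0=10$ below is an illustrative constant, not the optimal one:

```python
import math

N = 200
Nbar = (N + 1) // 2  # ceil(N/2)
A0 = 10.0            # illustrative constant for (7.12), not optimal
for n in range(N + 1):
    g = math.comb(N, n) / 2**N   # gamma_Lambda^N(n) for |Lambda|=2, c(k)=k
    lower = math.exp(-A0 * (n - Nbar) ** 2 / Nbar) / (A0 * math.sqrt(Nbar))
    upper = A0 * math.exp(-((n - Nbar) ** 2) / (A0 * Nbar)) / math.sqrt(Nbar)
    assert lower <= g <= upper
```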
