ELECTRONIC COMMUNICATIONS in PROBABILITY
ALMOST SURE LIMIT THEOREM FOR THE MAXIMA OF STRONGLY
DEPENDENT GAUSSIAN SEQUENCES
FUMING LIN^1
Department of Mathematics, Sichuan University of Science and Engineering, Huixin Road, Zigong 643000, Sichuan, China.
email: [email protected]
^1 Research supported by the Scientific Research Fund of Sichuan University of Science & Engineering under grant 2007RZ014.
Submitted December 2, 2008, accepted in final form April 27, 2009.
AMS 2000 Subject classification: 60F05; 62E20; 62F12; 62M10
Keywords: Almost sure central limit theorem, Strongly dependent sequence, Logarithmic average
Abstract
In this paper, we prove an almost sure limit theorem for the maxima of strongly dependent Gaussian sequences under some mild conditions. The result extends the weakly dependent case treated by E. Csáki and K. Gonchigdanzan.
1 Introduction and main result
In past decades, the almost sure central limit theorem (ASCLT) has been studied more and more deeply for both independent and dependent random variables. Cheng et al. [CPQ98], Fahrner and Stadtmüller [FS98] and Berkes and Csáki [BC01] considered the ASCLT for the maximum of i.i.d. random variables. An influential work is Csáki and Gonchigdanzan [CG02], which proved an almost sure limit theorem for the maximum of a stationary weakly dependent sequence.
Theorem A. Let $X_1, X_2, \dots$ be a standardized stationary Gaussian sequence with $r_n=\mathrm{Cov}(X_1,X_{n+1})$ satisfying $r_n\log n(\log\log n)^{1+\epsilon}=O(1)$ as $n\to\infty$. Let $M_k=\max_{i\le k}X_i$. If $a_n=(2\log n)^{1/2}$ and $b_n=(2\log n)^{1/2}-\frac{1}{2}(2\log n)^{-1/2}(\log\log n+\log(4\pi))$, then
\[
\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\,I\bigl(a_k(M_k-b_k)\le x\bigr)=\exp(-e^{-x})\quad \text{a.s.,}\tag{1}
\]
where $I$ is the indicator function.
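As a purely illustrative check of (1) in the simplest admissible case (i.i.d. standard normals, so $r_n=0$), the following Python sketch computes the logarithmic average on the left-hand side along one simulated path and compares it with $\exp(-e^{-x})$; the path length, seed and variable names are our own choices, and since the convergence is only logarithmic, rough agreement is all one should expect.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_average(x, n=200_000):
    """Empirical left-hand side of (1) along one i.i.d. N(0,1) path of length n."""
    xi = rng.standard_normal(n)
    M = np.maximum.accumulate(xi)                  # running maxima M_k
    k = np.arange(1, n + 1)
    logk = np.log(k)
    valid = k >= 2                                 # skip k = 1 to avoid log log 1
    a = np.sqrt(2 * logk[valid])                   # a_k
    b = a - (np.log(logk[valid]) + np.log(4 * np.pi)) / (2 * a)   # b_k
    ind = a * (M[valid] - b) <= x
    return np.sum(ind / k[valid]) / np.log(n)

x = 1.0
print("empirical average :", log_average(x))
print("exp(-e^{-x})      :", np.exp(-np.exp(-x)))
```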
Shouquan Chen and Zhengyan Lin [CL06] extended the results in [CG02] to the non-stationary case.
Leadbetter et al. [LLR83] showed the following theorem.
Theorem B. Let $X_1, X_2, \dots$ be a standardized stationary Gaussian sequence with $r_n=\mathrm{Cov}(X_1,X_{n+1})$ and $M_n=\max_{1\le i\le n}X_i$. Let $a_n=(2\log n)^{1/2}$ and $b_n=(2\log n)^{1/2}-\frac{1}{2}(2\log n)^{-1/2}(\log\log n+\log(4\pi))$. If $r_n\log n\to r>0$, then
\[
\lim_{n\to\infty}P\bigl(a_n(M_n-b_n)\le x\bigr)=\int_{-\infty}^{\infty}\exp\bigl(-e^{-x-r+\sqrt{2r}\,z}\bigr)\phi(z)\,dz,\tag{2}
\]
where here and in the sequel $\phi$ is the standard normal density.
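The limit in (2) is a normal mixture of Gumbel distribution functions. The short sketch below (the grid of $r$ values and function names are our own) evaluates the integral numerically with scipy and confirms that it collapses to the Gumbel value $\exp(-e^{-x})$ when $r=0$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def limit_cdf(x, r):
    """Right-hand side of (2) evaluated numerically."""
    integrand = lambda z: np.exp(-np.exp(-x - r + np.sqrt(2 * r) * z)) * norm.pdf(z)
    # phi has negligible mass outside (-10, 10), so a finite range suffices
    value, _ = quad(integrand, -10, 10)
    return value

x = 0.5
for r in (0.0, 0.5, 1.0, 2.0):
    print(f"r = {r:3.1f}:  limit = {limit_cdf(x, r):.6f}")
print("Gumbel value exp(-e^{-x}) =", np.exp(-np.exp(-x)))
```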
In this paper, we consider the ASCLT version of (2). The following theorem is useful in our proof.

Theorem C. [Leadbetter et al., 1983, Theorem 4.2.1, Normal Comparison Lemma] Suppose $X_1,X_2,\dots,X_n$ are standard normal variables with covariance matrix $\Lambda^1=(\Lambda^1_{ij})$, and $Y_1,Y_2,\dots,Y_n$ similarly with covariance matrix $\Lambda^0=(\Lambda^0_{ij})$, and let $\rho_{ij}:=\max(|\Lambda^1_{ij}|,|\Lambda^0_{ij}|)$, assuming that $\max_{i\ne j}\rho_{ij}=:\delta<1$. Further, let $u_1,\dots,u_n$ be real numbers. Then
\[
\bigl|P(X_j\le u_j,\ j=1,\dots,n)-P(Y_j\le u_j,\ j=1,\dots,n)\bigr|
\le K_1\sum_{1\le i<j\le n}|\Lambda^1_{ij}-\Lambda^0_{ij}|\exp\Bigl(-\frac{u_i^2+u_j^2}{2(1+\rho_{ij})}\Bigr)\tag{3}
\]
with some positive constant $K_1$ depending only on $\delta$.
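To see the Normal Comparison Lemma at work on a toy example (matrices, levels and names are our own choices; the constant $K_1$ is not specified by the lemma, so only a qualitative comparison is meaningful), the following sketch computes the left-hand side of (3) via scipy's multivariate normal CDF and the sum on the right without $K_1$.

```python
import numpy as np
from scipy.stats import multivariate_normal

n = 4
u = np.full(n, 1.5)                                   # levels u_1, ..., u_n

idx = np.arange(n)
lam1 = 0.4 ** np.abs(np.subtract.outer(idx, idx))     # AR(1)-type correlations
lam0 = np.eye(n)                                      # independent comparison variables

p1 = multivariate_normal(mean=np.zeros(n), cov=lam1).cdf(u)
p0 = multivariate_normal(mean=np.zeros(n), cov=lam0).cdf(u)

rho = np.maximum(np.abs(lam1), np.abs(lam0))
comparison_sum = sum(
    abs(lam1[i, j] - lam0[i, j])
    * np.exp(-(u[i] ** 2 + u[j] ** 2) / (2 * (1 + rho[i, j])))
    for i in range(n) for j in range(i + 1, n)
)

print("|P(X <= u) - P(Y <= u)| :", abs(p1 - p0))
print("sum in (3) without K_1  :", comparison_sum)
```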
Throughout this paper, $\xi_1,\xi_2,\dots$ is a stationary dependent Gaussian sequence, $M_n=\max_{1\le i\le n}\xi_i$ and $M_{k,n}=\max_{k+1\le i\le n}\xi_i$. Let $r_n=\mathrm{Cov}(\xi_1,\xi_{n+1})$. If
\[
r_n\log n\to r\ge 0,\quad \text{as } n\to\infty,\tag{4}
\]
the sequence $\xi_1,\xi_2,\dots$ is called dependent: weakly dependent for $r=0$ and strongly dependent for $r>0$. Let
\[
\rho_n=\frac{r}{\log n},\quad r \text{ defined in (4)}.\tag{5}
\]
In this paper, a very natural and mild assumption is
\[
|r_n-\rho_n|\log n(\log\log n)^{1+\epsilon}=O(1).\tag{6}
\]
We mainly consider the ASCLT of the maximum of a stationary Gaussian sequence satisfying (4), under the mild condition (6), which is also crucial for other versions of the ASCLT, such as that for the maximum of a non-stationary strongly dependent sequence and that for functions of the maximum. In the sequel, $a=O(b)$ is denoted by $a\ll b$, and $C$ is a constant which may change from line to line. The main result is as follows.
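Condition (6) only asks that the actual covariances track the reference values $\rho_n=r/\log n$ closely enough. As a hypothetical illustration (the perturbed sequence below is our own and not from the paper), the sketch checks (6) numerically for $r_n=r/\log n+n^{-1/2}$.

```python
import numpy as np

r, eps = 0.5, 0.1
n = np.arange(16, 10**7, 9973)                 # indices large enough that log log n > 0
rho = r / np.log(n)                            # reference values (5)
r_n = rho + n ** (-0.5)                        # hypothetical covariance sequence

lhs = np.abs(r_n - rho) * np.log(n) * np.log(np.log(n)) ** (1 + eps)
print("sup over the grid:", lhs.max())         # bounded (in fact tends to 0), so (6) holds
```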
Theorem. Let $\{\xi_n\}$ be a sequence of stationary standard Gaussian random variables with covariances $r_{ij}=r_{|j-i|}$ satisfying (4) and (6). Let $M_k=\max_{i\le k}\xi_i$, and let $a_n$ and $b_n$ be defined as in Theorem A. Then
\[
\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\,I\bigl(a_k(M_k-b_k)\le x\bigr)=\int_{-\infty}^{+\infty}\exp\bigl(-e^{-x-r+\sqrt{2r}\,z}\bigr)\phi(z)\,dz\quad \text{a.s.}\tag{7}
\]
Remark 1. When $r=0$, the Theorem clearly reduces to Theorem A. When $r>0$, $\xi_1,\xi_2,\dots$ is strongly dependent; we mainly focus on the proof of the Theorem for this case.
2 Auxiliary lemmas
In this section, we present and prove some lemmas which are useful in our proof of the main result.
Lemma 2.1. Assume $|r_n-\rho_n|\log n(\log\log n)^{1+\epsilon}=O(1)$. Let the constants $u_n$ be such that $n(1-\Phi(u_n))$ is bounded, where $\Phi$ is the standard normal distribution function. Then
sup
Then it is sufficient to prove (8) for the sequence $\{v_n\}$. Using the well-known fact
\[
1-\Phi(x)\sim\frac{\phi(x)}{x},\qquad x\to\infty,\tag{11}
\]
we can write (12). Using (12), we obtain the corresponding decomposition. Since $1+\alpha-2/(1+\sigma(0))<0$, we know that $T_1\ll n^{-\delta}$ for some $\delta>0$, uniformly for $1\le k\le n$. For the remaining term, arguing as in the proof of Lemma 6.4.1 in Leadbetter et al. [LLR83], it can be shown that it splits into two parts. For the first term on the right-hand side, using (6), we obtain the required bound; according to Leadbetter et al. [LLR83] (page 135), the second term can be bounded in the same way. The bound (9) follows similarly. The proof is completed.
Lemma 2.2. Let $\tilde\xi_1,\tilde\xi_2,\dots,\tilde\xi_n$ be standard stationary Gaussian variables with constant covariance $\rho_n=r/\log n$, and let $\xi_1,\xi_2,\dots,\xi_n$ satisfy the conditions of the Theorem. Denote $\tilde M_n=\max_{i\le n}\tilde\xi_i$ and $M_n=\max_{i\le n}\xi_i$. Assume that $n(1-\Phi(u_n))$ is bounded and (6) is satisfied. Then
\[
|E(I(M_n\le u_n)-I(\tilde M_n\le u_n))|\ll(\log\log n)^{-(1+\epsilon)}.\tag{20}
\]
Lemma 2.3. Let $\eta_1,\eta_2,\dots$ be a sequence of bounded random variables. If
\[
\mathrm{Var}\Bigl(\sum_{k=1}^{n}\frac{1}{k}\eta_k\Bigr)\ll\log^2 n(\log\log n)^{-(1+\epsilon)}\quad\text{for some }\epsilon>0,\tag{21}
\]
then
\[
\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}(\eta_k-E\eta_k)=0\quad\text{a.s.}\tag{22}
\]
Proof. The proof can be found in Csáki and Gonchigdanzan [CG02].
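Lemma 2.3 is the usual variance criterion for almost sure logarithmic averaging. A minimal simulation sketch (our own toy choice of $\eta_k$): for bounded i.i.d. centred $\eta_k$, $\mathrm{Var}\bigl(\sum_{k\le n}\eta_k/k\bigr)$ is bounded, so (21) holds trivially, and the logarithmic average in (22) should drift to zero along a single path.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
eta = rng.uniform(-1.0, 1.0, size=n)           # bounded i.i.d. with E eta_k = 0
k = np.arange(1, n + 1)

partial = np.cumsum(eta / k)                   # sum_{k <= m} (eta_k - E eta_k) / k
for m in (10**3, 10**4, 10**5, 10**6):
    print(m, partial[m - 1] / np.log(m))       # logarithmic average in (22), drifting to 0
```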
3 Proof of main result
Proof of the Theorem. With $a_n=(2\log n)^{1/2}$ and $b_n=(2\log n)^{1/2}-\frac{1}{2}(2\log n)^{-1/2}(\log\log n+\log(4\pi))$, the levels $u_n=x/a_n+b_n$ satisfy $n(1-\Phi(u_n))<C$. Under the assumptions, we first show that
\[
\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\bigl(I(M_k\le u_k)-P(M_k\le u_k)\bigr)=0\quad\text{a.s.}\tag{23}
\]
By Lemma 2.3, it is sufficient to prove that
\[
\mathrm{Var}\Bigl(\sum_{k=1}^{n}\frac{1}{k}I(M_k\le u_k)\Bigr)\ll\log^2 n(\log\log n)^{-(1+\epsilon)}\quad\text{for some }\epsilon>0.\tag{24}
\]
Let $\zeta,\zeta_1,\zeta_2,\dots$ be independent standard normal variables. Obviously, $(1-\rho_k)^{1/2}\zeta_1+\rho_k^{1/2}\zeta,\ (1-\rho_k)^{1/2}\zeta_2+\rho_k^{1/2}\zeta,\dots$ have constant covariance $\rho_k=r/\log k$. Define
\[
M_k(\rho_k)=\max_{1\le i\le k}\bigl((1-\rho_k)^{1/2}\zeta_i+\rho_k^{1/2}\zeta\bigr)=(1-\rho_k)^{1/2}\max(\zeta_1,\zeta_2,\dots,\zeta_k)+\rho_k^{1/2}\zeta=:(1-\rho_k)^{1/2}M_k(0)+\rho_k^{1/2}\zeta.
\]
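The exchangeable comparison sequence $(1-\rho_k)^{1/2}\zeta_i+\rho_k^{1/2}\zeta$ is easy to realize directly. The sketch below (sample sizes and names are ours) confirms empirically that its pairwise covariances equal $\rho_k$ and that its maximum satisfies the displayed identity exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
r, k = 0.5, 1000
rho = r / np.log(k)

n_rep = 5000
zeta_common = rng.standard_normal(n_rep)               # the shared zeta
zeta = rng.standard_normal((n_rep, k))                 # zeta_1, ..., zeta_k
xi = np.sqrt(1 - rho) * zeta + np.sqrt(rho) * zeta_common[:, None]

# pairwise covariance of two fixed coordinates should be rho_k
print("empirical covariance:", np.cov(xi[:, 0], xi[:, 1])[0, 1], "  rho_k:", rho)

# the maximum splits exactly as in the displayed identity
lhs = xi.max(axis=1)
rhs = np.sqrt(1 - rho) * zeta.max(axis=1) + np.sqrt(rho) * zeta_common
print("max discrepancy in the identity:", np.abs(lhs - rhs).max())
```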
Using the well-known $c_2$-inequality, the left-hand side of (24) can be written as
\[
\mathrm{Var}\Bigl(\sum_{k=1}^{n}\frac{1}{k}I(M_k\le u_k)-\sum_{k=1}^{n}\frac{1}{k}I(M_k(\rho_k)\le u_k)+\sum_{k=1}^{n}\frac{1}{k}I(M_k(\rho_k)\le u_k)\Bigr)
\]
\[
\le 2\biggl(\mathrm{Var}\Bigl(\sum_{k=1}^{n}\frac{1}{k}I(M_k(\rho_k)\le u_k)\Bigr)+\mathrm{Var}\Bigl(\sum_{k=1}^{n}\frac{1}{k}I(M_k\le u_k)-\sum_{k=1}^{n}\frac{1}{k}I(M_k(\rho_k)\le u_k)\Bigr)\biggr)=:2(L_1+L_2).
\]
Combining (25), (26) and (27), we get $L_1\ll\log^2 n(\log\log n)^{-(1+\epsilon)}$. Write $L_2$ as
\[
\mathrm{Var}\Bigl(\sum_{k=1}^{n}\frac{1}{k}\bigl(I(M_k(\rho_k)\le u_k)-I(M_k\le u_k)\bigr)\Bigr)
\le E\Bigl(\sum_{k=1}^{n}\frac{1}{k}\bigl(I(M_k(\rho_k)\le u_k)-I(M_k\le u_k)\bigr)\Bigr)^2
\]
\[
\le E\sum_{k=1}^{n}\frac{1}{k^2}\bigl(I(M_k(\rho_k)\le u_k)-I(M_k\le u_k)\bigr)^2
+2\sum_{1\le i<j\le n}\frac{\bigl|E\bigl((I(M_i(\rho_i)\le u_i)-I(M_i\le u_i))(I(M_j(\rho_j)\le u_j)-I(M_j\le u_j))\bigr)\bigr|}{ij}
=:J_1+J_2.\tag{28}
\]
Obviously $J_1<\infty$. To estimate $J_2$, using Lemma 2.2 we have
\[
\bigl|E\bigl((I(M_i(\rho_i)\le u_i)-I(M_i\le u_i))(I(M_j(\rho_j)\le u_j)-I(M_j\le u_j))\bigr)\bigr|
\le\bigl|E\bigl(I(M_j(\rho_j)\le u_j)-I(M_j\le u_j)\bigr)\bigr|\ll(\log\log j)^{-(1+\epsilon)}.
\]
So
\[
J_2\ll\sum_{j=3}^{n}\frac{1}{j(\log\log j)^{1+\epsilon}}\sum_{i=1}^{j-1}\frac{1}{i}\ll\sum_{j=3}^{n}\frac{\log j}{j(\log\log j)^{1+\epsilon}}\ll\log n\sum_{j=3}^{n}\frac{1}{j(\log\log j)^{1+\epsilon}}\ll(\log n)^{2}(\log\log n)^{-(1+\epsilon)}.\tag{29}
\]
Combining (28) and (29) yields $L_2\ll\log^2 n(\log\log n)^{-(1+\epsilon)}$. Hence (24) holds, and (23) follows from Lemma 2.3.
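The chain of estimates in (29) rests on $\sum_{i<j}1/i\approx\log j$ and $\sum_{j\le n}1/(j(\log\log j)^{1+\epsilon})\ll\log n(\log\log n)^{-(1+\epsilon)}$. A quick numerical check (grid and $\epsilon$ chosen by us) shows that the ratio of the left-hand sum to $(\log n)^2(\log\log n)^{-(1+\epsilon)}$ indeed stays bounded.

```python
import numpy as np

eps = 0.1
for n in (10**4, 10**5, 10**6):
    j = np.arange(3, n + 1)
    s = np.sum(np.log(j) / (j * np.log(np.log(j)) ** (1 + eps)))     # middle sum in (29)
    bound = np.log(n) ** 2 * np.log(np.log(n)) ** (-(1 + eps))       # right-hand side of (29)
    print(n, s / bound)                                              # ratio stays bounded
```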
Secondly, according to Leadbetter et al. [LLR83] (page 136), we have
\[
P\bigl(a_n(M_n-b_n)\le x\bigr)\to\int_{-\infty}^{\infty}\exp\bigl(-e^{-x-r+\sqrt{2r}\,z}\bigr)\phi(z)\,dz,\quad\text{as }n\to\infty.
\]
Since logarithmic averaging is regular (if $c_k\to c$ then $(1/\log n)\sum_{k\le n}c_k/k\to c$), this clearly implies
\[
\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}P(M_k\le u_k)\to\int_{-\infty}^{\infty}\exp\bigl(-e^{-x-r+\sqrt{2r}\,z}\bigr)\phi(z)\,dz,
\]
as $n\to\infty$. Together with (23), the conclusion follows.
Acknowledgements. The author would like to thank the referee for his careful reading of the manuscript and many constructive comments which improved the paper greatly, and the editor for his careful work.
References
[BC01] I. Berkes and E. Csáki. A universal result in almost sure central limit theory. Stochastic Process. Appl. 94, 105-134, 2001. MR1835848
[CL06] S. Chen and Z. Lin. Almost sure max-limits for nonstationary Gaussian sequence. Statist. Probab. Lett. 76, 1175-1184, 2006. MR2269290
[CPQ98] S. Cheng, L. Peng and Y. Qi. Almost sure convergence in extreme value theory. Math. Nachr. 190, 43-50, 1998. MR1611675
[CG02] E. Csáki and K. Gonchigdanzan. Almost sure limit theorems for the maximum of stationary Gaussian sequences. Statist. Probab. Lett. 58, 195-203, 2002.
[FS98] I. Fahrner and U. Stadtmüller. On almost sure max-limit theorems. Statist. Probab. Lett. 37, 229-236, 1998. MR1614934
[LLR83] M. R. Leadbetter, G. Lindgren and H. Rootzén. Extremes and Related Properties of Random Sequences and Processes. Springer-Verlag, New York, 1983.