
Electronic Journal of Probability

Vol. 14 (2009), Paper no. 83, pages 2391–2417.
Journal URL: http://www.math.washington.edu/~ejpecp/

CLT for Linear Spectral Statistics of Wigner matrices

Zhidong Bai

Xiaoying Wang

Wang Zhou§

Abstract

In this paper, we prove that the spectral empirical process of Wigner matrices under sixth-moment conditions, which is indexed by a set of functions with continuous fourth-order derivatives on an open interval including the support of the semicircle law, converges weakly in finite dimensions to a Gaussian process.

Key words: Bernstein polynomial, central limit theorem, Stieltjes transform, Wigner matrices.
AMS 2000 Subject Classification: Primary 15B52, 60F15; Secondary 62H99.

Submitted to EJP on July 17, 2009, final version accepted October 14, 2009.

The first two authors were partially supported by CNSF 10871036 and NUS grant R-155-000-079-112, and the third author was partially supported by grant R-155-000-076-112 at the National University of Singapore.

School of Mathematics and Statistics, Northeast Normal University, Changchun, 130024, P.R.C. Email: baizd@nenu.edu.cn

School of Mathematics and Statistics, Northeast Normal University, Changchun, 130024, P.R.C. Email: wangxy022@126.com

§Department of Statistics and Applied Probability, National University of Singapore, 117546, Singapore. Email:

1 Introduction and Main result

Random matrix theory originates from the development of quantum mechanics in the 1950s. In quantum mechanics, the energy levels of a system are described by the eigenvalues of a Hermitian operator on a Hilbert space. Physicists working on quantum mechanics are therefore interested in the asymptotic behavior of these eigenvalues, and random matrix theory has since become a very popular topic among mathematicians, probabilists and statisticians. The leading work, the famous semicircle law for Wigner matrices, was established in [19].

A real Wigner matrix of size $n$ is a real symmetric matrix $W_n=(x_{ij})_{1\le i,j\le n}$ whose upper-triangular entries $(x_{ij})_{1\le i\le j\le n}$ are independent, zero-mean real-valued random variables satisfying the following moment conditions:

(1) $\forall i$, $E|x_{ii}|^2=\sigma^2>0$;  (2) $\forall i<j$, $E|x_{ij}|^2=1$.

The set of these real Wigner matrices is called the Real Wigner Ensemble (RWE).

A complex Wigner matrix of size $n$ is a Hermitian matrix $W_n$ whose upper-triangular entries $(x_{ij})_{1\le i\le j\le n}$ are independent, zero-mean complex-valued random variables satisfying the following moment conditions:

(1) $\forall i$, $E|x_{ii}|^2=\sigma^2>0$;  (2) $\forall i<j$, $E|x_{ij}|^2=1$ and $Ex_{ij}^2=0$.

The set of these complex Wigner matrices is called the Complex Wigner Ensemble (CWE).

The empirical distribution $F_n$ generated by the $n$ eigenvalues of the normalized Wigner matrix $n^{-1/2}W_n$ is called the empirical spectral distribution (ESD) of the Wigner matrix. The semicircle law states that $F_n$ a.s. converges to the distribution $F$ with the density
\[
p(x)=\frac{1}{2\pi}\sqrt{4-x^2},\qquad x\in[-2,2].
\]
Its various versions of convergence were later investigated. See, for example, [1], [2].
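The semicircle law is easy to check numerically. The sketch below is not part of the paper; the Gaussian entry distribution, the matrix size and the binning are arbitrary illustrative choices. It samples one real Wigner matrix and compares a histogram of the eigenvalues of $n^{-1/2}W_n$ with the density $p(x)$ above.

```python
import numpy as np

def wigner_eigenvalues(n, sigma2=1.0, seed=None):
    """Sample a real Wigner matrix (off-diagonal variance 1, diagonal variance sigma2)
    and return the eigenvalues of n^{-1/2} W_n."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    w = np.triu(a, 1)
    w = w + w.T                                   # symmetric, zero diagonal so far
    w[np.diag_indices(n)] = np.sqrt(sigma2) * rng.standard_normal(n)
    return np.linalg.eigvalsh(w / np.sqrt(n))

def semicircle_density(x):
    """p(x) = sqrt(4 - x^2) / (2*pi) on [-2, 2], 0 elsewhere."""
    return np.where(np.abs(x) <= 2, np.sqrt(np.clip(4 - x**2, 0, None)) / (2 * np.pi), 0.0)

eigs = wigner_eigenvalues(3000, seed=0)
hist, edges = np.histogram(eigs, bins=60, range=(-2.5, 2.5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - semicircle_density(centers))))   # small for large n
```

For $n$ in the thousands the discrepancy printed at the end is already small, consistent with the convergence rates discussed next.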

Clearly, one method of refining the above approximation is to establish the rate of convergence, which was studied in [3], [10], [12], [13], [18], [5] and [8]. Although the exact convergence rate remains unknown for Wigner matrices, Bai and Yao [6] proved that the spectral empirical process of Wigner matrices indexed by a set of functions analytic on an open domain of the complex plane including the support of the semicircle law converges to a Gaussian process under fourth-moment conditions.

To investigate the convergence rate of the ESD of Wigner matrices, one needs to take $f$ to be a step function. However, much evidence shows that the empirical process associated with a step function cannot converge in any metric space; see Chapter 9 of [8]. Naturally, one may ask whether it is possible to derive the convergence of the spectral empirical process of Wigner matrices indexed by a class of functions under as weak a smoothness assumption as possible. This may lead to a deeper understanding of the exact convergence rate of the ESD to the semicircle law.

In this paper we consider the class $C^4(\mathcal U)$ of functions with continuous fourth-order derivatives, where $\mathcal U$ is an open interval including the interval $[-2,2]$, the support of $F(x)$. The empirical process $G_n:=\{G_n(f)\}$ indexed by $C^4(\mathcal U)$ is given by
\[
G_n(f)=n\int_{-\infty}^{\infty}f(x)\,[F_n-F](dx),\qquad f\in C^4(\mathcal U). \tag{1.1}
\]

In order to give a unified treatment of Wigner matrices, we define the parameter $\kappa$ with values 1 and 2 for the complex and real Wigner matrices, respectively. Also set $\beta=E(|x_{12}|^2-1)^2-\kappa$. Our main result is as follows.

Theorem 1.1. Suppose
\[
E|x_{ij}|^6\le M\quad\text{for all } i,j. \tag{1.2}
\]
Then the spectral empirical process $G_n=\{G_n(f): f\in C^4(\mathcal U)\}$ converges weakly in finite dimensions to a Gaussian process $G:=\{G(f): f\in C^4(\mathcal U)\}$ with mean function
\[
EG(f)=\frac{\kappa-1}{4}[f(2)+f(-2)]-\frac{\kappa-1}{2}\tau_0(f)+(\sigma^2-\kappa)\tau_2(f)+\beta\tau_4(f)
\]
and covariance function
\[
c(f,g):=E\big[\{G(f)-EG(f)\}\{G(g)-EG(g)\}\big]
=\frac{1}{4\pi^2}\int_{-2}^{2}\int_{-2}^{2}f'(t)g'(s)V(t,s)\,dt\,ds,
\]
where
\[
V(t,s)=\Big(\sigma^2-\kappa+\tfrac12\beta ts\Big)\sqrt{(4-t^2)(4-s^2)}
+\kappa\log\!\left(\frac{4-ts+\sqrt{(4-t^2)(4-s^2)}}{4-ts-\sqrt{(4-t^2)(4-s^2)}}\right)
\]
and
\[
\tau_l(f)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(2\cos\theta)e^{il\theta}\,d\theta
=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(2\cos\theta)\cos(l\theta)\,d\theta
=\frac{1}{\pi}\int_{-1}^{1}f(2t)\,T_l(t)\,\frac{1}{\sqrt{1-t^2}}\,dt.
\]
Here $\{T_l,\ l\ge 0\}$ is the family of Chebyshev polynomials.
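To make the statement concrete, here is a small Monte Carlo sketch; it is not from the paper, and the test function $f(x)=x^4$, the matrix size and the number of replications are arbitrary choices. For GOE-type Gaussian entries one has $\kappa=2$, $\sigma^2=Ex_{ii}^2=2$ and $\beta=E(x_{12}^2-1)^2-\kappa=0$, so the mean in Theorem 1.1 reduces to $[f(2)+f(-2)]/4-\tau_0(f)/2$. The $\tau_l(f)$ are evaluated by Gauss–Chebyshev quadrature, and the identity $\int f\,dF=\tau_0(f)-\tau_2(f)$ is used to center the linear statistic.

```python
import numpy as np

def tau(f, l, num=4000):
    """tau_l(f) = (1/pi) int_{-1}^1 f(2t) T_l(t) / sqrt(1-t^2) dt,
    via Gauss-Chebyshev quadrature: theta_j = (2j-1)pi/(2N)."""
    theta = (2 * np.arange(1, num + 1) - 1) * np.pi / (2 * num)
    return np.mean(f(2 * np.cos(theta)) * np.cos(l * theta))

def linear_statistic(f, eigs, n):
    """G_n(f) = sum_i f(lambda_i) - n * int f dF, with int f dF = tau_0(f) - tau_2(f)."""
    return np.sum(f(eigs)) - n * (tau(f, 0) - tau(f, 2))

# GOE-type entries: kappa = 2, sigma^2 = E x_ii^2 = 2, beta = 0,
# so the theorem's mean reduces to [f(2) + f(-2)]/4 - tau_0(f)/2.
f = lambda x: x ** 4
rng = np.random.default_rng(1)
n, reps, samples = 400, 200, []
for _ in range(reps):
    a = rng.standard_normal((n, n))
    w = np.triu(a, 1)
    w = w + w.T
    w[np.diag_indices(n)] = np.sqrt(2.0) * rng.standard_normal(n)
    samples.append(linear_statistic(f, np.linalg.eigvalsh(w / np.sqrt(n)), n))

predicted_mean = (f(2.0) + f(-2.0)) / 4 - tau(f, 0) / 2
print(np.mean(samples), predicted_mean)   # the two should be close
```

Across the replications the sample mean of $G_n(f)$ should be close to the predicted value, and a histogram of the samples looks approximately Gaussian, in line with the theorem.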

ideal assumptions. Therefore, quantum physicists may want to test the validity of the ideal assumptions, and they may suitably select one or several $f$'s so that the $\theta$'s characterize the semicircle law. Using the limiting distribution of $G_n(f)=n(\hat\theta-\theta)$, one may perform a statistical test of the ideal hypothesis. Obviously, one cannot apply the limiting distribution of $n(\hat\theta-E\hat\theta)$ to the above test.

Remark 1.3. Checking the proof of Theorem 1.1, one finds that the proof still holds under the assumption that the fourth moments of the off-diagonal entries are bounded, provided the approximation $G_n(f_m)$ of $G_n(f)$ is of the desired order. Thus, the assumption of a bounded 6th moment is only needed in deriving the convergence rate of $\|F_n-F\|$. Furthermore, the assumption on the fourth derivative of $f$ is also related to the convergence rate of $\|F_n-F\|$. If the convergence rate of $\|F_n-F\|$ is improved and/or proved under weaker conditions, then the result of Theorem 1.1 would hold under the weakened conditions. We conjecture that the result would remain true if we only assume that the fourth moments of the off-diagonal elements of the Wigner matrix are bounded and that $f$ has a continuous first derivative.

Pastur and Lytova [17] studied the asymptotic distribution of $n\int f(x)\,d(F_n(x)-EF_n(x))$ under the conditions that the fourth cumulant of the off-diagonal elements of the Wigner matrix is zero and that the Fourier transform of $f(x)$ has a finite 5th moment, which implies that $f$ has a continuous fifth derivative. Moreover, they assume $f$ is defined on the whole real line. These conditions are stronger than ours.

Remark 1.4. The strategy of the proof is to use Bernstein polynomials to approximate functions in $C^4(\mathcal U)$. This will be done in Section 2. The problem is then reduced to the analytic case, which has been intensively discussed in Bai and Yao [6]. But the functions in [6] are independent of $n$, so one can choose a fixed contour and then prove that the Stieltjes transforms tend to a limiting process. In the present case, the Bernstein polynomials depend on $n$ through increasing degrees and thus are not uniformly bounded on any fixed contour. Therefore, we cannot simply use the results of Bai and Yao [6]; we have to choose a sequence of contours approaching the real axis so that the approximating polynomials are uniformly bounded on the corresponding contours. On this sequence of contours, the Stieltjes transforms do not have a limiting process, so our Theorem 1.1 cannot follow directly from [6]. We have to find alternative ways to prove the CLT.

Remark 1.5. The so-called secondary freeness has also been proposed and investigated in the free probability literature. Interested readers are referred to Mingo et al. [14; 15; 16]. We shall not pursue this direction in the present paper. As a matter of fact, we only comment here that both freeness and secondary freeness are defined on sequences of random matrices (or ensembles) for which the limits of expectations of the normalized traces of all powers of the random matrices and the limits of covariances of unnormalized traces of powers of the random matrices exist. Therefore, for a single sequence of random matrices, the existence of these limits has to be verified, and the verification is basically equivalent to the moment convergence method. The results obtained in [14] are in some sense equivalent to those of [17].

2 Bernstein polynomial approximation

It is well known that if $\tilde f(y)$ is a continuous function on the interval $[0,1]$, the Bernstein polynomials
\[
\tilde f_m(y)=\sum_{k=0}^{m}\binom{m}{k}y^k(1-y)^{m-k}\tilde f\!\Big(\frac{k}{m}\Big)
\]
converge to $\tilde f(y)$ uniformly on $[0,1]$ as $m\to\infty$. Suppose $\tilde f(y)\in C^4[0,1]$. Taylor expansion gives

\[
\tilde f\!\Big(\frac{k}{m}\Big)=\tilde f(y)+\Big(\frac{k}{m}-y\Big)\tilde f'(y)+\frac12\Big(\frac{k}{m}-y\Big)^2\tilde f''(y)
+\frac{1}{3!}\Big(\frac{k}{m}-y\Big)^3\tilde f^{(3)}(y)+\frac{1}{4!}\Big(\frac{k}{m}-y\Big)^4\tilde f^{(4)}(\xi_y).
\]
Hence
\[
\tilde f_m(y)-\tilde f(y)=\frac{y(1-y)\tilde f''(y)}{2m}+O\Big(\frac{1}{m^2}\Big). \tag{2.1}
\]

For the function $f\in C^4(\mathcal U)$, there exists $a>2$ such that $[-a,a]\subset\mathcal U$. Make the linear transformation $y=Kx+\frac12$, $\varepsilon\in(0,1/2)$, where $K=(1-2\varepsilon)/(2a)$; then $y\in[\varepsilon,1-\varepsilon]$ if $x\in[-a,a]$. Define $\tilde f(y):=f(K^{-1}(y-1/2))=f(x)$, $y\in[\varepsilon,1-\varepsilon]$, and define
\[
f_m(x):=\tilde f_m(y)=\sum_{k=1}^{m}\binom{m}{k}y^k(1-y)^{m-k}\tilde f\!\Big(\frac{k}{m}\Big).
\]

From (2.1), we have
\[
f_m(x)-f(x)=\tilde f_m(y)-\tilde f(y)=\frac{y(1-y)\tilde f''(y)}{2m}+O\Big(\frac{1}{m^2}\Big).
\]
Since $\tilde h(y):=y(1-y)\tilde f''(y)$ has a second-order derivative, we can use the Bernstein polynomial approximation once again to get
\[
\tilde h_m(y)-\tilde h(y)=\sum_{k=1}^{m}\binom{m}{k}y^k(1-y)^{m-k}\tilde h\!\Big(\frac{k}{m}\Big)-\tilde h(y)=O\Big(\frac{1}{m}\Big).
\]
So, with $h_m(x)=\tilde h_m(Kx+\frac12)$,
\[
f(x)=f_m(x)-\frac{1}{2m}h_m(x)+O\Big(\frac{1}{m^2}\Big).
\]
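A quick numerical check of this correction (not from the paper; the test function $\sin y$ on $[0,1]$, the evaluation point and the degrees are arbitrary choices): the raw Bernstein error behaves like $y(1-y)\tilde f''(y)/(2m)$, while subtracting $\tilde h_m(y)/(2m)$ pushes the error down to $O(1/m^2)$.

```python
import numpy as np
from math import comb

def bernstein(g, m, y):
    """Bernstein polynomial g_m(y) = sum_{k=0}^m C(m,k) y^k (1-y)^(m-k) g(k/m)."""
    k = np.arange(m + 1)
    weights = np.array([comb(m, int(j)) for j in k], dtype=float)
    return float(np.sum(weights * y**k * (1 - y)**(m - k) * g(k / m)))

g  = np.sin                       # a smooth test function on [0, 1]
g2 = lambda y: -np.sin(y)         # its second derivative
h  = lambda y: y * (1 - y) * g2(y)

y = 0.3
for m in (50, 100, 200, 400):
    err_raw = bernstein(g, m, y) - g(y)                                  # ~ y(1-y) g''(y) / (2m)
    err_corr = bernstein(g, m, y) - bernstein(h, m, y) / (2 * m) - g(y)  # ~ 1/m^2
    print(m, err_raw, y * (1 - y) * g2(y) / (2 * m), err_corr)
```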

Therefore, $G_n(f)$ can be split into three parts:
\[
\begin{aligned}
G_n(f)&=n\int_{-\infty}^{\infty}f(x)\,[F_n-F](dx)\\
&=n\int f_m(x)\,[F_n-F](dx)-\frac{n}{2m}\int h_m(x)\,[F_n-F](dx)
+n\int\Big[f(x)-f_m(x)+\frac{1}{2m}h_m(x)\Big][F_n-F](dx)\\
&=\Delta_1+\Delta_2+\Delta_3.
\end{aligned}
\]
For $\Delta_3$, by Lemma 6.1 given in the Appendix,
\[
\|F_n-F\|=O_p(n^{-2/5}).
\]

Here and in the sequel, the notation $Z_n=O_p(c_n)$ means that for any $\varepsilon>0$ there exists an $M>0$ such that $\sup_n P(|Z_n|\ge Mc_n)<\varepsilon$. Similarly, $Z_n=o_p(c_n)$ means that for any $\varepsilon>0$, $\lim_n P(|Z_n|\ge\varepsilon c_n)=0$.

Taking $m^2=[n^{3/5+\varepsilon_0}]$ for some $\varepsilon_0>0$ and using integration by parts, we have
\[
\Delta_3=-n\int\Big[f(x)-f_m(x)+\frac{1}{2m}h_m(x)\Big]'\big[F_n(x)-F(x)\big]\,dx=O_p(n^{-\varepsilon_0}),
\]
since $f(x)-f_m(x)+\frac{1}{2m}h_m(x)=O(m^{-2})$. From now on we choose $\varepsilon_0=1/20$, and then $m=n^{13/40}$.

Note that $f_m(x)$ and $h_m(x)$ are both analytic. Following the result proved in Section 5, with $f_m$ replaced by $h_m$, we obtain that
\[
\Delta_2=\frac{O(\Delta_1)}{m}=o_p(1).
\]
It suffices to consider $\Delta_1=G_n(f_m)$. So far, $f_m(x)$ and $\tilde f_m(y)$ are only defined on the real line. Clearly, the two polynomials can be considered as analytic functions on the complex regions $[-a,a]\times[-\xi,\xi]$ and $[\varepsilon,1-\varepsilon]\times[-K\xi,K\xi]$, respectively.

Since $\tilde f\in C^4$, there is a constant $M$ such that $|\tilde f(y)|<M$, $y\in[\varepsilon,1-\varepsilon]$. Noting that for $(u,v)\in[\varepsilon,1-\varepsilon]\times[-K\xi,K\xi]$,
\[
|u+iv|+|1-u-iv|=\sqrt{u^2+v^2}+\sqrt{(1-u)^2+v^2}
\le u\Big[1+\frac{v^2}{2u^2}\Big]+(1-u)\Big[1+\frac{v^2}{2(1-u)^2}\Big]
\le 1+\frac{v^2}{\varepsilon},
\]
we have, for $y=Kx+1/2=u+iv$,
\[
|f_m(x)|=|\tilde f_m(y)|=\Big|\sum_{k=1}^{m}\binom{m}{k}y^k(1-y)^{m-k}\tilde f\!\Big(\frac{k}{m}\Big)\Big|
\le M\Big(1+\frac{v^2}{\varepsilon}\Big)^{m}.
\]
If we take $|\xi|\le 1/\sqrt m$, then $|v|\le K\xi\le K/\sqrt m$ and $|\tilde f_m(y)|\le M\big(1+K^2/(m\varepsilon)\big)^{m}\to Me^{K^2/\varepsilon}$ as $m\to\infty$. So $\tilde f_m(y)$ is bounded when $y\in[\varepsilon,1-\varepsilon]\times[-K/\sqrt m,K/\sqrt m]$; in other words, $f_m(x)$ is bounded when $x\in[-a,a]\times[-1/\sqrt m,1/\sqrt m]$. Let $\gamma_m$ be the contour formed by the boundary of the rectangle with vertices $(\pm a\pm i/\sqrt m)$. Similarly, we can show that $h_m(x)$, $f_m'(x)$ and $h_m'(x)$ are bounded on $\gamma_m$.

3 Simplification by truncation and preliminary formulae

3.1 Truncation

As proposed in [9], to control the fluctuations of the extreme eigenvalues under condition (1.2), we will truncate the variables at a convenient level without changing their weak limit.

Condition (1.2) implies the existence of a sequence $\eta_n\downarrow 0$ such that
\[
\frac{1}{n^2\eta_n^4}\sum_{i,j}E\big[|x_{ij}|^4 I_{\{|x_{ij}|\ge\eta_n\sqrt n\}}\big]=o(1).
\]
We first truncate the variables as $\hat x_{ij}=x_{ij}I_{\{|x_{ij}|\le\eta_n\sqrt n\}}$ and normalize them to $\tilde x_{ij}=(\hat x_{ij}-E\hat x_{ij})/s_{ij}$, where $s_{ij}$ is the standard deviation of $\hat x_{ij}$.
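The truncation and renormalization step can be illustrated as follows. This sketch is not from the paper: the entry distribution (a rescaled $t_7$, which has a finite sixth moment), the choice $\eta_n=n^{-1/10}$, and the fact that the centering and scaling constants are estimated from the sample rather than taken as the population quantities $E\hat x_{ij}$ and $s_{ij}$ are all simplifying assumptions made only for illustration.

```python
import numpy as np

def truncate_and_normalize(x, eta_n):
    """x_hat = x * 1{|x| <= eta_n sqrt(n)};  x_tilde = (x_hat - E x_hat) / sd(x_hat).
    In the proof the centering and scaling are population quantities; here they are
    simply estimated from the sampled entries, which is enough for an illustration."""
    n = x.shape[0]
    level = eta_n * np.sqrt(n)
    x_hat = np.where(np.abs(x) <= level, x, 0.0)
    return (x_hat - x_hat.mean()) / x_hat.std()

rng = np.random.default_rng(0)
n = 1000
x = rng.standard_t(df=7, size=(n, n)) / np.sqrt(7 / 5)   # unit variance, finite 6th moment
x_tilde = truncate_and_normalize(x, eta_n=n ** (-1 / 10))
print(np.abs(x_tilde).max(), 2 * n ** (-1 / 10) * np.sqrt(n))   # entries are now bounded
```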

Let $\hat F_n$ and $\tilde F_n$ denote the ESDs of the Wigner matrices $n^{-1/2}(\hat x_{ij})$ and $n^{-1/2}(\tilde x_{ij})$, and $\hat G_n$ and $\tilde G_n$ the corresponding empirical processes, respectively. First of all, by (1.2),
\[
P(G_n\ne\hat G_n)\le P(F_n\ne\hat F_n)\le\sum_{i,j}P(x_{ij}\ne\hat x_{ij})
\le\sum_{i,j}P(|x_{ij}|\ge\eta_n\sqrt n)
\le(\eta_n\sqrt n)^{-4}\sum_{i,j}E|x_{ij}|^4 I_{\{|x_{ij}|\ge\eta_n\sqrt n\}}=o(1).
\]

Second, we compare $\tilde G_n(f_m)$ and $\hat G_n(f_m)$. Note that
\[
\begin{aligned}
\max_{i,j}|1-s_{ij}|&\le\max_{i,j}|1-s_{ij}^2|
=\max_{i,j}\Big[E\big(|x_{ij}|^2 I_{\{|x_{ij}|\ge\eta_n\sqrt n\}}\big)+\big|E\big(x_{ij}I_{\{|x_{ij}|\ge\eta_n\sqrt n\}}\big)\big|^2\Big]\\
&\le\big(n^{-1}\eta_n^{-2}+\eta_n^{-6}n^{-3}\big)\max_{i,j}\Big[E\big(|x_{ij}|^4 I_{\{|x_{ij}|\ge\eta_n\sqrt n\}}\big)\Big]\to 0.
\end{aligned}
\]

Therefore, there exist positive constants $M_1$ and $M_2$ so that
\[
\sum_{i,j}E\big(|x_{ij}|^2|1-s_{ij}^{-1}|^2\big)\le M_1\sum_{i,j}(1-s_{ij}^2)^2
\le\frac{M_2}{\eta_n^4 n^2}\sum_{i,j}E\big(x_{ij}^4 I_{\{|x_{ij}|\ge\eta_n\sqrt n\}}\big)=o(1).
\]

Since $f(x)\in C^4(\mathcal U)$ implies that the $f_m(x)$ are uniformly bounded in $x\in[-a,a]$ and $m$, we obtain
\[
\begin{aligned}
E|\tilde G_n(f_m)-\hat G_n(f_m)|^2
&\le ME\Big(\sum_{j=1}^{n}|\tilde\lambda_{nj}-\hat\lambda_{nj}|\Big)^2
\le MnE\sum_{j=1}^{n}|\tilde\lambda_{nj}-\hat\lambda_{nj}|^2\\
&\le MnE\sum_{i,j}|n^{-1/2}(\tilde x_{ij}-\hat x_{ij})|^2
\le M\Big(\sum_{i,j}\big(E|x_{ij}|^2\big)|1-s_{ij}^{-1}|^2+\sum_{i,j}|E(\hat x_{ij})|^2 s_{ij}^{-2}\Big)\\
&=o(1),
\end{aligned}
\]

where $\tilde\lambda_{nj}$ and $\hat\lambda_{nj}$ are the $j$th largest eigenvalues of the Wigner matrices $n^{-1/2}(\tilde x_{ij})$ and $n^{-1/2}(\hat x_{ij})$, respectively.

Therefore, the weak limit of the variables $(G_n(f_m))$ is not affected if we substitute the normalized truncated variables $\tilde x_{ij}$ for the original $x_{ij}$.

After the normalization, the variables $\tilde x_{ij}$ all have mean 0 and the same absolute second moments as the original variables. But for the CWE, the condition $Ex_{ij}^2=0$ no longer holds after these simplifications. Fortunately, we have the estimate $E\tilde x_{ij}^2=o(n^{-1}\eta_n^{-2})$, which is good enough for our purposes.

For brevity, in the sequel we still use $x_{ij}$ to denote the truncated and normalized variables $\tilde x_{ij}$.

3.2 Preliminary formulae

Recall that for a distribution function $H$, its Stieltjes transform $s_H(z)$ is defined by
\[
s_H(z)=\int\frac{1}{x-z}\,dH(x),\qquad z\in\mathbb C,\ \mathrm{Im}(z)\ne 0.
\]
The Stieltjes transform $s(z)$ of the semicircle law $F$ is given by
\[
s(z)=\tfrac12\big(-z+\mathrm{sgn}(\mathrm{Im}(z))\sqrt{z^2-4}\big)
\]
for $z$ with $\mathrm{Im}(z)\ne 0$, which satisfies the equation $s(z)^2+zs(z)+1=0$. Here and throughout the paper, $\sqrt z$ of a complex number $z$ denotes the square root of $z$ with positive imaginary part.
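As a sanity check on these formulas (not from the paper; the evaluation point, matrix size and Gaussian entries are arbitrary choices), the following sketch verifies that $s(z)$ with the stated branch solves $s(z)^2+zs(z)+1=0$ and compares it with the empirical Stieltjes transform $s_n(z)$ of one sample.

```python
import numpy as np

def s_semicircle(z):
    """s(z) = (-z + sgn(Im z) * sqrt(z^2 - 4)) / 2, with the square root chosen
    to have positive imaginary part."""
    r = np.sqrt(z * z - 4 + 0j)
    if r.imag < 0:
        r = -r
    return 0.5 * (-z + np.sign(z.imag) * r)

def s_empirical(z, eigs):
    """s_n(z) = (1/n) sum_i 1 / (lambda_i - z)."""
    return np.mean(1.0 / (eigs - z))

z = 0.3 + 0.2j
s = s_semicircle(z)
print(s * s + z * s + 1)                 # ~ 0: s solves s^2 + z s + 1 = 0

rng = np.random.default_rng(2)
n = 2000
a = rng.standard_normal((n, n))
w = np.triu(a, 1)
w = w + w.T
w[np.diag_indices(n)] = np.sqrt(2.0) * rng.standard_normal(n)
print(s_empirical(z, np.linalg.eigvalsh(w / np.sqrt(n))), s)   # close for large n
```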

Define $D=(n^{-1/2}W_n-zI_n)^{-1}$. Let $\alpha_k$ be the $k$th column of $W_n$ with $x_{kk}$ removed and $W_n^{(k)}$ the submatrix extracted from $W_n$ by removing its $k$th row and $k$th column. Define $D_k=(n^{-1/2}W_n^{(k)}-zI_{n-1})^{-1}$ and the following notation:
\[
\begin{aligned}
\beta_k(z)&=-\frac{1}{\sqrt n}x_{kk}+z+n^{-1}\alpha_k^*D_k\alpha_k,\\
\delta_n(z)&=-\frac{1}{n}\sum_{k=1}^{n}\frac{\epsilon_k(z)}{\beta_k(z)},\\
\epsilon_k(z)&=\frac{1}{\sqrt n}x_{kk}-\frac{1}{n}\alpha_k^*D_k\alpha_k+Es_n(z),\\
q_k(z)&=\frac{1}{\sqrt n}x_{kk}-\frac{1}{n}\big[\alpha_k^*D_k(z)\alpha_k-\mathrm{tr}\,D_k(z)\big],\\
\omega_k(z)&=-\frac{1}{\sqrt n}x_{kk}+n^{-1}\alpha_k^*D_k\alpha_k-s(z),\\
b_n(z)&=n[Es_n(z)-s(z)].
\end{aligned}
\]
The Stieltjes transform $s_n(z)$ of $F_n$ has the representation
\[
s_n(z)=\frac1n\mathrm{tr}\,D=\frac1n\mathrm{tr}\Big(\frac{W_n}{\sqrt n}-zI_n\Big)^{-1}
=-\frac1n\sum_{k=1}^{n}\frac{1}{\beta_k(z)}
=-\frac{1}{z+Es_n(z)}+\frac{\delta_n(z)}{z+Es_n(z)}. \tag{3.2}
\]

Throughout this paper, $M$ may denote different constants on different occasions, and $\varepsilon_n$ denotes a sequence of numbers converging to 0 as $n$ goes to infinity.

4 The mean function

Let $\lambda_{ext}(n^{-1/2}W_n)$ denote both the smallest and the largest eigenvalue of the matrix $n^{-1/2}W_n$ (defined by the truncated and normalized variables). For $\eta_0<(a-2)/2$, define
\[
A_n=\{|\lambda_{ext}(n^{-1/2}W_n)|\le 2+\eta_0\}.
\]

Then, by the Cauchy integral formula we have
\[
\Delta_1=G_n(f_m)=\frac{1}{2\pi i}\oint_{\gamma_m}\!\!\int\frac{f_m(z)}{z-x}\,n[F_n-F](dx)\,dz\;I_{A_n}
+\int f_m(x)\,n[F_n-F](dx)\,I_{A_n^c}.
\]
Since $P(A_n^c)=o(n^{-t})$ for any $t>0$ (see the proof of Theorem 2.12 in [4]),
\[
\int f_m(x)\,n[F_n-F](dx)\,I_{A_n^c}\to 0
\]
in probability. It suffices to consider

\[
\Delta=\frac{1}{2\pi i}\oint_{\gamma_m}f_m(z)\,n[s_n(z)-s(z)]\,I_{A_n}\,dz. \tag{4.3}
\]

In the remainder of this section, we will handle the asymptotic mean function of $\Delta$. The convergence of the random part of $\Delta$ will be given in Section 5.

To this end, write $b_n(z)=n[Es_n(z)-s(z)]$ and $b(z):=[1+s'(z)]s^3(z)[\sigma^2-1+(\kappa-1)s'(z)+\beta s^2(z)]$. Then we prove
\[
E\Delta_h=\frac{1}{2\pi i}\oint_{\gamma_{mh}}f_m(z)b(z)\,dz
-\frac{1}{2\pi i}\oint_{\gamma_{mh}}f_m(z)\big[n\big(Es_n(z)I_{A_n}-s(z)\big)-b(z)\big]\,dz+o(1)
:=R_1+R_2+o(1) \tag{4.4}
\]
and
\[
\Delta_v=\frac{1}{2\pi i}\oint_{\gamma_{mv}}f_m(z)\,n[s_n(z)-s(z)]\,I_{A_n}\,dz=o_p(1), \tag{4.5}
\]
where, here and in the sequel, $\gamma_{mh}$ denotes the union of the two horizontal parts of $\gamma_m$ and $\gamma_{mv}$ the union of the two vertical parts. The limit of $R_1$ is given in the following proposition.

Proposition 4.1. $R_1$ tends to
\[
EG(f)=\frac{\kappa-1}{4}[f(2)+f(-2)]-\frac{\kappa-1}{2}\tau_0(f)+(\sigma^2-\kappa)\tau_2(f)+\beta\tau_4(f)
\]
for both the CWE and RWE.

Proof. Since the $f_m(z)$ are analytic functions, by [6] we have
\[
R_1=\frac{1}{2\pi i}\oint_{\gamma_{mh}}f_m(z)b(z)\,dz
\simeq\frac{\kappa-1}{4}[f_m(2)+f_m(-2)]-\frac{\kappa-1}{2}\tau_0(f_m)+(\sigma^2-\kappa)\tau_2(f_m)+\beta\tau_4(f_m),
\]
where $a\simeq b$ stands for $a/b\to 1$ as $n\to\infty$ and
\[
\tau_l(f_m)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f_m(2\cos\theta)e^{il\theta}\,d\theta.
\]
As $f_m(t)\to f(t)$ uniformly on $t\in[-a,a]$ as $m\to\infty$, it follows that
\[
R_1\longrightarrow\frac{\kappa-1}{4}[f(2)+f(-2)]-\frac{\kappa-1}{2}\tau_0(f)+(\sigma^2-\kappa)\tau_2(f)+\beta\tau_4(f).
\]

Since $f_m(z)$ is bounded on and inside $\gamma_m$, in order to prove $R_2\to 0$ as $m\to\infty$, it is sufficient to show

where we have used Lemma 6.1. This implies

where we have used $|\mathrm{tr}\,D(z)-\mathrm{tr}\,D_k(z)|\le 1/v$ and $E|\mathrm{tr}\,D(z)-E\,\mathrm{tr}\,D(z)|^{2l}\le c_l n^{-2l}v^{-4l}(\Delta+v)^l$ $(l\ge 1)$, which is derived from Lemma 6.2.

Fact 3.

From (3.2) we have

where the $o(1)$ is uniform for $z\in\gamma_{mh}$.

Finally we deal with

By the proof of (4.3) in [7], we have

Hence we can neglect this term. In order to estimate the expectation in (4.11), we introduce the notation $\alpha_k=(x_{1k},x_{2k},\dots,x_{k-1,k},x_{k+1,k},\dots,x_{nk})^*:=(x_i)$ and $D_k(z):=(d_{ij})$. Note that $\alpha_k$ and $D_k(z)$ are independent.

Here we need to consider the difference between the CWE and RWE.

For the RWE, all the original and truncated variables have the properties $Ex_{ij}=0$ and $E|x_{ij}|^2=Ex_{ij}^2$, which allow us to have the following unified expression,

Hence
\[
E\epsilon_k^2(z)=\frac{\sigma^2}{n}+\frac{1}{n^2}\Big(\kappa E[\mathrm{tr}\,D_k^2(z)]+\beta E\Big[\sum_i d_{ii}^2\Big]\Big)
+o\Big(\frac{1}{nv^2\eta_n^4}\Big)+\frac{M}{n^2v^3},
\]
which combined with
\[
E\Big|\frac1n\mathrm{tr}\,D_k^2(z)-s'(z)\Big|^2
\le\frac{1}{v^4}\big(\|F_{n-1}^{(k)}-F_n\|+\|F_n-F\|\big)^2
\le\frac{M}{v^4}\Big(\frac1n+\frac{1}{n^{2/5-\eta}}\Big)^2
\le Mn^{-3/20+2\eta}
\]
and
\[
E|d_{ii}^2-s^2(z)|\le E\big|[D_k(z)]_{ii}+s(z)\big|\,\big|[D_k(z)]_{ii}-s(z)\big|
\le M\Big(\frac1v+1\Big)E\big|[D_k(z)]_{ii}-s(z)\big|
\le\frac{M}{v}\Big[E\big|[D_k(z)]_{ii}-s(z)\big|^2\Big]^{1/2}
\le\frac{M}{v}\Big(\frac{1}{nv^4}\Big)^{1/2}=\frac{M}{\sqrt n\,v^3},
\]
gives
\[
\big|S_2+s^2(z)[\sigma^2+\kappa s'(z)+\beta s^2(z)]\big|\le\frac{M}{\sqrt n\,v^3}.
\]
Summarizing the estimates of $S_i$, $i=1,\dots,4$, we have obtained that
\[
\big|nE\delta_n(z)+s^2(z)[\sigma^2-1+(\kappa-1)s'(z)+\beta s^2(z)]\big|\le\frac{M}{\sqrt n\,v^3}.
\]

Note that
\[
b_n(z)=n[Es_n(z)-s(z)]
=\frac{2\,\mathrm{sgn}(\mathrm{Im}(z))\,nE\delta_n(z)}{\sqrt{z^2-4(1-E\delta_n(z))}+\sqrt{z^2-4}}
=\frac{\mathrm{sgn}(\mathrm{Im}(z))\,nE\delta_n(z)}{\sqrt{z^2-4}}\,(1+o(1))
\]
and that
\[
s'(z)=-\frac{s(z)}{z+2s(z)}=-\frac{s(z)}{\mathrm{sgn}(\mathrm{Im}(z))\sqrt{z^2-4}}.
\]
The second equation is equivalent to
\[
-\frac{\mathrm{sgn}(\mathrm{Im}(z))}{\sqrt{z^2-4}}=s(z)(1+s'(z)).
\]
From the above two equations, we conclude


Then, (4.6) is proved by noticing that $|b_n(z)-b_{nA}(z)|\le nv^{-1}P(A_n^c)=o(n^{-t})$ for any fixed $t$. Now we proceed to the proof of (4.5). We shall find a set $Q_n$ such that $P(Q_n^c)=o(n^{-1})$ and
\[
\oint_{\gamma_{mv}}f_m(z)\,n\big(s_n(z)I_{Q_n}-s(z)\big)\,dz=o_p(1). \tag{4.12}
\]
By continuity of $s(z)$, there are positive constants $M_l$ and $M_u$ such that for all $z\in\gamma_{mv}$, $M_l\le|z+s(z)|\le M_u$. Define $B_n=\{\min_k|\beta_k(z)|I_{A_n}>\varepsilon\}$, where $\varepsilon=M_l/4$. So

\[
\begin{aligned}
P(B_n^c)&=P\big(\min_k|\beta_k(z)|I_{A_n}\le\varepsilon\big)
\le\sum_{k=1}^{n}P\big(|\beta_k(z)|I_{A_n}\le\varepsilon\big)\\
&\le\sum_{k=1}^{n}P\big(|\beta_k(z)-z-s(z)|I_{A_n}\ge\varepsilon\big)+nP(A_n^c)\\
&\le\frac{1}{\varepsilon^4}\sum_{k=1}^{n}E\big[|\beta_k(z)-z-s(z)|^4 I_{A_n}\big]+nP(A_n^c)\\
&\le M\Big[\frac{1}{n^2}E|x_{kk}|^4+\frac{1}{n^4}E\big(|\alpha_k^*D_k(z)\alpha_k-\mathrm{tr}\,D_k(z)|^4 I_{A_n}\big)
+E\big(|\tfrac1n\mathrm{tr}\,D_k(z)-s(z)|^4 I_{A_n}\big)\Big]+nP(A_n^c).
\end{aligned}
\]

Let $A_{nk}=\{|\lambda_{ext}(n^{-1/2}W_n^{(k)})|\le 2+\eta_0\}$. It is trivially obtained that $A_n\subset A_{nk}$ (see [11]). Noting the independence of $I_{A_{nk}}$ and $\alpha_k$, we have
\[
\begin{aligned}
n^{-4}E\big(|\alpha_k^*D_k(z)\alpha_k-\mathrm{tr}\,D_k(z)|^4 I_{A_n}\big)
&\le n^{-4}E\big(\big[E_k|\alpha_k^*D_k(z)\alpha_k-\mathrm{tr}\,D_k(z)|^4\big]I_{A_{nk}}\big)\\
&\le Mn^{-4}E\Big(\big[\big(E|x_{12}|^4\,\mathrm{tr}\,D_k(z)D_k^*(z)\big)^2
+E|x_{12}|^8\,\mathrm{tr}\big(D_k(z)D_k^*(z)\big)^2\big]I_{A_{nk}}\Big)\\
&\le Mn^{-2},
\end{aligned}
\]
where $E_k$ denotes the expectation taken only with respect to $\alpha_k$. Therefore
\[
P(B_n^c)\le M\big[n^{-2}+n^{-2}+(n^{-2/5+\eta})^4\big]+nP(A_n^c)\le Mn^{-8/5+4\eta}.
\]
This gives $P(B_n^c)=o(n^{-1})$ as $n\to\infty$. Define $Q_n:=A_n\cap B_n$; then $P(Q_n)\to 1$ as $n\to\infty$. Similar to (4.7), we have for $z\in\gamma_{mv}$,
\[
Es_n(z)I_{Q_n}=-\frac{P(Q_n)}{z+Es_n(z)I_{Q_n}}+\frac{\delta_{nQ}(z)}{z+Es_n(z)I_{Q_n}}, \tag{4.13}
\]
where
\[
\delta_{nQ}(z)=-\frac1n\sum_{k=1}^{n}E\frac{\epsilon_{kQ}(z)I_{Q_n}}{\beta_k(z)},\qquad
\epsilon_{kQ}(z)=\frac{1}{\sqrt n}x_{kk}-\frac1n\alpha_k^*D_k(z)\alpha_k+Es_n(z)I_{Q_n}.
\]

Similar to (4.7), we have
\[
Es_n(z)I_{Q_n}=-\frac12\Big(z-\mathrm{sgn}(\mathrm{Im}(z))\sqrt{z^2-4\big(P(Q_n)-\delta_{nQ}(z)\big)}\Big).
\]
Therefore,
\[
b_{nQ}(z)=n\big(Es_n(z)I_{Q_n}-s(z)\big)
=\frac{2n\big[P(Q_n^c)+\delta_{nQ}(z)\big]}
{\mathrm{sgn}(\mathrm{Im}(z))\Big[\sqrt{z^2-4\big(P(Q_n)-\delta_{nQ}(z)\big)}+\sqrt{z^2-4}\Big]}.
\]
We shall prove that $b_{nQ}(z)$ is uniformly bounded for all $z\in\gamma_{mv}$. Noticing that $|z^2-4|>(a-2)^2$ and the fact that $P(Q_n^c)=o(n^{-1})$, we only need to show that
\[
n\delta_{nQ}(z)\ \text{is uniformly bounded for } z\in\gamma_{mv}. \tag{4.14}
\]

Rewrite
\[
n\delta_{nQ}(z)=-\sum_{k=1}^{n}\frac{E\,\epsilon_{kQ}I_{Q_n}}{z+Es_n(z)I_{Q_n}}
-\sum_{k=1}^{n}E\frac{\epsilon_{kQ}^2 I_{Q_n}}{\beta_k(z)\big[z+Es_n(z)I_{Q_n}\big]}.
\]

At first we have
\[
E\,\epsilon_{kQ}I_{A_{nk}}
=\frac1n E\Big[\mathrm{tr}\,D(z)I_{Q_n}P(A_{nk})-\mathrm{tr}\,D_k(z)I_{A_{nk}}\Big]
=\frac1n E\Big[\frac{I_{Q_n}}{\beta_k}P(A_{nk})-\mathrm{tr}\,D_k(z)\big(I_{Q_n^c\cap A_{nk}}+I_{Q_n}P(A_{nk}^c)\big)\Big]
=O(1/n).
\]
Here, the result follows from the facts that $1/\beta_k$ is uniformly bounded when $Q_n$ occurs, $\frac1n\mathrm{tr}\,D_k(z)$ is bounded when $A_{nk}$ occurs, and $P(Q_n^c)=o(1/n)$. From this and the facts that $Es_n(z)I_{Q_n}\to s(z)$ uniformly on $\gamma_{mv}$ and
\[
E|\epsilon_{kQ}|I_{Q_n^c\cap A_{nk}}
\le\frac1n E\Big[\mathrm{tr}\,D(z)I_{Q_n}P(Q_n^c\cap A_{nk})-\mathrm{tr}\,D_k(z)I_{Q_n^c\cap A_{nk}}\Big]
=O(1/n),
\]
we conclude that the first term in the expansion of $n\delta_{nQ}(z)$ is bounded.

By a similar argument, one can prove that the second term of the expansion of $n\delta_{nQ}(z)$ is uniformly bounded on $\gamma_{mv}$. Therefore, we have

Proposition 4.2. $\oint_{\gamma_{mv}}f_m(z)\,n\big(Es_n(z)I_{Q_n}-s(z)\big)\,dz\to 0$ in probability as $n\to\infty$.

Therefore, to complete the proof of (4.5), we only need to show

Proposition 4.3. $\oint_{\gamma_{mv}}f_m(z)\,n\big(s_n(z)I_{Q_n}-Es_n(z)I_{Q_n}\big)\,dz\to 0$ in probability as $n\to\infty$.

5 Convergence of $\Delta-E\Delta$

Let $\mathcal F_k=\sigma(x_{ij},\,k+1\le i,j\le n)$ for $0\le k\le n$ and $E_k(\cdot)=E(\cdot\,|\,\mathcal F_k)$. Based on the decreasing filtration $(\mathcal F_k)$, we have the following well-known martingale decomposition of
\[
n[s_n(z)-Es_n(z)]=\mathrm{tr}\,D(z)-E\,\mathrm{tr}\,D(z).
\]
At first, we note that
\[
P(R_{33}\ne 0)\le P\Big(\bigcup_k A_{nk}^c\Big)\le P(A_n^c)\to 0.
\]

Next, we note that $R_{32}$ is a sum of martingale differences. Thus, we have

When $z\in\gamma_{mv}$ and $A_{nk}$ occurs, we have $|n^{-1}\mathrm{tr}\,D_k(z)D_k^*(z)|\le\eta_0^{-2}$. Also, $z+n^{-1}\mathrm{tr}\,D_k(z)\to z+s(z)$ uniformly. Further, $|z+s(z)|$ has a positive lower bound on $\gamma_{mv}$. These facts, together with (5.15), imply that
\[
R_{32}\to 0\quad\text{in probability.}
\]

The proof of Proposition 4.3 is the same as those for $R_{32}\to 0$ and $R_{33}\to 0$; we omit the details. Note that when $z\in\gamma_{mh}$,
\[
E q_k(z)=0,\qquad E\frac{q_k(z)}{z+n^{-1}\mathrm{tr}\,D_k(z)}=0.
\]

Taylor expansion of the log function implies
\[
\begin{aligned}
R_{31}&=\frac{1}{2\pi i}\sum_{k=1}^{n}\oint_{\gamma_{mh}}(E_{k-1}-E_k)
\Big[\frac{q_k(z)}{z+n^{-1}\mathrm{tr}\,D_k(z)}
+O\Big(\frac{q_k(z)}{z+n^{-1}\mathrm{tr}\,D_k(z)}\Big)^2\Big]f_m'(z)\,dz\\
&=\frac{1}{2\pi i}\sum_{k=1}^{n}\oint_{\gamma_{mh}}E_{k-1}\frac{q_k(z)}{z+n^{-1}\mathrm{tr}\,D_k(z)}\,f_m'(z)\,dz+o_p(1)\\
&:=\frac{1}{2\pi i}\sum_{k=1}^{n}Y_{nk}+o_p(1),
\end{aligned}
\]
where the $o_p(1)$ follows from Condition 5.1 below. Clearly $Y_{nk}\in\mathcal F_{k-1}$ and $E_kY_{nk}=0$. Hence $\{Y_{nk},\,k=1,2,\dots,n\}$ is a martingale difference sequence and $\sum_{k=1}^{n}Y_{nk}$ is a sum of martingale differences. To save notation, we still use $G_n(f_m)$ to denote $\frac{1}{2\pi i}\sum_{k=1}^{n}Y_{nk}$ from now on. In order to apply the CLT to $G_n(f_m)$, we need to check the following two conditions:

Condition 5.1. Conditional Lyapunov condition. For some $p>2$,
\[
\sum_{k=1}^{n}E_k|Y_{nk}|^p\longrightarrow 0\quad\text{in probability.}
\]

Condition 5.2. Conditional covariance. Since the $Y_{nk}$ are complex, we need to show that $U^2=\sum_{k=1}^{n}E_kY_{nk}^2$ converges to a constant limit to guarantee convergence to a complex normal. For simplicity, we may consider two functions $f,g\in C^4(\mathcal U)$ and show that $\mathrm{Cov}_k[G_n(f_m),G_n(g_m)]$ converges in probability, where $f_m$ and $g_m$ denote their Bernstein polynomial approximations. That is,
\[
\mathrm{Cov}_k[G_n(f_m),G_n(g_m)]\to C(f,g)\quad\text{in probability.}
\]

The proof of Condition 5.1 with $p=3$.
\[
\sum_{k=1}^{n}E_k|Y_{nk}|^3=\sum_{k=1}^{n}E_k\Big|\oint_{\gamma_{mh}}E_{k-1}\frac{q_k(z)}{z+n^{-1}\mathrm{tr}\,D_k(z)}\,f_m'(z)\,dz\Big|^3.
\]

Since $f_m'(z)$ is bounded on $\gamma_{mh}$, it suffices to prove

where we have used the facts that

Then the above gives

The proof of Condition 5.2. The conditional covariance is

since $f_m'(t)\to f'(t)$ and $g_m'(t)\to g'(t)$ uniformly on $[-2,2]$.

For $Q_2$, since $s(z_1)s(z_2)f_m'(z_1)g_m'(z_2)$ is bounded on $\gamma_{mh}\times\gamma_{mh}$, in order to prove $Q_2\to 0$ in probability it is sufficient to prove that $\Gamma_n(z_1,z_2)$ converges in probability to $\Gamma(z_1,z_2)$ uniformly on $\gamma_{mh}\times\gamma_{mh}$. Decompose $\Gamma_n(z_1,z_2)$ as

\[
\begin{aligned}
\Gamma_n(z_1,z_2)&=\sum_{k=1}^{n}E_k\big[E_{k-1}q_k(z_1)\,E_{k-1}q_k(z_2)\big]\\
&=\sum_{k=1}^{n}E_k\Big[\frac{x_{kk}^2}{n}
+\frac{1}{n^2}E_{k-1}\big[\alpha_k^*D_k(z_1)\alpha_k-\mathrm{tr}\,D_k(z_1)\big]
\times E_{k-1}\big[\alpha_k^*D_k(z_2)\alpha_k-\mathrm{tr}\,D_k(z_2)\big]\Big]\\
&=\sigma^2+\frac{\kappa}{n^2}\sum_{k=1}^{n}\sum_{i,j>k}[E_{k-1}D_k(z_1)]_{ij}[E_{k-1}D_k(z_2)]_{ji}
+\frac{\beta}{n^2}\sum_{k=1}^{n}E_k\sum_{i>k}[E_{k-1}D_k(z_1)]_{ii}[E_{k-1}D_k(z_2)]_{ii}\\
&:=\sigma^2+S_1+S_2.
\end{aligned}
\]

Since $E|[D_k(z)]_{ii}-s(z)|^2\le M/(nv^4)$ uniformly on $\gamma_{mh}$,
\[
E\big|E_k[E_{k-1}D_k(z_1)]_{ii}[E_{k-1}D_k(z_2)]_{ii}-s(z_1)s(z_2)\big|\le\frac{M}{nv^4}
\]
uniformly on $\gamma_{mh}\times\gamma_{mh}$. Hence
\[
E\Big|S_2-\frac{\beta}{2}s(z_1)s(z_2)\Big|\le\frac{M}{nv^4}.
\]

In the following, we consider the limit of $S_1$. As proposed in [6], we use the following decomposition. Let
\[
e_j=(0,\dots,1,\dots,0)'\in\mathbb R^{n-1},\qquad j=1,2,\dots,k-1,k+1,\dots,n,
\]
whose $j$th (or $(j-1)$th) element is 1 and the rest are 0, if $j<k$ (or $j>k$). Then
\[
D_k^{-1}(z)=\frac{1}{\sqrt n}W_n^{(k)}-zI_{n-1}=\sum_{i,j\ne k}\frac{1}{\sqrt n}x_{ij}e_ie_j^*-zI_{n-1},
\qquad
zD_k(z)+I=\sum_{i,j\ne k}\frac{1}{\sqrt n}x_{ij}e_ie_j^*D_k(z).
\]
We introduce the matrix
\[
D_{kij}=\Big[\frac{1}{\sqrt n}\big(W_n^{(k)}-\delta_{ij}(x_{ij}e_ie_j^*+x_{ji}e_je_i^*)\big)-zI_{n-1}\Big]^{-1}.
\]
From the formula $A^{-1}-B^{-1}=B^{-1}(B-A)A^{-1}$, we have
\[
D_k-D_{kij}=-D_{kij}\frac{1}{\sqrt n}\delta_{ij}(x_{ij}e_ie_j^*+x_{ji}e_je_i^*)D_k.
\]
From the above, we get
\[
\begin{aligned}
zD_k(z)&=-I+\sum_{i,j\ne k}n^{-1/2}x_{ij}e_ie_j^*D_{kij}(z)
-\sum_{i,j\ne k}n^{-1/2}x_{ij}e_ie_j^*D_{kij}(z)\,n^{-1/2}\delta_{ij}(x_{ij}e_ie_j^*+x_{ji}e_je_i^*)D_k(z)\\
&=-I+\sum_{i,j\ne k}n^{-1/2}x_{ij}e_ie_j^*D_{kij}(z)
-s(z)\frac{n-3/2}{n}\sum_{i\ne k}e_ie_i^*D_k(z)\\
&\quad-\sum_{i,j\ne k}\delta_{ij}\Big[n^{-1}|x_{ij}|^2\big([D_{kij}(z)]_{jj}-s(z)\big)+n^{-1}\big(|x_{ij}|^2-1\big)s(z)\Big]e_ie_i^*D_k(z)\\
&\quad-\sum_{i,j\ne k}n^{-1}\delta_{ij}x_{ij}^2[D_{kij}]_{ji}e_ie_j^*D_k(z).
\end{aligned}
\]

Therefore
\[
\begin{aligned}
z_1\sum_{i,j>k}[E_{k-1}D_k(z_1)]_{ij}[E_{k-1}D_k(z_2)]_{ji}
&=-\sum_{i>k}[E_{k-1}D_k(z_2)]_{ii}\\
&\quad+n^{-1/2}\sum_{i,j,l>k}x_{il}[E_{k-1}D_{kil}(z_1)]_{lj}[E_{k-1}D_k(z_2)]_{ji}\\
&\quad-s(z_1)\frac{n-3/2}{n}\sum_{i,j>k}[E_{k-1}D_k(z_1)]_{ij}[E_{k-1}D_k(z_2)]_{ji}\\
&\quad-\sum_{\substack{i,j>k\\ l\ne k}}\delta_{il}E_{k-1}\Big[\Big(\frac{|x_{il}|^2-1}{n}s(z_1)
+\frac{|x_{il}|^2}{n}\big([D_{kil}(z_1)]_{ll}-s(z_1)\big)\Big)[D_k(z_1)]_{ij}\Big][E_{k-1}D_k(z_2)]_{ji}\\
&\quad-\frac1n\sum_{\substack{i,j>k\\ l\ne k}}\delta_{il}E_{k-1}\Big[x_{il}^2[D_{kil}(z_1)]_{li}[D_k(z_1)]_{lj}\Big][E_{k-1}D_k(z_2)]_{ji}\\
&=T_1+T_2+T^*+T_3+T_4.
\end{aligned}
\]

First note that the term $T^*$ is proportional to the left-hand side. We now evaluate the contributions of the remaining four terms to the sum $S_1$.

The term $T_1\to s(z_2)$ in $L_2$ since
\[
E\Big|n^{-2}\sum_{k}\sum_{i>k}\big([E_{k-1}D_k(z_2)]_{ii}-s(z_2)\big)\Big|^2\le\frac{M}{nv^4},
\]
by $E|[D_k(z)]_{ii}-s(z)|^2\le M/(nv^4)$.

The terms $T_3$ and $T_4$ are negligible; the calculations are lengthy but simple, and we omit the details. $T_2$ cannot be ignored, and we shall simplify it step by step. Split $T_2$ as

\[
\begin{aligned}
T_2&=\sum_{i,j,l>k}E_{k-1}\Big[\frac{x_{il}}{\sqrt n}[D_{kil}(z_1)]_{lj}\Big][E_{k-1}D_k(z_2)]_{ji}\\
&=\sum_{i,j,l>k}E_{k-1}\Big[\frac{x_{il}}{\sqrt n}[D_{kil}(z_1)]_{lj}\Big][E_{k-1}(D_k-D_{kil})(z_2)]_{ji}
+\sum_{i,j,l>k}E_{k-1}\Big[\frac{x_{il}}{\sqrt n}[D_{kil}(z_1)]_{lj}\Big][E_{k-1}D_{kil}(z_2)]_{ji}\\
&=T_{2a}+T_{2b}.
\end{aligned}
\]

Again, $T_{2b}$ can be shown to be negligible. As for the remaining term $T_{2a}$, we have
\[
\begin{aligned}
T_{2a}&=n^{-1/2}\sum_{i,j,l>k}E_{k-1}\Big[x_{il}[D_{kil}(z_1)]_{lj}\Big][E_{k-1}(D_k-D_{kil})(z_2)]_{ji}\\
&=-n^{-1}\sum_{i,j,l>k}E_{k-1}\Big[[D_{kil}(z_1)]_{lj}\big[D_{kil}(z_2)\delta_{il}\big(x_{il}^2e_ie_l^*+|x_{il}|^2e_le_i^*\big)D_k(z_2)\big]_{ji}\Big]\\
&=J_1+J_2,
\end{aligned}
\]
where
\[
\begin{aligned}
E|J_1|&=n^{-1}\sum_{i,j,l>k}E\big|x_{il}^2[E_{k-1}D_{kil}(z_1)]_{lj}[D_{kil}(z_2)]_{ji}[D_k(z_2)]_{li}\big|\\
&\le n^{-1}\Big(\sum_{i,j_1,j_2,l>k}E|x_{il}|^4\big|[E_{k-1}D_{kil}(z_1)]_{lj_1}\big|^2\big|[D_{kil}(z_2)]_{j_2i}\big|^2
\times\sum_{i,l>k}E\big|[D_k(z_2)]_{li}\big|^2\Big)^{1/2}\\
&\le Mn^{1/2}v^{-3}.
\end{aligned}
\]

Hence, the contribution of this term is negligible. Finally, we have
\[
\begin{aligned}
J_2&=-\sum_{i,j,l>k}E_{k-1}\Big[\frac{|x_{il}|^2}{n}[D_{kil}(z_1)]_{lj}[E_{k-1}D_{kil}(z_2)]_{jl}[D_k(z_2)]_{ii}\Big]\\
&\simeq-s(z_2)\sum_{i,j,l>k}E_{k-1}\Big[\frac1n[D_{kil}(z_1)]_{lj}[E_{k-1}D_{kil}(z_2)]_{jl}\Big]\\
&\simeq-\frac{n-k}{n}\,s(z_2)\sum_{j,l>k}[E_{k-1}D_k(z_1)]_{lj}[E_{k-1}D_k(z_2)]_{jl},
\end{aligned}
\]
where the last approximation follows from
\[
\Big|\sum_{i,j,l>k}[D_{kil}(z_1)]_{lj}[E_{k-1}D_{kil}(z_2)]_{jl}
-(n-k)\sum_{j,l>k}[E_{k-1}D_k(z_1)]_{lj}[E_{k-1}D_k(z_2)]_{jl}\Big|\le Mn^{3/2}v^{-3}.
\]

Let us define
\[
X_k=\sum_{i,j>k}[E_{k-1}D_k(z_1)]_{ij}[E_{k-1}D_k(z_2)]_{ji}.
\]
Summarizing the estimates of all the terms $T_i$, we have proved that
\[
z_1X_k=-s(z_1)X_k-(n-k)s(z_2)-\frac{n-k}{n}s(z_2)X_k+r_k,
\]
where the residual term $r_k$ is of order $O(\sqrt n\,v^{-3})$ uniformly in $k=1,\dots,n$ and $z_1,z_2\in\gamma_m$. By $z_1+s(z_1)=-1/s(z_1)$, the above identity is equivalent to
\[
X_k=(n-k)s(z_1)s(z_2)+\frac{n-k}{n}s(z_1)s(z_2)X_k-s(z_1)r_k.
\]

Consequently,
\[
\frac{1}{n^2}\sum_{k=1}^{n}X_k
=\frac1n\sum_{k=1}^{n}\frac{\frac{n-k}{n}s(z_1)s(z_2)}{1-\frac{n-k}{n}s(z_1)s(z_2)}
-\frac{1}{n^2}\sum_{k=1}^{n}\frac{s(z_1)r_k}{1-\frac{n-k}{n}s(z_1)s(z_2)},
\]
which converges in probability to
\[
s(z_1)s(z_2)\int_0^1\frac{t}{1-ts(z_1)s(z_2)}\,dt
=-1-\big(s(z_1)s(z_2)\big)^{-1}\log\big(1-s(z_1)s(z_2)\big).
\]

As a conclusion, $\Gamma_n(z_1,z_2)$ converges in probability to
\[
\Gamma(z_1,z_2)=\sigma^2-\kappa-\frac{\kappa}{s(z_1)s(z_2)}\log\big(1-s(z_1)s(z_2)\big)+\frac12\beta s(z_1)s(z_2).
\]
The proof of Condition 5.2 is then complete.

Although we have completed the proof of Theorem 1.1, we summarize the main steps of the proof for the reader's convenience.

Proof of Theorem 1.1. In Section 2, the weak convergence of $G_n(f)$ is reduced to that of $G_n(f_m)$. Then, in Section 4, we show that this is equivalent to the weak convergence of
\[
\Delta=\frac{1}{2\pi i}\oint_{\gamma_m}f_m(z)\,n[s_n(z)-s(z)]\,dz.
\]

6 Appendix

Lemma 6.1. Under condition (1.2), we have
\[
\|EF_n-F\|=O(n^{-1/2}),\qquad \|F_n-F\|=O_p(n^{-2/5}),\qquad
\|F_n-F\|=O(n^{-2/5+\eta})\ \text{a.s., for any }\eta>0.
\]
This follows from Theorems 1.1, 1.2 and 1.3 in [5].

Lemma 6.2. Let $X=(x_1,\dots,x_n)^T$ have i.i.d. standardized (complex) entries with $Ex_i=0$ and $E|x_i|^2=1$, and let $C$ be an $n\times n$ (complex) matrix. Then, for any $p\ge 2$,
\[
E|X^*CX-\mathrm{tr}\,C|^p\le K_p\big[(E|x_1|^4\,\mathrm{tr}\,CC^*)^{p/2}+E|x_1|^{2p}\,\mathrm{tr}(CC^*)^{p/2}\big].
\]
This is Lemma 2.7 in [7].
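As a rough illustration of Lemma 6.2 (not from the paper; Gaussian entries, one fixed random $C$ and the case $p=2$ are arbitrary choices), the following Monte Carlo sketch checks that $E|X^*CX-\mathrm{tr}\,C|^2$ stays within a constant multiple of $E|x_1|^4\,\mathrm{tr}\,CC^*$.

```python
import numpy as np

# Monte Carlo look at the p = 2 case of Lemma 6.2 for real standard normal entries:
# E|X^T C X - tr C|^2 stays within a constant multiple of E|x_1|^4 * tr(C C^T).
rng = np.random.default_rng(3)
n = 300
C = rng.standard_normal((n, n))
ex4 = 3.0                                    # E x^4 for a standard normal entry
vals = []
for _ in range(2000):
    X = rng.standard_normal(n)
    vals.append(abs(X @ C @ X - np.trace(C)) ** 2)
print(np.mean(vals) / (ex4 * np.trace(C @ C.T)))   # an O(1) ratio, consistent with the lemma
```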

References

[1] Arnold, L. (1967). On the asymptotic distribution of the eigenvalues of random matrices. J. Math. Anal. Appl. 20, 262–268. MR0217833

[2] Arnold, L. (1971). On Wigner's semicircle law for the eigenvalues of random matrices. Z. Wahrsch. Verw. Gebiete 19, 191–198. MR0348820

[3] Bai, Z. D. (1993). Convergence rate of expected spectral distributions of large random matrices. Part I. Wigner matrices. Ann. Probab. 21, 625–648. MR1217559

[4] Bai, Z. D. (1999). Methodologies in spectral analysis of large dimensional random matrices, a review. Stat. Sin. 9, 611–677. MR1711663

[5] Bai, Z. D., Miao, B. and Tsay, J. (2002). Convergence rates of the spectral distributions of large Wigner matrices. Int. Math. J. 1, 65–90. MR1825933

[6] Bai, Z. D. and Yao, J. (2005). On the convergence of the spectral empirical process of Wigner matrices. Bernoulli 11, 1059–1092. MR2189081

[7] Bai, Z. D. and Silverstein, J. W. (1998). No eigenvalues outside the support of the limiting spectral distribution of large dimensional random matrices. Ann. Probab. 26, 316–345. MR1617051

[8] Bai, Z. D. and Silverstein, J. W. (2006). Spectral Analysis of Large Dimensional Random Matrices. Mathematics Monograph Series 2, Science Press, Beijing.

[9] Bai, Z. D. and Yin, Y. Q. (1988). Necessary and sufficient conditions for the almost sure convergence of the largest eigenvalue of Wigner matrices. Ann. Probab. 16, 1729–1741. MR0958213

[10] Costin, O. and Lebowitz, J. (1995). Gaussian fluctuations in random matrices. Physical Review Letters 75, 69–72.

[11] Horn, R. A. and Johnson, C. R. (1990). Matrix Analysis. Cambridge University Press. MR1084815

[13] Khorunzhy, A. M., Khoruzhenko, B. A. and Pastur, L. A. (1996). Asymptotic properties of large random matrices with independent entries. J. Math. Phys. 37, 5033–5060.

[14] Collins, B., Mingo, J. A., Sniady, P., and Speicher, R. (2007). Second order freeness and fluctuations of random matrices. III. Higher order freeness and free cumulants. Doc. Math. 12, 1–70 (electronic). MR2302524

[15] Mingo, J. A., Sniady, P., and Speicher, R. (2007). Second order freeness and fluctuations of random matrices. II. Unitary random matrices. Adv. Math. 209, no. 1, 212–240. MR2294222

[16] Mingo, J. A. and Speicher, R. (2006). Second order freeness and fluctuations of random matrices. I. Gaussian and Wishart matrices and cyclic Fock spaces. J. Funct. Anal. 235, no. 1, 226–270. MR2216446

[17] Pastur, L. A. and Lytova, A. (2009). Central limit theorem for linear eigenvalue statistics of random matrices with independent entries. Ann. Probab. 37, 1778–1840.

[18] Sinai, Y. and Soshnikov, A. (1998). Central limit theorem for traces of large random symmetric matrices with independent elements. Bol. Soc. Brasil. Mat. (N.S.) 29, 1–24. MR1620151
