Elect. Comm. in Probab. 5 (2000) 73–76

ELECTRONIC COMMUNICATIONS in PROBABILITY

A WEAK LAW OF LARGE NUMBERS

FOR THE SAMPLE COVARIANCE MATRIX

STEVEN J. SEPANSKI

Department of Mathematics, Saginaw Valley State University, 7400 Bay Road University Center, MI 48710

email: sepanski@svsu.edu

ZHIDONG PAN

Department of Mathematics, Saginaw Valley State University, 7400 Bay Road University Center, MI 48710

email: pan@svsu.edu

Submitted February 15, 1999; final version accepted March 20, 2000.

AMS 1991 Subject classification: Primary 60F05; secondary 62E20, 62H12

Law of large numbers, affine normalization, sample covariance, central limit theorem, domain of attraction, generalized domain of attraction, multivariate t-statistic

Abstract

In this article we consider the sample covariance matrix formed from a sequence of independent and identically distributed random vectors from the generalized domain of attraction of the multivariate normal law. We show that this sample covariance matrix, appropriately normalized by a nonrandom sequence of linear operators, converges in probability to the identity matrix.

1. Introduction

Let X, X_1, X_2, … be iid R^d-valued random vectors with L(X) full. The condition of fullness is the multivariate analogue of nondegeneracy and will be in force throughout this article. It means that L(X) is not concentrated on any (d−1)-dimensional hyperplane. Equivalently, ⟨X, θ⟩ is nondegenerate for every nonzero θ. Here ⟨·, ·⟩ denotes the inner product.

Throughout this article all vectors in R^d are assumed to be column vectors. For any matrix A, A^t denotes its transpose. Let X̄_n = (1/n) Σ_{i=1}^n X_i. We denote and define the sample covariance matrix by

    C_n = (1/n) Σ_{i=1}^n (X_i − X̄_n)(X_i − X̄_n)^t.

That C_n has a unique nonnegative symmetric square root, denoted by C_n^{1/2}, follows from the fact that

    ⟨C_n θ, θ⟩ = (1/n) Σ_{i=1}^n ⟨X_i − X̄_n, θ⟩² ≥ 0,

so that C_n is nonnegative. Also, C_n is clearly symmetric. However, there is no guarantee that C_n is invertible with probability one.
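As a numerical sketch (an illustration added here, not part of the original article), the sample covariance matrix and its unique nonnegative symmetric square root can be computed from the spectral decomposition, since C_n = Q diag(λ) Q^t gives C_n^{1/2} = Q diag(√λ) Q^t:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.standard_normal((n, d))  # iid R^d-valued sample (illustrative choice of law)

# Sample covariance matrix C_n = (1/n) * sum_i (X_i - Xbar)(X_i - Xbar)^t
Xbar = X.mean(axis=0)
Cn = (X - Xbar).T @ (X - Xbar) / n

# Unique nonnegative symmetric square root via the spectral decomposition:
# C_n = Q diag(lam) Q^t  =>  C_n^{1/2} = Q diag(sqrt(lam)) Q^t
lam, Q = np.linalg.eigh(Cn)
Cn_half = Q @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ Q.T

# C_n is symmetric and nonnegative, and C_n^{1/2} squares back to C_n
assert np.allclose(Cn, Cn.T)
assert np.all(lam > -1e-12)
assert np.allclose(Cn_half @ Cn_half, Cn)
```

The clipping of the eigenvalues at zero only guards against tiny negative values from floating-point roundoff; in exact arithmetic C_n is nonnegative as shown above.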

In [3] we describe two ways to circumvent the problem of the lack of invertibility of C_n. One such approach is to define

    B_n = C_n   if C_n is invertible,
          I     otherwise.                (1.3)

The success of this approach relies on the fact that if L(X) is in the Generalized Domain of Attraction of the Normal Law (see (1.6) below for the definition), then P(C_n = B_n) → 1 (see [3], Lemma 5). In light of this, we will assume without loss of generality that C_n is invertible.

L(X) is said to be in the Generalized Domain of Attraction (GDOA) of the Normal Law if there exist matrices A_n and vectors v_n such that

    A_n Σ_{i=1}^n X_i − v_n ⇒ N(0, I).    (1.6)

One construction of A_n is such that A_n is invertible, symmetric, and diagonalizable. See Hahn and Klass [2].
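For orientation, here is the classical finite-variance special case (an illustration added here, not part of the original text): if E‖X‖² < ∞ with mean μ and covariance Σ, which fullness makes nonsingular, then one admissible choice in (1.6) is

```latex
A_n = (n\Sigma)^{-1/2} = n^{-1/2}\,\Sigma^{-1/2},
\qquad
v_n = n A_n \mu = \sqrt{n}\,\Sigma^{-1/2}\mu,
```

since then A_n Σ_{i=1}^n X_i − v_n = Σ^{−1/2} · n^{−1/2} Σ_{i=1}^n (X_i − μ) ⇒ N(0, I) by the ordinary central limit theorem. This A_n is invertible and symmetric, consistent with the construction cited above.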

The main result is Theorem 1 below. This result was shown in Sepanski [5]; however, there the proof was based on a highly technical comparison of the eigenvalues and eigenvectors of C_n and A_n, and was essentially real valued. The purpose of this note is to give a more efficient proof that is operator theoretic and multivariate in nature. For more details, we refer the interested reader to the original article. In particular, Sepanski [5] contains a more complete list of references.

2. Results

Theorem 1: If the law of X is in the generalized domain of attraction of the multivariate normal law, then

    √n A_n C_n^{1/2} → I in pr.
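A Monte Carlo sanity check of the theorem in the finite-variance special case (an added illustration, assuming the admissible choice A_n = (nΣ)^{−1/2}, under which √n A_n C_n^{1/2} = Σ^{−1/2} C_n^{1/2}):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20000, 2

# Finite-variance special case: take A_n = (n*Sigma)^{-1/2}, so that
# sqrt(n) * A_n * C_n^{1/2} = Sigma^{-1/2} C_n^{1/2}, which should be near I.
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
L = np.linalg.cholesky(Sigma)
X = rng.standard_normal((n, d)) @ L.T  # iid N(0, Sigma) sample

Xbar = X.mean(axis=0)
Cn = (X - Xbar).T @ (X - Xbar) / n     # sample covariance matrix

def sym_power(M, p):
    """Symmetric matrix power M^p via the spectral decomposition (M positive definite)."""
    lam, Q = np.linalg.eigh(M)
    return Q @ np.diag(lam ** p) @ Q.T

An = sym_power(n * Sigma, -0.5)               # A_n = (n*Sigma)^{-1/2}
T = np.sqrt(n) * An @ sym_power(Cn, 0.5)      # sqrt(n) A_n C_n^{1/2}

# For large n this is close to the identity matrix, as Theorem 1 asserts
assert np.linalg.norm(T - np.eye(d)) < 0.1
```

The deviation from I shrinks at rate roughly n^{−1/2}, reflecting the fluctuations of C_n around Σ.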

Proof: Let P_n(ω) denote the empirical measure; that is, P_n(ω)(A) = (1/n) Σ_{i=1}^n I[X_i(ω) ∈ A], where I is the indicator function. For each ω ∈ Ω let X*_1, …, X*_n be iid with law P_n(ω). Sepanski [4], Theorem 2, shows that under the hypothesis of GDOA,

    A_n Σ_{i=1}^n (X*_i − X̄_n) ⇒ N(0, I) in probability.

Sepanski [3], Theorem 1, shows that under the hypothesis of GDOA,

    (nC_n)^{−1/2} Σ_{i=1}^n (X*_i − X̄_n) ⇒ N(0, I) in probability.

These two results, together with the multivariate Convergence of Types theorem of Billingsley [1], imply that

    (nC_n)^{−1/2} = B_n R_n A_n,    (1)

where B_n → I in pr. and the R_n are (random) orthogonal. The proof of Theorem 1 is thereby reduced to showing that R_n → I in pr. However, convergence in probability is equivalent to every subsequence having a further subsequence which converges almost surely. This reduces the proof to a pointwise result about the behavior of the linear operators.

Write A_n = Q_n D_n Q_n^t, where Q_n is orthogonal and D_n is diagonal with nonincreasing diagonal entries. Let P_n = Q_n^t R_n Q_n and K_n = Q_n^t B_n Q_n, so that K_n P_n D_n = Q_n^t (nC_n)^{−1/2} Q_n is positive and symmetric. Moreover,

    ‖K_n − I‖ = ‖Q_n^t B_n Q_n − Q_n^t Q_n‖ ≤ ‖B_n − I‖ → 0.

By Lemma 2 below, P_n → I along the subsequence, and hence R_n = Q_n P_n Q_n^t → I, which proves the theorem.


Lemma 2: Let P_n be orthogonal. Let D_n = diag(λ_{n1}, …, λ_{nd}) be diagonal such that λ_{n1} ≥ λ_{n2} ≥ … ≥ λ_{nd} > 0. Suppose K_n → I. If K_n P_n D_n is positive and symmetric for every n, then P_n → I.

Proof: Given a subsequence of P_n, we show that there is a further subsequence along which P_n → I. Let E_n = λ_{n1}^{−1} D_n. This is a diagonal matrix whose entries are all positive and bounded above by 1. Therefore, given any subsequence, there is a further subsequence along which K_n → I, P_n → P, and E_n → E. Necessarily, P is orthogonal and E is diagonal with entries in [0, 1]. Furthermore, E has at least one diagonal entry equal to 1 and its entries are nonincreasing. Since K_n P_n E_n is symmetric and nonnegative and K_n → I, we have that P E = E P^t, and P E is symmetric and nonnegative. Write

    E = ( E^(1)  0 )
        ( 0      0 ),

where E^(1) is an m×m invertible diagonal matrix with m < d (if m = d, then E is invertible and the argument below gives P = I at once). Next, write

    P = ( P^(1)  P^(2) )
        ( P^(3)  P^(4) )

conformably. From P^(1) E^(1) = E^(1) and the invertibility of E^(1), we have that P^(1) = I_m. Similarly, from P^(3) E^(1) = 0 we have that P^(3) = 0. Therefore,

    P = ( I_m  P^(2) )
        ( 0    P^(4) ),

and the orthogonality of P forces P^(2) = 0 and P^(4) to be orthogonal.

The proof continues inductively. Let K_n^(4), P_n^(4), E_n^(4) be the (2,2) blocks of K_n, P_n, E_n, respectively. P_n^(4) may not be orthogonal, but P^(4) is. Apply the previous argument to K_n^(4) P_n^(4) P^(4)t · P^(4) E_n^(4). Note that K_n^(4) P_n^(4) P^(4)t → I · P^(4) P^(4)t = I, so that we may apply the argument with K_n^(4) P_n^(4) P^(4)t as the new K_n in the induction step. Since the matrices are all finite dimensional, the argument terminates after finitely many steps, yielding P = I.


REFERENCES

[1] Billingsley, P. (1966). Convergence of types in k-space. Z. Wahrsch. Verw. Gebiete 5, 175–179.

[2] Hahn, M. G. and Klass, M. J. (1980). Matrix normalization of sums of random vectors in the domain of attraction of the multivariate normal. Ann. Probab. 8, 262–280.

[3] Sepanski, S. J. (1993). Asymptotic normality of multivariate t and Hotelling's T^2 statistics under infinite second moments via bootstrapping. J. Multivariate Analysis 49, 41–54.

[4] Sepanski, S. J. (1994). Necessary and sufficient conditions for the multivariate bootstrap of the mean. Statistics and Probability Letters 19, No. 3, 205–216.
