
ELECTRONIC COMMUNICATIONS in PROBABILITY

A LAW OF THE ITERATED LOGARITHM FOR THE SAMPLE COVARIANCE MATRIX

STEVEN J. SEPANSKI

Department of Mathematical Sciences, Saginaw Valley State University, University Center, MI 48710

Email: sepanski@svsu.edu

Submitted 5 August 2002, final version accepted 24 January 2003.

AMS 2000 subject classifications: Primary 60F15; secondary 60F05.

Keywords: Law of the iterated logarithm, central limit theorem, generalized domain of attraction, extreme values, operator normalization, self normalization, t statistic.

Abstract

For a sequence of independent identically distributed Euclidean random vectors, we prove a law of the iterated logarithm for the sample covariance matrix when $o(\log\log n)$ terms are omitted. The result is proved under the hypothesis that the random vectors belong to the generalized domain of attraction of the multivariate Gaussian law. As an application, we obtain a bounded law of the iterated logarithm for the multivariate t-statistic.

Introduction:

Let $X, X_1, X_2, \dots$ be independent identically distributed (iid) random column vectors taking values in Euclidean space $\mathbb{R}^d$. We use the standard inner product $\langle\cdot,\cdot\rangle$ and induced norm. Assume that the law of $X$ is full; that is, the law of $X$ is not supported on any $(d-1)$-dimensional hyperplane of $\mathbb{R}^d$. We say that the law of $X$, or merely $X$, is in the generalized domain of attraction of the multivariate Gaussian law (GDOAG) if there exist operators $T_n$ and vectors $b_n$ such that $T_nS_n - b_n \Rightarrow N(0, I)$. Here $S_n = \sum_{i=1}^n X_i$, $I$ is the identity on $\mathbb{R}^d$, $N(0, I)$ is the standard Gaussian distribution on $\mathbb{R}^d$, and $\Rightarrow$ denotes weak convergence. Since the standard Gaussian law is an affine transformation of any other Gaussian law, there is no loss of generality in assuming the limit is standard. Also, if the law of $X$ is in the GDOAG then $E\|X\| < \infty$ and so, changing $T_n$ accordingly, one can take $b_n = nEX$; see [11], Proposition 8.1.6. Therefore, there is no loss of generality in assuming that $EX = 0$. This will be assumed throughout the article. If the law of $X$ is in the GDOAG, we then have that there exist nonsingular operators $T_n$ such that

$$T_nS_n \Rightarrow N(0, I). \tag{1}$$

In order to simplify the presentation we extend the definition of the normalizing operators to noninteger values. Define $T_x = T_{[x]}$, where $[\cdot]$ denotes the greatest integer function.


Meerschaert [10] uses regular variation in $\mathbb{R}^d$ to characterize GDOAs. The characterization using regular variation will be central to our approach. The main result we will use from Meerschaert [10] is that the norming operators in (1) can be chosen to satisfy

$$\lim_{x\to\infty}\big\|T_{bx}T_x^{-1} - b^{-1/2}I\big\| = 0 \tag{2}$$

if the law of $X$ belongs to the GDOAG. Moreover, the convergence is uniform over compact subsets of $(0,\infty)$ (Meerschaert [10], Theorem 3.1).
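To see what (2) says in a familiar special case, consider the finite second moment situation discussed later in this Introduction, where one may take $T_n = (nC)^{-1/2}$ with $C = E(XX^T)$; the following verification is an added illustration, not part of the original argument. With $T_x = T_{[x]}$,

$$T_{bx}T_x^{-1} = \big([bx]\,C\big)^{-1/2}\big([x]\,C\big)^{1/2} = \Big(\frac{[x]}{[bx]}\Big)^{1/2} I \longrightarrow b^{-1/2}I \qquad (x\to\infty),$$

and the convergence is uniform for $b$ in compact subsets of $(0,\infty)$, as required.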

In this paper, we investigate the law of the iterated logarithm behavior of the sample covariance matrix $C_n = \sum_{i=1}^n X_iX_i^T$. It is shown that if $X$ is in the GDOAG, there exist $T_n$ satisfying (1) and a set $B_n \subset \{1, 2, \dots, n\}$ with cardinality $r_n = o(\log\log n)$ such that

$$\frac{T_{n/\log\log n}\,(C_n - R_n)^{1/2}}{\sqrt{\log\log n}} \to I \quad \text{a.s.}$$

Here $(\cdot)^{1/2}$ is the symmetric square root and $R_n = \sum_{i\in B_n}X_iX_i^T$.

Moreover, from this one may obtain a self-normalized bounded LIL. In particular, we will show that if $X$ is in the GDOAG, then

$$\limsup_n\left\|\frac{C_n^{-1/2}S_n}{\sqrt{2\log\log n}}\right\| = 1 \quad \text{a.s.}$$

These results are multivariate analogues of some univariate results obtained by Griffin and Kuelbs [7] and Giné and Mason [6].

Results:

For $t > 0$, let $Lt = \max(1, \log_e t)$ and let $L_2t = LLt$. Also, let $\alpha(t) = t/L_2t$. For $x \in \mathbb{R}^d$ and $A \subset \mathbb{R}^d$, define the distance from $x$ to $A$ to be $d(x, A) = \inf_{y\in A}\|x - y\|$. For a sequence of vectors $\{x_n\}$, denote the cluster set of $\{x_n\}$ by $C(\{x_n\}) = \{y : \liminf\|x_n - y\| = 0\}$. Write $x_n \twoheadrightarrow A$ if $d(x_n, A) \to 0$ and $C(\{x_n\}) = A$. Define $t_n = \frac{n}{L_2n}$. Let $\bar B = \{x \in \mathbb{R}^d : \|x\| \le 1\}$.
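To illustrate the notation $x_n \twoheadrightarrow A$ with an added example (not in the original): the real sequence $x_n = (-1)^n(1 - 1/n)$ satisfies

$$d\big(x_n, \{-1, 1\}\big) = \tfrac{1}{n} \to 0 \qquad\text{and}\qquad C(\{x_n\}) = \{-1, 1\},$$

so $x_n \twoheadrightarrow \{-1, 1\}$: the sequence both stays close to the set and clusters on all of it.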

We first describe what we mean by an extreme value. Assuming that $X$ is in the GDOAG with $EX = 0$, let $T_n$ be as in (1) and (2). For each $n$, rearrange $X_1, \dots, X_n$ as $X_n(1), \dots, X_n(n)$ so that $\|T_{t_n}X_n(1)\| \ge \|T_{t_n}X_n(2)\| \ge \cdots \ge \|T_{t_n}X_n(n)\|$. Ties may be broken in any arbitrary manner. Next, define $S_n^{(r)} = S_n - \sum_{i=1}^r X_n(i)$ and $S_n^{(0)} = S_n$. Finally, for any sequence $r_n > 0$ define $R_n = \sum_{i=1}^{r_n}X_n(i)X_n(i)^T$. If $r_n = 0$, define $R_n = 0$.
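For orientation (an added remark, not in the original): when $d = 1$ the operator $T_{t_n}$ is simply a nonzero scalar, so the ordering above is just the ordering of $|X_1|, \dots, |X_n|$ in decreasing size, and

$$R_n = \sum_{i=1}^{r_n}X_n(i)^2 = \text{the sum of the } r_n \text{ largest values among } X_1^2, \dots, X_n^2,$$

so that $C_n - R_n$ removes the $r_n$ most extreme squared observations from $C_n = \sum_{i=1}^n X_i^2$.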

Following are the statements of the main results contained in this article. Proofs and technical lemmas are contained in the subsequent section.

Theorem 1: Let $X$ be in the GDOAG with $EX = 0$. There exist operators $T_n$, satisfying (1) and (2), and scalars $r_n = o(L_2n)$ such that

$$\frac{T_{t_n}(C_n - R_n)T_{t_n}^*}{L_2n} \to I \quad \text{a.s.}$$

If we can pass the operator square root through the limit in Theorem 1, we will obtain the following corollary. To do so requires the operators $T_n$ to be positive and symmetric. The proof of Theorem 1 uses operators chosen to satisfy (2), and these need not be positive and symmetric. On the other hand, the operators of Hahn and Klass [8] are positive and symmetric, but may not satisfy (2). Therefore, we do not include this condition in the statement of the corollary. Billingsley's multivariate convergence of types theorem allows us to switch back and forth between the two approaches to normalizing operators, utilizing whichever property we need, regular variation or symmetry.

Corollary 2: Let $X$ be in the GDOAG with $EX = 0$. There exist positive symmetric operators $A_n$ satisfying (1) and scalars $r_n = o(L_2n)$ such that

$$\frac{A_{t_n}(C_n - R_n)^{1/2}}{\sqrt{L_2n}} \to I \quad \text{a.s.}$$

From Sepanski [15], where a trimmed LIL was proved with the operators $T_n$, and from Corollary 2 above, we will obtain the following self-normalized LIL.

Corollary 3: Let $X$ be in the GDOAG with $EX = 0$. There exist $r_n = o(L_2n)$ such that

$$\frac{(C_n - R_n)^{-1/2}S_n^{(r_n)}}{\sqrt{2L_2n}} \twoheadrightarrow \bar B \quad \text{a.s.}$$

Theorem 4: If $X$ is in the GDOAG with $EX = 0$, then

$$\limsup_n\left\|\frac{C_n^{-1/2}S_n}{\sqrt{2\log\log n}}\right\| = 1 \quad \text{a.s.}$$

Theorem 4 stands in comparison to the real-valued Theorem 1 of Griffin and Kuelbs [7]. There they show that if $X$ is in the domain of attraction of a Gaussian law on $\mathbb{R}$, then

$$\frac{\sum_{i=1}^n(X_i - EX_i)}{\sqrt{2L_2n\sum_{i=1}^n X_i^2}} \twoheadrightarrow [-1, 1] \quad \text{a.s.}$$

To see why results such as Theorem 1 and its corollaries may be true, consider first the case where $E\|X\|^2 < \infty$. In this case, the covariance matrix $C = E(XX^T)$ exists. Moreover, by the classical case of the Central Limit Theorem, one may take $T_n = (nC)^{-1/2}$ in (1) and (2). In that case,

$$\frac{T_{t_n}C_n^{1/2}}{\sqrt{L_2n}} = C^{-1/2}\Big(\frac{C_n}{n}\Big)^{1/2} \to I \quad \text{a.s.}$$

by the Strong Law of Large Numbers. Hence Corollary 2 holds with $r_n \equiv 0$, and therefore $R_n \equiv 0$. From this, and the classical LIL, one may infer the conclusions of the other three results.
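The same computation, written for the normalization appearing in Theorem 1, is sketched below; this is an added verification under the finite second moment assumption (with $r_n \equiv 0$ and ignoring the integer part in $T_{t_n} = T_{[t_n]}$, so that $t_nL_2n = n$), not part of the original text:

$$\frac{T_{t_n}C_nT_{t_n}^*}{L_2n} = \frac{(t_nC)^{-1/2}\,C_n\,(t_nC)^{-1/2}}{L_2n} = C^{-1/2}\,\frac{C_n}{n}\,C^{-1/2} \longrightarrow C^{-1/2}CC^{-1/2} = I \quad \text{a.s.},$$

again by the Strong Law of Large Numbers, since $C_n/n \to C$ a.s.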

The main impetus of this article is to extend the results to the case where $X$ is in the GDOAG but $E\|X\|^2 = \infty$. Here, accounting for the influence of extreme terms is paramount. Although there may be extreme terms, they are few in number; in fact, they are on the order of $o(L_2n)$. Also, note that Theorem 4 contains no trimming. Heuristically, this is because the two types of trimming, in the partial sum and in the normalizing operator, essentially cancel each other out (see the proof of Theorem 4).


Proofs: We begin by constructing the sequence $\{r_n\}$. This construction mirrors that of Kuelbs and Ledoux [9]. Gaussian convergence guarantees that $\xi_n \downarrow 0$. By basic calculus,

$$\xi_n\,L\!\Big(\frac{\xi_n}{\Gamma(\delta t_n)}\Big) \to \infty, \qquad \forall\,\delta > 0.$$

Replacing $\xi_n$ by anything larger does not alter the second of these properties, so we may assume, without loss of generality, that $\xi_n \downarrow 0$ and $\xi_n L\!\big(\frac{\xi_n}{\Gamma(\delta t_n)}\big) \to \infty$. Having defined the sequence $r_n$, we now proceed to the proof of Theorem 1.

Proof of Theorem 1: Fix $a > 1$, $\rho > 0$. Let $n_k = [a^k]$ and $I_k = (n_k, n_{k+1}]$. For $1 \le j \le n_{k+1}$, define the random matrices

$$u_j = u_j(k, a, \rho) = X_jX_j^T\,I[\|T_{t_{n_k}}X_j\| \le \rho],$$
$$w_j = w_j(k, a, \rho) = X_jX_j^T\,I[\|T_{t_{n_k}}X_j\| > \rho].$$

Next, for $n \in I_k$ define the random truncated covariance matrices

$$U_n = U_n(a, \rho) = \sum_{j=1}^n\big(u_j - Eu_j\big), \qquad W_n = W_n(a, \rho) = \sum_{j=1}^n w_j.$$

We analyze each of the sequences according to the following Lemmas. Note that

$$C_n - R_n = U_n + W_n - R_n + n\,E\,XX^T I[\|T_{t_{n_k}}X\| \le \rho].$$

Note that the left side here no longer depends on $\rho$ and $v$. First, we let $v = \sqrt\rho$, and then let

Intersecting over $\theta$ and $\phi$ in a countable dense subset of the unit sphere yields, for every $a > 1$,

lim sup

Finally, we need to eliminate the dependence on $a > 1$ by eliminating the subsequence $n_k$ from the left side. This essentially follows from (2).

First, observe that since $t_n$ is eventually increasing we have that for $n \in I_k$, $t_{n_k} \le t_n \le t_{n_{k+1}}$. Therefore, dividing through by $t_{n_k}$, we have that $1 \le \frac{t_n}{t_{n_k}} \le \frac{t_{n_{k+1}}}{t_{n_k}}$. The latter converges to $a$ and therefore is eventually bounded by $2a$. Hence, for large $k$, $\big\{\frac{t_n}{t_{n_k}}\big\}_{n\in I_k} \subset [1, 2a]$. Applying (2) over this compact set yields

$$\max_{n\in I_k}\Big\|T_{t_n}T_{t_{n_k}}^{-1} - \Big(\frac{t_n}{t_{n_k}}\Big)^{-1/2}I\Big\| \le \sup_{b\in[1,2a]}\big\|T_{bt_{n_k}}T_{t_{n_k}}^{-1} - b^{-1/2}I\big\| \to 0.$$

From this we obtain

lim sup

Putting the three facts outlined above together yields

lim sup

To complete the proof of Theorem 1 (subject to Lemmas 5-7), let $a \downarrow 1$ through a countable set.

Proof of Lemma 5: The proof uses classical blocking arguments and the Borel-Cantelli Lemma along the subsequence $n_k$. A maximal inequality (Ottaviani's) allows us to pass to the full sequence. Fix unit vectors $\theta, \phi$. Let $\epsilon > 0$, $\xi > 0$ be given. Apply Ottaviani's inequality (see, for example, Dudley [3], p. 251) to obtain

P( max

case, at least for sufficiently large $k$, we apply Chebychev's inequality.

P( |

This converges to zero as $k$ goes to infinity, due to the fact that the first factor does, while the second factor converges to $a$ and the third factor is bounded by the truncated variance condition of Gaussian convergence.

We will show that the last series in (3) is finitely summable over $k$. In order to do so, we require another Lemma. The lemma is based on an inequality of Pruitt [12]. For iid random vectors $Y_j$ and unit vectors $\theta, \phi$, define $Z_j(\theta, \phi) = \langle Y_j, \theta\rangle\langle Y_j, \phi\rangle I[\|Y_j\| \le \rho]$. Also, write

. Also, $1 + t \le e^t$ for all $t$. Dropping subscripts, applying these inequalities to the random variable $Z(\theta, \phi)$ (note that $|Z(\theta, \phi)| \le \rho^2$), taking expectations, and using Jensen's inequality,

Below, we apply Markov's inequality in the second step and the previous inequality in the last step. We have the following.

Take $t = nEZ(\theta, \phi) + nu\rho^2e^{u\rho}$

by the basic convergence criteria for a standard Gaussian limit (see [11], Cor. 3.3.12). Using the above, and the fact that $n_{k+1}/n_k \to a$, we have that for all $\xi > 0$, and for sufficiently large

Let the right hand side of this inequality be $\epsilon$. Continuing the string of inequalities started in (3), by containment we then have that, for sufficiently large $k$,

P( |

This last series is finitely summable. Therefore, recalling the definition of $\epsilon$, the above display, (3), and the Borel-Cantelli Lemma yield that for all $\xi > 0$, $\rho > 0$, $a > 1$, $v > 0$, and for all unit vectors $\theta, \phi$,

Now let $\xi \downarrow 0$. Note that the left side does not depend on $\xi$. This completes the proof of Lemma 5.

Proof of Lemma 6: Recall that $X_n(1), \dots, X_n(n)$ is an ordering of $X_1, \dots, X_n$ under the operator $T_{t_n}$. Because in Lemma 6 we are normalizing by the operator $T_{t_{n_k}}$, we must also consider an ordering of $X_1, \dots, X_n$ under the operator $T_{t_{n_k}}$. It is possible that the two orderings are different. However, due to (2), for $n \in I_k$ they are not significantly different. To this end, for $n \in I_k$ rearrange $X_1, \dots, X_n$ as $X_n'(1), \dots, X_n'(n)$ so that $\|T_{t_{n_k}}X_n'(1)\| \ge \|T_{t_{n_k}}X_n'(2)\| \ge \cdots \ge \|T_{t_{n_k}}X_n'(n)\|$.

We now try to bound each of the two terms in (4). For $n \in I_k$, let $p_n = \sum_{i=1}^n I[\|T_{t_{n_k}}X_i\| > \rho]$, and let $q_n = \sum_{i=1}^n I[\|T_{t_n}X_i\| > \rho a^{-1}]$. Note that $p_n$ and $q_n$ depend on $k$ and are random. By (2), observe that

$$\max_{n\in I_k}\big\|T_{t_{n_k}}T_{t_n}^{-1}\big\| \le \max_{n\in I_k}\Big\|T_{t_{n_k}}T_{t_n}^{-1} - \Big(\frac{t_{n_k}}{t_n}\Big)^{-1/2}I\Big\| + \max_{n\in I_k}\Big(\frac{t_{n_k}}{t_n}\Big)^{-1/2} \to \sqrt a.$$

Therefore, for sufficiently large $k$, it can be made less than $a$ (since $a > \sqrt a$). We assume that this holds for the rest of the proof. Therefore $p_n \le q_n$ for $k$ sufficiently large. Assume, for the moment, that $\max_{n\in I_k}q_n \le r_{n_k}$. Note that since $r_n \uparrow$, this also implies that $p_n \le q_n \le r_n$.

Observe that if $X_i \notin \{X_n'(1), \dots, X_n'(p_n)\}$ then $\|T_{t_{n_k}}X_i\| \le \rho$. Otherwise we would have $\|T_{t_{n_k}}X_n'(1)\| \ge \|T_{t_{n_k}}X_n'(2)\| \ge \cdots \ge \|T_{t_{n_k}}X_n'(p_n)\| \ge \|T_{t_{n_k}}X_i\| > \rho$, contrary to the definition of $p_n$. Moreover, $\|T_{t_{n_k}}X_n'(i)\| > \rho$ for $i = 1, \dots, p_n$. Similarly, if $X_i \notin \{X_n(1), \dots, X_n(q_n)\}$ then $\|T_{t_n}X_i\| \le \rho/a$, and $\|T_{t_n}X_n(i)\| > \rho/a$ for $i = 1, \dots, q_n$.

Consider the first term in (4). If at most $r_n$ of $X_1, \dots, X_{n_{k+1}}$ satisfy $\|T_{t_{n_k}}X_i\| > \rho$, then since $n \le n_{k+1}$, at most $r_n$ of $X_1, \dots, X_n$ satisfy $\|T_{t_{n_k}}X_i\| > \rho$. Such summands will then appear in both $\sum_{j=1}^n T_{t_{n_k}}X_jX_j^TT_{t_{n_k}}^*I[\|T_{t_{n_k}}X_j\| > \rho]$ and $\sum_{j=1}^{r_n}T_{t_{n_k}}X_n'(j)X_n'(j)^TT_{t_{n_k}}^*$. They therefore cancel out in subtraction. If there are any other such summands, they appear only in the latter, not the former, because they will be zeroed out by the indicator. By the preceding paragraph, such summands satisfy $\|T_{t_{n_k}}X_iX_i^TT_{t_{n_k}}^*\| = \|T_{t_{n_k}}X_i\|^2 \le \rho^2$. Furthermore, there are at most $r_n - p_n \le r_n$ such summands. Therefore,

$$\Big\|\sum_{j=1}^n T_{t_{n_k}}X_jX_j^TT_{t_{n_k}}^*I[\|T_{t_{n_k}}X_j\| > \rho] - \sum_{j=1}^{r_n}T_{t_{n_k}}X_n'(j)X_n'(j)^TT_{t_{n_k}}^*\Big\| \le r_n\rho^2. \tag{5}$$

To handle the second term in (4) we have to consider the possibility that $X_1, \dots, X_n$ may be ordered differently under $T_{t_n}$ and $T_{t_{n_k}}$. However, as noted, by (2) the orderings are not significantly different.

By the definition of $p_n$ and $q_n$, we have that for $n$ sufficiently large, $\{X_n'(1), \dots, X_n'(p_n)\} \subset \{X_n(1), \dots, X_n(q_n)\}$. If not, $X_n'(i) \notin \{X_n(1), \dots, X_n(q_n)\}$ for some $i = 1, \dots, p_n$. In that case, as observed above, $\|T_{t_n}X_n'(i)\| \le \rho/a$, so $\|T_{t_{n_k}}X_n'(i)\| \le \rho$. On the other hand, since $i \le p_n$, as observed above, $\|T_{t_{n_k}}X_n'(i)\| > \rho$. A contradiction.

Because of this containment, there is cancellation in the second term in (4). Indeed,

$$\begin{aligned}
&\sum_{j=1}^{r_n}T_{t_{n_k}}X_n(j)X_n(j)^TT_{t_{n_k}}^* - \sum_{j=1}^{r_n}T_{t_{n_k}}X_n'(j)X_n'(j)^TT_{t_{n_k}}^* \\
&\qquad= \sum_1 T_{t_{n_k}}X_jX_j^TT_{t_{n_k}}^* - \sum_{j=p_n+1}^{r_n}T_{t_{n_k}}X_n'(j)X_n'(j)^TT_{t_{n_k}}^* + \sum_{j=q_n+1}^{r_n}T_{t_{n_k}}X_n(j)X_n(j)^TT_{t_{n_k}}^*.
\end{aligned}$$

Here, $\sum_1$ extends over all $j$ such that $X_j \in \{X_n(1), \dots, X_n(q_n)\}\setminus\{X_n'(1), \dots, X_n'(p_n)\}$. However, for any such $X_j$, $\|T_{t_{n_k}}X_j\| \le \rho a^{-1} < \rho$. Furthermore, the number of such possible $j$ is $q_n - p_n \le r_n$. In the third summation, each summand is bounded because for $j > q_n$, $\|T_{t_{n_k}}X_n(j)\| \le \rho$. The number of such $j$ is $r_n - q_n \le r_n$. For the middle term, if $j > p_n$, then $\|T_{t_{n_k}}X_n'(j)\| \le \rho$. The number of such $j$ is $r_n - p_n \le r_n$. Therefore,

$$\Big\|\sum_{j=1}^{r_n}T_{t_{n_k}}X_n(j)X_n(j)^TT_{t_{n_k}}^* - \sum_{j=1}^{r_n}T_{t_{n_k}}X_n'(j)X_n'(j)^TT_{t_{n_k}}^*\Big\| \le 3r_n\rho^2. \tag{6}$$

Summarizing, by combining (4)-(6), and since $r_n \le r_{n_{k+1}}$, for $n \in I_k$ we have that

Our goal is now to show that the quantity in (7) is finitely summable. By (2), observe that

max

Therefore, for sufficiently large $k$, it can be made less than $a$ (since $a > 1$). Then,

max

The proof is now reduced to showing that

Σ

The probability here is Binomial, so we need a bound on probabilities $P(B \ge \alpha + 1)$, where

The last inequality here follows from the fact that $(1 + \frac{x}{n})^n \le e^x$ for all $x, n$.

Substituting this into the Binomial bound from Feller yields

P(B ≥ α + 1) ≤ (e/√(2π)) exp( n(α + 1)

The last inequality follows from the fact that $\sqrt{\alpha}/(\alpha + 1) \to 0$, and so is eventually bounded above by one. Next, let δ = ρa26. By regular variation of $T_t$, (2), we have that ‖T_{t_{n_k}}T^{-1}

Organizing the preceding paragraphs shows that the series in (8) is, for some constant $C > 0$ which may increase from one line to the next,

≤ C Σ

Utilizing the previous paragraph in (9), for some constants $C > 0$ which may increase from one inequality to the next,

This converges as long as $M > 2$. This shows the series in (7) converges and, since $\epsilon > 0$ was arbitrary, this completes the proof of Lemma 6.

Proof of Lemma 7: First, we would like to replace the $t_n$ with $t_{n_k}$. This can be done via the triangle inequality and $t_n \uparrow$. Also, note that the quantity in the expectation is nothing but $u = u(k, a, \rho)$.

as $k \to \infty$, as we will show presently. The other two being obvious, we must show that the second factor of the first term goes to zero. However, this follows from "Gaussian" convergence. Indeed, Sepanski [16] shows that under mean 0 and GDOAG, $T_nC_nT_n^* \to I$ in probability. Considering this as a triangular array $\{T_nX_iX_i^TT_n^*\}$ of random elements in the space of operators from $\mathbb{R}^d$ to $\mathbb{R}^d$, we have that $T_nC_nT_n^*$ is shift convergent to the zero operator when centered by $I$. On the other hand (see Araujo and Giné, Theorem 5.9), if the triangular array is shift convergent it can be centered by the truncated means. That is, $T_nC_nT_n^* - nE\,T_nXX^TT_n^*I[\|T_nXX^TT_n^*\| \le \delta]$ is convergent to the zero operator for all $\delta > 0$. Replace $\delta$ with $\rho^2$ and apply convergence of types to yield $nE\,T_nXX^TT_n^*I[\|T_nXX^TT_n^*\| \le \rho^2] - I \to 0$. Taking square roots inside the indicator ($\|T_nXX^TT_n^*\| = \|T_nX\|^2$) and applying this along the subsequence $t_{n_k}$ completes the proof of Lemma 7.

Proof of Corollary 2: Let $T_{t_n}$ be as in Theorem 1. Let $A_n$ be the normalizing sequence constructed by Hahn and Klass. In particular, the key property we will need is that the $A_n$ satisfy (1) and are positive and symmetric. Since both $A_n$ and $T_n$ satisfy (1), we may apply Billingsley's multivariate Convergence of Types Theorem [2] to conclude that there exist orthogonal transformations $P_n$ and a sequence of linear operators $B_n \to I$ such that $T_n = B_nP_nA_n$. From this it follows that Theorem 1 holds for the sequence $A_n$, except that perhaps $A_n$ may not satisfy (2). However, the almost sure limit does hold as claimed in the corollary. Indeed, $A_{t_n}T_{t_n}^{-1} = P_{t_n}^{-1}B_{t_n}^{-1}$, and the limit follows since the outside factors converge to the identity and the inside factor converges to the identity almost surely by Theorem 1. Since the $A_{t_n}$ are positive and symmetric and since $C_n - R_n$ is positive and symmetric as well,

$$\frac{A_{t_n}(C_n - R_n)^{1/2}}{\sqrt{L_2n}} \to I \quad \text{a.s.}$$

Proof of Corollary 3: In Sepanski [15] it was shown that

$$\frac{T_{t_n}S_n^{(r_n)}}{\sqrt{2L_2n}} \twoheadrightarrow \bar B \quad \text{a.s.}$$

Hence, it holds with $T_{t_n}$ replaced by the Hahn and Klass sequence of operators $A_{t_n}$. Now combine with Corollary 2 to obtain the result.

Proof of Theorem 4: The point here is to eliminate the trimming from Corollary 3, both from the normalizing operator, and from the partial sum. This can be done since the two types of trimming, or lack thereof, basically cancel each other out. First, we prove a simple result relating the norms of the trimmed operator to the untrimmed operator.

Lemma 9: Let $v_1, \dots, v_n$ be vectors in $\mathbb{R}^d$. Suppose $S \subset \{1, \dots, n\}$. Define $A = \sum_{i=1}^n v_iv_i^T$ and $B = \sum_{i\in S}v_iv_i^T$. Assume that $A$ is invertible. Then $\|A^{-1/2}B^{1/2}\| \le 1$.

Proof: First, clearly $A$ and $B$ are nonnegative and symmetric, therefore each has a nonnegative and symmetric square root. We denote the unit sphere by $S^{d-1}$.

$$\begin{aligned}
\big\|A^{-1/2}B^{1/2}\big\|^2 &= \big\|(A^{-1/2}B^{1/2})(A^{-1/2}B^{1/2})^*\big\| \\
&= \big\|A^{-1/2}BA^{-1/2}\big\| \\
&= \sup_{\theta\in S^{d-1}}\big\langle A^{-1/2}BA^{-1/2}\theta,\ \theta\big\rangle \\
&= \sup_{\theta\in S^{d-1}}\big\langle BA^{-1/2}\theta,\ A^{-1/2}\theta\big\rangle \\
&= \sup_{\theta\in S^{d-1}}\Big\langle BA^{-1/2}\Big(\frac{A^{1/2}\theta}{\|A^{1/2}\theta\|}\Big),\ A^{-1/2}\Big(\frac{A^{1/2}\theta}{\|A^{1/2}\theta\|}\Big)\Big\rangle \\
&= \sup_{\theta\in S^{d-1}}\frac{\langle B\theta, \theta\rangle}{\|A^{1/2}\theta\|^2} \\
&= \sup_{\theta\in S^{d-1}}\frac{\sum_{i\in S}\langle v_i, \theta\rangle^2}{\sum_{i=1}^n\langle v_i, \theta\rangle^2} \le 1.
\end{aligned}$$
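In one dimension, Lemma 9 reduces to the following added sanity check (not part of the original proof): with scalars $v_1, \dots, v_n$,

$$\big\|A^{-1/2}B^{1/2}\big\| = \sqrt{\frac{\sum_{i\in S}v_i^2}{\sum_{i=1}^n v_i^2}} \le 1,$$

which is simply the statement that a subsum of nonnegative terms is at most the full sum.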

From Lemma 9 we conclude that $\|C_n^{-1/2}(C_n - R_n)^{1/2}\| \le 1$. Combining this with Corollary 3, we obtain

$$\frac{C_n^{-1/2}S_n^{(r_n)}}{\sqrt{2L_2n}} \twoheadrightarrow C \subset \bar B \quad \text{a.s.} \tag{10}$$

Of course, $C$ is nonrandom due to the zero-one law.

Next, we show that, up to a set of measure zero, $\frac{C_n^{-1/2}S_n^{(r_n)}}{\sqrt{2L_2n}}$ and $\frac{C_n^{-1/2}S_n}{\sqrt{2L_2n}}$ have the same cluster sets. We achieve this by showing the difference goes to zero with probability one. Using the notation of Lemma 9, but denoting $b = \sum_{i\in S}v_i$, and applying a similar argument, we see that

$$\begin{aligned}
\|A^{-1/2}b\| &= \sup_{\theta\in S^{d-1}}\big|\langle A^{-1/2}b,\ \theta\rangle\big| \\
&= \sup_{\theta\in S^{d-1}}\frac{|\langle b, \theta\rangle|}{\|A^{1/2}\theta\|} \\
&= \sup_{\theta\in S^{d-1}}\frac{\big|\sum_{i\in S}\langle v_i, \theta\rangle\big|}{\big(\sum_{i=1}^n\langle v_i, \theta\rangle^2\big)^{1/2}} \\
&\le \sup_{\theta\in S^{d-1}}\frac{\sum_{i\in S}|\langle v_i, \theta\rangle|}{\big(\sum_{i=1}^n\langle v_i, \theta\rangle^2\big)^{1/2}} \\
&\le \sup_{\theta\in S^{d-1}}\frac{\sqrt{\#S}\,\big(\sum_{i\in S}\langle v_i, \theta\rangle^2\big)^{1/2}}{\big(\sum_{i=1}^n\langle v_i, \theta\rangle^2\big)^{1/2}} \\
&\le \sqrt{\#S}.
\end{aligned}$$

The second to last inequality follows from Cauchy-Schwarz. Here $\#S$ denotes the cardinality of $S$. Applying the above to $A = C_n$ and $b = \sum_{i=1}^{r_n}X_n(i)$ yields $\|C_n^{-1/2}(S_n - S_n^{(r_n)})\| \le \sqrt{r_n}$.

From this we conclude that

$$\Big\|\frac{C_n^{-1/2}(S_n - S_n^{(r_n)})}{\sqrt{2L_2n}}\Big\| \le \sqrt{\frac{r_n}{2L_2n}} \to 0, \tag{11}$$

since $r_n = o(L_2n)$.

(10) and (11) combine to yield

$$\frac{C_n^{-1/2}S_n}{\sqrt{2L_2n}} \twoheadrightarrow C \subset \bar B \quad \text{a.s.} \tag{12}$$

This implies the upper bound in Theorem 4. To show the lower bound in Theorem 4, we appeal to Theorem 1 of Griffin and Kuelbs [7]. First, observe that by Cauchy-Schwarz,

$$|\langle S_n, \theta\rangle| = |\langle C_n^{1/2}C_n^{-1/2}S_n,\ \theta\rangle| = |\langle C_n^{-1/2}S_n,\ C_n^{1/2}\theta\rangle| \le \|C_n^{-1/2}S_n\|\,\|C_n^{1/2}\theta\| = \|C_n^{-1/2}S_n\|\Big(\sum_{i=1}^n\langle X_i, \theta\rangle^2\Big)^{1/2}.$$

Dividing through this inequality by $\sqrt{2L_2n}\big(\sum_{i=1}^n\langle X_i, \theta\rangle^2\big)^{1/2}$ yields, for any unit vector $\theta$,

$$1 \le \limsup\frac{|\langle S_n, \theta\rangle|}{\sqrt{2L_2n}\big(\sum_{i=1}^n\langle X_i, \theta\rangle^2\big)^{1/2}} \le \limsup\frac{\|C_n^{-1/2}S_n\|}{\sqrt{2L_2n}} \le 1 \quad \text{a.s.}$$

The first inequality holds by Griffin and Kuelbs [7], Theorem 1, which is applicable due to the fact that if $X$ is in the GDOA of the multivariate Gaussian law then, for any $\theta$, $\langle X, \theta\rangle$ is in the DOA of the univariate Gaussian law. The last inequality holds by (12).

REFERENCES

[1] Araujo, A. and Giné, E. (1980) The Central Limit Theorem for Real and Banach Valued Random Variables. John Wiley and Sons, New York.

[2] Billingsley, P. (1966) Convergence of types in k-space. Z. Wahrsch. Verw. Gebiete 5, 175-179.

[3] Dudley, R.M. (1989) Real Analysis and Probability. Chapman and Hall, New York.

[4] Feller, W. (1968) An Introduction to Probability Theory and its Applications, Volume 1. Third Edition. John Wiley and Sons, New York.

[5] Feller, W. (1968) An extension of the law of the iterated logarithm to variables without variance. J. Math. Mech. 18, 343-355.

[6] Giné, E. and Mason, D.M. (1998) On the LIL for self-normalized sums of iid random variables.

[7] Griffin, P.S. and Kuelbs, J.D. (1989) Self-normalized laws of the iterated logarithm. Ann. Probab. 17, 1571-1601.

[8] Hahn, M.G. and Klass, M.J. (1980) Matrix normalization of sums of random vectors in the domain of attraction of the multivariate normal. Ann. Probab. 8, 262-280.

[9] Kuelbs, J. and Ledoux, M. (1987) Extreme values and the law of the iterated logarithm. Probab. Theory Related Fields 74, 319-340.

[10] Meerschaert, M. (1994) Norming operators for generalized domains of attraction. J. Theoret. Probab. 7, 793-798.

[11] Meerschaert, M.M. and Scheffler, H.P. (2001) Limit Theorems for Sums of Independent Random Vectors. Wiley, New York.

[12] Pruitt, W.E. (1981) General one-sided laws of the iterated logarithm. Ann. Probab. 9, 1-48.

[13] Sepanski, S. (1999) A compact law of the iterated logarithm for random vectors in the generalized domain of attraction of the multivariate Gaussian law. J. Theoret. Probab. 12, 757-778.

[14] Sepanski, S. (2001) Extreme values and the multivariate compact law of the iterated logarithm. J. Theoret. Probab. To appear.

[15] Sepanski, S. (2001) A law of the iterated logarithm for multivariate trimmed sums. Preprint.

[16] Sepanski, S. (1994) Asymptotics for multivariate t statistic and Hotelling's T2 statistic.
