Journal of Econometrics, Vol. 99, Issue 2, December 2000
On estimation and testing goodness of fit for m-dependent stable sequences

Rohit S. Deo*

8-57 MEC, 44 West 4th Street, New York, NY 10012, USA

Received 5 June 2000; accepted 25 June 2000

Abstract

A class of estimators of the characteristic index of m-dependent stable sequences is proposed. The estimators are shown to be consistent and asymptotically normal. In addition, a class of goodness-of-fit tests for stability is also obtained. The performance of the estimators and the goodness-of-fit tests is evaluated through a simulation study. One of the estimators is shown to have a reasonably high relative efficiency which is uniformly superior to that of the regression estimator, which is currently the most widely used. Our results have significance for modelling financial data like stock returns, which have thick-tailed distributions and exhibit non-linear behaviour. © 2000 Elsevier Science S.A. All rights reserved.

JEL classification: C12; C13; C14

Keywords: Thick-tailed distributions; Characteristic index; U-statistics

1. Introduction

Since the seminal papers by Mandelbrot (1963) and Fama (1965), stable distributions have been proposed as models for data from such diverse areas as economics, astronomy, physics and finance. A prime obstacle in the estimation

* Tel.: +1-212-998-0469; fax: +1-212-995-4003. E-mail address: [email protected] (R.S. Deo).


of stable distributions has been the lack of closed-form analytical expressions for the densities of all but a few members of the family of stable distributions. Series expansions for the densities have been obtained (Feller, 1971, Vol. II, Chapter XVII), but these are too complex to be of any practical use. As a result, stable distributions are defined via their characteristic functions, which are given by

φ(t) = exp{iδt − c|t|^α [1 − iβ sgn(t) w(t, α)]}   (1)

where w(t, α) = tan(πα/2) if α ≠ 1, w(t, α) = −(2/π) log|t| if α = 1, and the parameters satisfy the conditions 0 < α ≤ 2, −1 ≤ β ≤ 1, c > 0, and δ is real. The parameters c and δ are measures of scale and location, respectively. The parameter β is a measure of skewness, the density function being symmetric around δ when β = 0. The density is right skewed when β > 0 and left skewed when β < 0. The parameter α is called the characteristic exponent (or index) and governs the tail behaviour of the distribution. The smaller the value of α, the thicker the tail of the distribution. As a matter of fact, the stable distribution only has moments of order r < α, except when α = 2, in which case all moments exist (Feller, 1971, Chapter XVII, Theorem 1). When α = 2, the normal distribution is obtained with mean δ and variance 2c, and the parameter β becomes redundant.

There are several alternative representations of the characteristic function of a stable distribution, obtained by various reparametrisations of expression (1), which have been used by researchers. We will use expression (1) throughout since it seems to have been the one used most often (McCulloch, 1986).
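For concreteness, expression (1) is easy to evaluate numerically. The sketch below is a direct transcription of (1), not code from the paper; it checks that α = 2 recovers the normal characteristic function with mean δ and variance 2c.

```python
import numpy as np

def stable_cf(t, alpha, beta=0.0, c=1.0, delta=0.0):
    """Characteristic function (1): exp{i*delta*t - c|t|^alpha [1 - i*beta*sgn(t)*w(t,alpha)]}."""
    t = np.asarray(t, dtype=float)
    if alpha != 1.0:
        w = np.tan(np.pi * alpha / 2.0)            # w(t, alpha) for alpha != 1
    else:
        with np.errstate(divide="ignore"):         # w(t, 1) = -(2/pi) log|t|; set 0 at t = 0
            w = np.where(t == 0.0, 0.0, -(2.0 / np.pi) * np.log(np.abs(t)))
    return np.exp(1j * delta * t
                  - c * np.abs(t) ** alpha * (1.0 - 1j * beta * np.sign(t) * w))

# alpha = 2: phi(t) = exp(i*delta*t - c t^2), the N(delta, 2c) characteristic function
t = np.linspace(-3.0, 3.0, 7)
assert np.allclose(stable_cf(t, alpha=2.0), np.exp(-t**2))
```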

Maximum likelihood estimation of the stable distribution parameters is problematic due to the lack of tractable forms for the density functions of stable distributions. Alternative estimators which have been proposed include the regression based estimators of Koutrouvelis (1980) and the quantile method estimators of McCulloch (1986).

The most promising estimator of α seems to be the regression estimator of Koutrouvelis (1980). See Koutrouvelis (1980) and Akgiray and Lamoureux (1989). Koutrouvelis and Bauer (1982) have shown consistency and asymptotic normality when the estimator is computed from a regression with a fixed number of points and without random standardisation of the data. However, the regression estimator used in simulation studies (Koutrouvelis, 1980; Akgiray and Lamoureux, 1989) is one computed from standardised data and involves a regression in which the number of points used depends on the sample size and the value of α. Thus, the asymptotic normality result of Koutrouvelis and Bauer (1982) cannot be used for inference in such cases.

distributions but are computationally tedious. Furthermore, they all require the specification of some constants, whose choice will affect the size and power of the tests. There does not seem to have been any study so far about the choice of these constants.

In this paper, we consider the twin issues of estimating the characteristic exponent α and of testing a null hypothesis which includes the composite hypothesis of a stable distribution. We propose a class of estimators of α and corresponding studentised statistics, all of which are asymptotically normal. In addition, a class of formal goodness-of-fit test statistics is proposed, which asymptotically have a χ² distribution. Moreover, the asymptotic theory for all these estimators and test statistics is valid for m-dependent stable sequences. This generalisation, which incorporates dependence in the observed series without imposing a linear structure, is important from the point of view of modelling financial time series like stock returns, which show non-linear behaviour. Furthermore, the estimators and test statistics do not require knowledge of m. It should be noted that our theory holds only for fixed and finite values of m. Hence, it would not hold for processes with an infinite amount of dependence, like GARCH processes, which have been used to model returns. We also note that some of the current estimation procedures for stable distributions estimate all the parameters, while our procedure estimates only α.

The layout of the paper is as follows. In Section 2, we suggest a family of estimators of α and in Section 3 give their asymptotic behaviour. In Section 4, feasible versions of our estimators are described and their asymptotic distributions obtained. A consistent estimator of the asymptotic variance covariance matrix of the family of estimators is suggested in Section 5 and a class of goodness-of-fit tests obtained from it. The performance of our estimators and the goodness-of-fit tests based on them is evaluated in Section 6 through Monte Carlo studies. We summarize the results of the paper in Section 7. All proofs are relegated to the Appendix.

2. Estimating the tail index

Assume that X_1, …, X_n is a stationary m-dependent time series with a marginal distribution function F_X(·) characterised by the characteristic function φ(t) of (1) and where m is a fixed and finite number. In this section, we define a class of estimators for the characteristic exponent α.

Let k and s be positive integers such that n > 2k > 2s and let

I_k ≡ (i_1 < ⋯ < i_2k) ⊂ (1, …, n),   I_s ≡ (j_1 < ⋯ < j_2s) ⊂ (i_1, …, i_2k).   (2)

The set I_k consists of a collection of 2k integers between 1 and n, while the set I_s consists of a subcollection of 2s integers taken from I_k. Using the fact that the characteristic function of a sum of independent random variables is the product of the individual characteristic functions, from (1) we can see that for c = α⁻¹,

We will find it convenient to work with the new parameter c. From (3) it follows that log|X_i|

Now, using the fact that for any two positive random variables Z and Y, E(log Z) − E(log Y) = E(log Z − log Y) = E[log(Z/Y)], we see that U*_{k,s} is an unbiased estimator of c when the elements of I_k satisfy (4). We show in Lemma A.3 in the Appendix that the expected value of U*_{k,s} is indeed finite. The fact that U*_{k,s} is an unbiased estimator of c now motivates us to consider U statistics based on the kernel function U*_{k,s} as estimators of c.
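The unbiasedness argument rests on the stability property: for i.i.d. symmetric stable variables, X_1 + ⋯ + X_k has the same distribution as k^{1/α} X_1, so a difference of expected log moduli recovers c = α⁻¹. The sketch below is our simplified Monte Carlo illustration of that idea using plain sums of single observations; the actual kernel U*_{k,s} uses differences of sums so that location and skewness cancel, which we assume away here.

```python
import numpy as np

rng = np.random.default_rng(0)

def c_hat_scaling(x, k):
    """Estimate c = 1/alpha from the stability property
    X_1 + ... + X_k  =_d  k^{1/alpha} X_1  for i.i.d. symmetric stable X_i,
    so E log|X_1 + ... + X_k| - E log|X_1| = (1/alpha) log k."""
    n = (len(x) // k) * k
    sums = x[:n].reshape(-1, k).sum(axis=1)   # non-overlapping blocks of k terms
    return (np.mean(np.log(np.abs(sums))) - np.mean(np.log(np.abs(x)))) / np.log(k)

# Gaussian case: N(0, sqrt(2)^2) is stable with alpha = 2 and c = 1 in (1),
# so the estimate should be close to c = 1/alpha = 0.5
x = rng.normal(scale=np.sqrt(2.0), size=400_000)
c_hat = c_hat_scaling(x, k=4)
```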

Let the sets I_k and I_s be defined as in (2) and let U_{k,s}(X_{i_1}, …, X_{i_2k}) be the symmetric version of U*_{k,s}(X_{i_1}, …, X_{i_2k}) (i.e. invariant to all permutations of its arguments). Note that since U_{k,s} is invariant to all permutations of its arguments, it will depend only on I_k, k and s but not on the set I_s ⊂ I_k. We now

ĉ_{k,s} is no longer unbiased for c due to the fact that we no longer impose condition (4) on the elements of I_k. However, it will be shown (see Theorem 1 below) that ĉ_{k,s} is asymptotically unbiased for c, due to the fact that

compared to the total number of terms. Though it is possible to construct an unbiased U statistic for c, such an estimator would require assuming knowledge of the degree of dependence through m, an assumption we would rather not make. Furthermore, such an unbiased U statistic would be computationally cumbersome, and so we will work with the asymptotically unbiased estimator. In the next section, we study the asymptotic behaviour of the class of estimators ĉ_{k,s}.

3. Asymptotic theory

While studying the asymptotic theory, we will concentrate on the class ĉ_{k,s_a}, a = 1, …, p, where 2s_1 < 2s_2 < ⋯ < 2s_p < 2k and k is fixed. We could also, of course, vary k but will not do so to avoid introducing more subscripts. We now define some quantities which are useful in studying the asymptotic behaviour of ĉ_{k,s_a}. Let

U_{k,s_a,1}(x) = E{U_{k,s_a}(X_{i_1}, …, X_{i_2k}) | X_{i_1} = x} − c,   a = 1, …, p,   (7)

where the elements of I_k satisfy (4). Also let

ζ_{1+h,ab} = E{U_{k,s_a,1}(X_{i_1}) U_{k,s_b,1}(X_{i_1+h})},   h = 0, …, m,   a, b = 1, …, p

and

ζ_{1,ab} = ζ_{1+0,ab} + 2 Σ_{h=1}^{m} ζ_{1+h,ab},   a, b = 1, …, p.   (8)

Furthermore, let

V_{j,a} = C(n−1, 2k−1)⁻¹ Σ_{C_j} U_{k,s_a}(X_{i_1}, …, X_{i_2k})   (9)

where C(n−1, 2k−1) denotes the binomial coefficient and the sum in (9) is over all the sets in I_k which contain the index j. We now state a theorem about the limiting behaviour of ĉ_{k,s}.

Theorem 1. Let the sequence X_1, …, X_n satisfy the assumptions of Section 2.

(i) If

E U²_{k,s}(X_{i_1}, …, X_{i_2k}) < ∞ for all I_k   (10)

then E{ĉ_{k,s_a}} = c + O(n⁻¹), a = 1, …, p.

(ii) If (10) holds then

(2k)⁻¹ √n (β̂_k − c1) →_D N(0, Σ),   (11)

where β̂_k = (ĉ_{k,s_1}, …, ĉ_{k,s_p})′, 1 = (1, …, 1)′ and the (a, b)th element of Σ is ζ_{1,ab} defined in (8).

(iii) If (10) holds then

E{V_{j,a} − U_{k,s_a,1}(X_j) − c}² = O(n⁻¹)   (12)

uniformly in j = 1, 2, …, n for all a = 1, …, p.

(iv) If m = 0 (i.e. the sequence {X_t} is independent), then the diagonal elements of Σ are strictly positive.

Remark 1. Using standard Taylor series arguments, one can establish a limiting normal distribution for the class of estimators of α defined by ĉ⁻¹_{k,s_a}.

Remark 2. The asymptotic variance covariance matrix Σ depends on the unknown quantities U_{k,s_a,1} + c, and result (12) shows that they are approximated in mean square by the observable sequence {V_{j,a}}. This result is useful in solving the problem of estimating Σ, which we discuss in Section 5 below.

In the following lemma, the proof of which is given in the Appendix, we provide examples of linear and non-linear m-dependent stable processes which satisfy condition (10) and for which Theorem 1 would hold.

Lemma 1. (i) Let {e_t} be a sequence of independent stable random variables with marginal distribution function G_e(·) characterised by the characteristic function in (1) and tail index α. Let X_t = e_t + Σ_{j=1}^{m} a_j e_{t−j}, where the a_j are constants. Then X_t is an m-dependent linear stable process with tail index α which satisfies (10).

(ii) Let {v_t} be a sequence of independent strictly positive stable random variables with tail index θ where 0 < θ < 1. Let {u_t} be a sequence of independent normal random variables with mean zero and variance σ². Let {u_t} be independent of the {v_t}. Define X_t = (v_t + Σ_{j=1}^{m} b_j v_{t−j})^{1/2} u_t, where the b_j are positive constants. Then X_t is a non-linear m-dependent process with a stable distribution and tail index 2θ, and satisfies (10).

The non-linear stable process defined in (ii) of Lemma 1 above is related to the non-linear process that was proposed by de Vries (1991) to model both conditional heteroscedasticity and thick tails in financial returns. However, de Vries (1991) allowed for an infinite amount of dependence in his model as opposed to the m-dependence here.
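For θ = 1/2 the positive stable variables in Lemma 1(ii) have a closed form: the Lévy distribution, obtainable as v = 1/Z² for Z standard normal. That gives a quick way to simulate the process; this sketch is our own illustration (function and parameter names are ours, not the paper's), and the resulting X_t has tail index 2θ = 1.

```python
import numpy as np

rng = np.random.default_rng(3)

def nonlinear_stable_ma(n, b, rng=rng):
    """Lemma 1(ii) with theta = 1/2: X_t = (v_t + sum_j b_j v_{t-j})^{1/2} u_t,
    where v_t = 1/Z_t^2 is positive stable with index 1/2 (Levy) and u_t ~ N(0,1)."""
    m = len(b)
    v = 1.0 / rng.normal(size=n + m) ** 2      # strictly positive stable, theta = 1/2
    u = rng.normal(size=n)                     # independent Gaussian innovations
    s = v[m:].copy()                           # v_t
    for j, bj in enumerate(b, start=1):
        s += bj * v[m - j:n + m - j]           # + b_j v_{t-j}
    return np.sqrt(s) * u                      # m-dependent, tail index 1

x = nonlinear_stable_ma(10_000, b=[0.5])       # a 1-dependent example
```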

Though we have defined a class of estimators for c (and α), these estimators are computationally infeasible. The number of terms which are averaged in the estimator (6) is of the order of n^2k. For example, a sample of size 100 and k = 2 (the smallest possible value for k if no assumptions are made about the symmetry and location parameters of the distribution of X_i) would require


4. Incomplete U statistics

As we noted above, a U statistic can be computationally infeasible even for moderate values of the sample size. However, note that in the average (6) which defines our class of estimators, there are also terms which have several arguments in common. Thus, these terms will be highly correlated and not provide a significant amount of new information, which suggests that they may be left out without increasing the asymptotic variance. We therefore will consider the class of estimators based on incomplete U statistics of the form

c̃_{k,s_a} = N⁻¹ Σ_D U_{k,s_a}(X_{i_1}, …, X_{i_2k}),   a = 1, …, p,   (13)

where the sum in (13) is now over a subset of N elements of the set S_{n,k} comprising the collection of all sets of the kind I_k. The set D is called the design of the incomplete U statistic and there are several ways in which it can be chosen. (See Section 4.3 of Lee (1990) for a detailed discussion of incomplete U statistics and choices of D.) Throughout this paper we will restrict our attention to the class defined by (13), where the N elements of D are chosen with replacement from S_{n,k}. Along with defining an incomplete estimator of c, we define an incomplete version of V_{j,a} as follows. Let

V′_{j,a} = (n / 2kN) Σ_{D_j} U_{k,s_a}(X_{i_1}, …, X_{i_2k})   (14)

where the sum in (14) is over all the elements of D which contain the integer j. We then have the following theorem for the limiting behaviour of c̃_{k,s_a} and V′_{j,a}.
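The design in (13) can be sketched in a few lines: draw N index sets with replacement and average a kernel over them. The kernel below is a generic placeholder (E|X_i − X_j|), not U_{k,s} itself, so this is only an illustration of the incomplete-design idea under our own naming.

```python
import numpy as np

rng = np.random.default_rng(1)

def incomplete_u(x, kernel, order, N):
    """Incomplete U statistic as in (13): average `kernel` over N index sets of
    size `order`, drawn with replacement from the increasing subsets of {0,...,n-1}."""
    n = len(x)
    total = 0.0
    for _ in range(N):
        idx = np.sort(rng.choice(n, size=order, replace=False))  # one random set I_k
        total += kernel(x[idx])
    return total / N

# toy check: the incomplete statistic approximates the complete U statistic
x = rng.normal(size=30)
pairs = [(i, j) for i in range(30) for j in range(i + 1, 30)]
complete = np.mean([abs(x[i] - x[j]) for i, j in pairs])          # all 435 pairs
approx = incomplete_u(x, lambda v: abs(v[0] - v[1]), order=2, N=20_000)
```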

Theorem 2. Let X_1, …, X_n satisfy the conditions of Section 2 and let lim_{n→∞} n/N = 0.

(i′) If (10) holds, then E{c̃_{k,s_a}} = c + O(n⁻¹), a = 1, …, p.

(ii′) If (10) holds, then

(2k)⁻¹ √n (β̃_k − c1) →_D N(0, Σ)   (15)

where β̃_k = (c̃_{k,s_1}, …, c̃_{k,s_p})′.

(iii′) If (10) holds and N⁻¹ = O(n⁻²), then

E{V′_{j,a} − U_{k,s_a,1}(X_j) − c}² = O(n⁻¹)   (16)

uniformly in j = 1, …, n for all a = 1, …, p.

Remark 3. The usual Taylor series arguments provide analogous limiting distributions for the associated class of estimators of α given by α̃


Remark 4. We can see from Theorem 2 that the number of terms required in the incomplete estimator c̃_{k,s} can be drastically reduced compared to those required in the complete estimator, while the incomplete estimator still maintains the same asymptotic variance as the complete estimator. However, it is interesting to note that a much stronger rate is required on the number of terms in the incomplete estimator to approximate U_{k,s_a,1}(X_j) + c in mean square at the same rate as in the complete version. This indicates that in the incomplete versions, estimating the variance of the estimators of c is more problematic than estimating c itself.

The numerous possible estimators of α arise from the fact that the characteristic exponent remains unchanged after adding (and differencing) several stable random variables. This fact has been used by several researchers to heuristically test the goodness of fit of stable distributions by computing several estimates of α after summing some of the data and then comparing these estimates. However, no theory has been developed for the distributions and standard deviations of the differences between these estimates. The family of estimators of c that we have constructed above also exploits the invariance of c after summing (and differencing) the data. However, we show in the next section that an asymptotic distribution can be obtained for the differences of these various estimators, along with estimates of the corresponding standard errors.

5. Goodness-of-fit tests

Suppose we wish to test the null hypothesis that a given data set X_1, …, X_n has been generated by an m-dependent stable process with a marginal distribution function that is characterised by the characteristic function in (1), for some values of α, δ, c and β. A simple χ² test statistic for this goodness-of-fit test of stable distributions is obtained directly from Theorem 2 as follows.

Corollary 3. Let the assumptions of Theorem 2 hold. Let θ̃ = (θ̃_1, …, θ̃_{p−1})′ where θ̃_i = c̃_{k,s_i} − c̃_{k,s_{i+1}}, and let Σ̃ be a consistent estimator of Σ. If p > 2, assume that rank(DΣD′) = p − 1, where D is the matrix in the transformation θ̃ = Dc̃. Then

(2k)⁻² n θ̃′(DΣ̃D′)⁻¹ θ̃ →_D χ²_{p−1}.   (17)

Furthermore, if p = 2 and m = 0, then rank(DΣD′) = 1.
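Given a vector of p incomplete estimates and a consistent covariance estimate, the statistic in (17) is a few lines of linear algebra. The sketch below uses hypothetical placeholder inputs; only the differencing matrix D and the quadratic form come from the corollary.

```python
import numpy as np
from scipy import stats

def stable_gof_test(c_tilde, sigma_tilde, n, k):
    """Chi-square statistic of Corollary 3: (2k)^{-2} n theta'(D Sigma D')^{-1} theta,
    with theta_i = c_tilde[i] - c_tilde[i+1]. Inputs are assumed given."""
    p = len(c_tilde)
    # D maps c_tilde to successive differences: theta = D c_tilde
    D = np.eye(p - 1, p) - np.eye(p - 1, p, k=1)
    theta = D @ np.asarray(c_tilde, dtype=float)
    V = D @ sigma_tilde @ D.T
    stat = n * theta @ np.linalg.solve(V, theta) / (2 * k) ** 2
    pval = stats.chi2.sf(stat, df=p - 1)       # reference distribution chi^2_{p-1}
    return stat, pval
```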

Remark 7. We conjecture that DΣD′ is non-singular for p > 2 in the case where m = 0 (i.e. the series {X_t} is independent).

We have, so far, left the question of estimating the variances of our estimators unresolved. It is, however, essential to estimate these variances to test hypotheses about and construct confidence intervals for α and also to test the goodness of fit. When the observations are independent (i.e. m = 0), numerous estimators of the variance of U statistics are available (Lee, 1990, Chapter 5). The easiest variance estimator to compute is the jackknife estimator, which is asymptotically equivalent to Sen's estimator (Sen, 1960), based on the quantities V_{j,a}. Other estimators include bootstrapped versions, which we believe are computationally cumbersome even for moderate values of n, and unbiased versions, which are also computationally tedious and can be negative. When the observed sequence is m-dependent, a consistent estimator of the variance of the U-statistic can be constructed using the quantities V_{j,a} (Sen, 1963). However, this estimator presupposes knowledge of the order of dependence through m. To sidestep this assumption, we propose using a particular version of the prewhitened kernel estimator of Andrews and Monahan (1992), which we describe below.

Let Z_j = (V′_{j,1} − c̃_{k,s_1}, …, V′_{j,p} − c̃_{k,s_p})′, where the V′_{j,a} are as in (14). Define R_j = Z_j − B̂Z_{j−1}, j = 2, …, n, as the residual vector in the regression of Z_j on Z_{j−1}. Define

Ω̂ = Σ_{j=−n+1}^{n−1} k(j/S_n) Γ̂(j)

where

Γ̂(j) = n⁻¹ Σ_{t=j+1}^{n} R_t R′_{t−j}   for j ≥ 0,
Γ̂(j) = n⁻¹ Σ_{t=−j+1}^{n} R_{t+j} R′_t   for j < 0,

where S_n is the data-dependent bandwidth and k(·) is the real-valued quadratic spectral kernel given by

k(x) = [25/(12π²x²)] { sin(6πx/5)/(6πx/5) − cos(6πx/5) }.   (18)

For the quadratic spectral kernel, S_n = 1.3221(d̂n)^{1/5}, where d̂ is obtained by regressing R_t on R_{t−1} with associated coefficient matrix Â and innovation covariance matrix b̂ and then calculating

d̂ = 2 vec(Ĝ)′ W vec(Ĝ) / tr[W(I + K_p)(F̂ ⊗ F̂)]

where

F̂ = (2π)⁻¹ (I − Â)⁻¹ b̂ (I − Â′)⁻¹

and

Ĝ = (2π)⁻¹ (I − Â)⁻³ [Âb̂ + Â²b̂Â′ + Â²b̂ − 6Âb̂Â′ + b̂(Â′)² + Âb̂(Â′)² + b̂Â′] (I − Â′)⁻³.

W is a p² × p² diagonal weight matrix with 2's for diagonal elements that correspond to diagonal elements of Ω and 1's for diagonal elements that correspond to nondiagonal elements of Ω, vec is the vectorisation operator, ⊗ is the Kronecker product and K_p is a p² × p² commutation matrix that transforms vec(A) into vec(A′). The estimate of the covariance matrix is obtained by recolouring

Σ̂ = D̂ Ω̂ D̂′   (19)

where D̂ = (I − B̂)⁻¹. The estimated covariance matrix given by (19) using the kernel (18) is guaranteed to be positive definite. See Andrews and Monahan (1992).
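The steps above can be sketched as follows. This is a simplified reading of the procedure, not the paper's code: the bandwidth S_n is supplied by hand rather than computed from d̂, and the least-squares prewhitening via `lstsq` is our assumption about how B̂ is obtained.

```python
import numpy as np

def qs_kernel(x):
    """Quadratic spectral kernel (18); k(0) = 1 by continuity."""
    if x == 0.0:
        return 1.0
    a = 6.0 * np.pi * x / 5.0
    return 25.0 / (12.0 * np.pi**2 * x**2) * (np.sin(a) / a - np.cos(a))

def prewhitened_hac(Z, S_n):
    """Sketch of the prewhitened kernel estimator: VAR(1) prewhitening,
    QS-weighted autocovariances (Omega-hat), then recolouring as in (19)."""
    n, p = Z.shape
    # prewhiten: B-hat from least-squares regression of Z_j on Z_{j-1}
    B = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)[0].T
    R = Z[1:] - Z[:-1] @ B.T                      # residuals R_j
    m = len(R)
    omega = np.zeros((p, p))
    for j in range(-(m - 1), m):                  # weighted autocovariances Gamma-hat(j)
        if j >= 0:
            G = R[j:].T @ R[: m - j] / n
        else:
            G = (R[-j:].T @ R[: m + j] / n).T
        omega += qs_kernel(j / S_n) * G
    D = np.linalg.inv(np.eye(p) - B)              # recolouring matrix D-hat
    return D @ omega @ D.T                        # estimate (19)
```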

Our next theorem gives the conditions under which this estimator is consistent.

Theorem 4. Let the sequence {X_t} satisfy the assumptions of Section 2 and let Σ̂ be defined as in (19). Then

plim_{n→∞} Σ̂ = Σ.

Though we now have a family of estimators of c, whose asymptotic distribution is known and whose asymptotic variance can be estimated, we have no way of theoretically choosing which particular estimator to use in practice. The asymptotic variance of c̃_{k,s} cannot be examined theoretically to decide which

6. Simulation study

We generated stable random variables for different values of α and β using the method of Chambers et al. (1976). Their procedure generates stable random variables with a different skewness parameter β* given by

β tan(πα/2) = tan[(π/2) β* min(α, 2 − α)],   α ≠ 1.
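In the symmetric case (β = 0) the Chambers-Mallows-Stuck construction reduces to a two-line generator. The sketch below is our own illustration; the paper's study also covers skewed cases, which need the full formula with the β* parameter.

```python
import numpy as np

rng = np.random.default_rng(2)

def sym_stable(alpha, size, rng=rng):
    """Symmetric (beta = 0) Chambers-Mallows-Stuck generator:
    U ~ Uniform(-pi/2, pi/2), W ~ Exp(1),
    X = sin(alpha*U)/cos(U)^{1/alpha} * (cos((1-alpha)*U)/W)^{(1-alpha)/alpha}."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

# alpha = 2 gives N(0, 2), matching scale c = 1 in (1); alpha = 1 gives Cauchy
x = sym_stable(2.0, 200_000)
```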

The transformation from β* to β was made accordingly to be consistent with the form of the characteristic function given in (1). Since our estimators and tests are scale and location invariant, the values of δ and c were set to 0 and 1, respectively. Three estimators of α were studied: (1) α̂_1, based on the kernel function U_{k,s} with k = 3 and s = 2; (2) α̂_2, based on U_{k,s} with k = 3 and s = 1; (3) α̂_3, based on U_{k,s} with k = 2 and s = 1. A fourth estimator, obtained by an estimated generalised least squares procedure using α̂_1, α̂_2 and α̂_3 and their estimated variance covariance matrix, was also studied. However, since the first three estimators always outperformed it in finite samples, we have not included results on this fourth estimator. The study was carried out for samples of size 100 and 1000 with 1000 replications in each case. In all cases, only incomplete estimators of the form given in (13) were computed, the summands in the estimator being sampled with replacement. For samples of size 100, N = 500,000 was used and for samples of size 1000, N = 4,000,000 was used. These values of N were arrived at after a pilot simulation study, in which it was observed that increasing N beyond these two particular values did not change the end results substantially. The initial simulation study also showed that the asymptotic distribution of the estimators α̂_i is attained for much smaller values of N than those given above. It was only to obtain better behaved t statistics (which depend on the estimated variance of the estimator) that such large values of N were required. These observations are in keeping with Remark 4 of Section 4. The variance covariance matrix of the vector α̂ = (α̂_1, α̂_2, α̂_3)′ was estimated by the prewhitened kernel estimator (denoted by Σ̂_PW) described in (19) above, which does not assume any knowledge of the degree of dependence between the observations. We also carried out computations for t statistics using an alternative estimator of the variance covariance matrix, using the method suggested by Sen (1963). This alternative estimator requires knowledge of the value of m and hence of the degree of dependence. Since the results we obtained with this alternative estimator were very similar to those obtained using Σ̂_PW, we do not report them here for the sake of brevity.

Table 1
Asymptotic efficiencies

(α, β)        α̂_1   α̂_2   α̂_3   α̂_K
(1.95, 0)      86    94    88    75
(1.90, 0)      80    93    93    63*
(1.70, 0)      69    88    97    78*
(1.50, 0)      64    85    97    89
(1.30, 0)      61    82    96    65*
(1.00, 0)      55    75    87    68
(0.80, 0)      51    70    82    57
(1.95, 0.5)    81    88    82    71
(1.90, 0.5)    75    87    86    70*
(1.70, 0.5)    61    78    86    80*

of the estimators computed in 1000 replications from i.i.d. observations of sample size 1000 each. For the sake of comparison, we have also included, wherever possible, relative efficiencies for the Koutrouvelis estimator of α (denoted in the table by α̂_K) obtained by using the Monte Carlo RMSE given in Table 1 of Akgiray and Lamoureux (1989). For some parameter configurations, the efficiencies for α̂_K were not available in Akgiray and Lamoureux (1989). In these cases, they were obtained from Table 4 of Koutrouvelis (1980) and are marked with an asterisk. It is seen that α̂_3 has uniformly higher efficiency than α̂_K over all the parameter values considered. Furthermore, the efficiency of α̂_3 is uniformly reasonably high, never falling below 82% and going as high as 97%. However, it is not uniformly higher than the efficiency of α̂_2. At values of α higher than 1.9, α̂_2 has the highest efficiency.

Table 2 compares the performance of α̂_2 and α̂_3 when the observations are independent. For reasons of brevity we have excluded results for α̂_1, since it was outperformed by both α̂_2 and α̂_3 over all the parameter values considered except at α = 2. We have also included relevant figures for the Koutrouvelis estimator of α (denoted in the table by α̂_K) obtained from Table 1 of Akgiray and Lamoureux (1989). The mean and root mean square error (RMSE) of α̂_K reported in Akgiray and Lamoureux (1989) are for values of α̂_K truncated at 2. For the purpose of comparison we have also reported, wherever applicable, the mean and RMSE of the truncated versions of our two estimators (truncated at 2). The mean and RMSE of the truncated versions of our estimators are reported in parentheses under the mean and RMSE of the unrestricted estimators. It is worth noting again that there is no asymptotic theory available for the α̂_K shown in the tables, since it is obtained from randomly standardised data used in a regression at points which depend on the sample size and α.

From Table 2, we see that for n = 100, our estimators almost always have a much lower RMSE than that of α̂_K

Table 2
i.i.d. observations

                      n = 100              n = 1000
(α, β)       Method   Mean      RMSE      Mean      RMSE
(2.00, 0)    α̂_2      2.005     0.0676    2.001     0.0171
             α̂_3      2.008     0.0892    2.001     0.0216
(1.95, 0)    α̂_2      1.955     0.1050    1.952     0.0298
                      (1.937)   (0.0884)  (1.951)   (0.0289)
             α̂_3      1.957     0.1158    1.952     0.0313
                      (1.932)   (0.0914)  (1.951)   (0.0298)
             α̂_K      1.936     0.0944    1.947     0.0323
(1.80, 0)    α̂_2      1.811     0.1570    1.803     0.0459
                      (1.807)   (0.1502)
             α̂_3      1.812     0.1550    1.802     0.0466
                      (1.806)   (0.1464)
             α̂_K      1.802     0.1503    1.799     0.0494
(1.50, 0)    α̂_2      1.516     0.1810    1.501     0.0528
             α̂_3      1.515     0.1714    1.501     0.0494
             α̂_K      1.503     0.1927    1.500     0.0517
(1.25, 0)    α̂_2      1.268     0.1706    1.251     0.0483
             α̂_3      1.267     0.1596    1.251     0.0448
             α̂_K      1.217     0.1883    1.250     0.0455
(1.00, 0)    α̂_2      1.018     0.1458    1.001     0.0403
             α̂_3      1.017     0.1353    1.001     0.0373
             α̂_K      0.958     0.1310    1.000     0.0422
(0.80, 0)    α̂_2      0.814     0.1171    0.800     0.0324
             α̂_3      0.812     0.1073    0.800     0.0299
             α̂_K      0.780     0.1025    0.799     0.0359
(1.95, 0.5)  α̂_2      1.956     0.1055    1.952     0.0299
                      (1.937)   (0.0884)  (1.951)   (0.0291)
             α̂_3      1.958     0.1164    1.952     0.0315
                      (1.932)   (0.0912)  (1.951)   (0.0301)
             α̂_K      1.936     0.0936    1.946     0.0324
(1.80, −0.5) α̂_2      1.811     0.1564    1.802     0.0446
                      (1.808)   (0.1509)
             α̂_3      1.812     0.1559    1.801     0.0431
                      (1.806)   (0.1473)
             α̂_K      1.803     0.1504    1.800     0.0484
(1.65, 0.9)  α̂_2      1.667     0.1763    1.653     0.0538
                      (1.666)   (0.1751)
             α̂_3      1.667     0.1704    1.653     0.0509
                      (1.666)   (0.1687)
             α̂_K      1.670     0.1656    1.650     0.0539

values of α (α ≤ 1), though its bias increases. The estimator α̂_3 has either the

Table 3
MA(1) dependent series, θ = 0.5

                      n = 100              n = 1000
(α, β)       Method   Mean      RMSE      Mean      RMSE
(2.00, 0)    α̂_2      2.002     0.0680    2.000     0.0182
             α̂_3      2.002     0.0894    2.000     0.0232
(1.95, 0)    α̂_2      1.964     0.1030    1.949     0.0362
             α̂_3      1.964     0.1162    1.949     0.0365
(1.80, 0)    α̂_2      1.829     0.1678    1.801     0.0594
             α̂_3      1.826     0.1679    1.801     0.0564
(1.50, 0)    α̂_2      1.548     0.2206    1.503     0.0713
             α̂_3      1.542     0.2109    1.503     0.0659
(1.95, 0.5)  α̂_2      1.964     0.1036    1.949     0.0359
             α̂_3      1.965     0.1168    1.949     0.0363

α̂_1 has the lowest RMSE (not shown in the tables), followed by α̂_2 and α̂_3, and the RMSE of α̂_3 is almost twice that of α̂_1. α̂_3 is also outperformed when α = 1.95, though not as badly as when α = 2. Similar results are noticed when n = 1000, though the bias is much smaller due to large n.

Table 3 compares α̂_2 and α̂_3 when the observations are dependent and come from a moving average of order one with parameter (θ) set equal to 0.5. There are no entries for the Koutrouvelis estimator (α̂_K) in Table 3 since its properties have not been studied when the observations are dependent. From Table 3, we see that for n = 100, there is no uniformly superior estimator. For a given value of α, the bias in the estimators is higher than in the corresponding case when the observations are independent. Furthermore, the lower the value of α, the higher the bias in the estimators. For n = 1000, the bias in the estimators due to dependence is much smaller, though the overall pattern of results is the same as when n = 100.

In Table 4 we compare the non-coverage probabilities of the t statistics computed from α̂_2 and α̂_3 when the observations are independent. For n = 100, the non-coverage probabilities of the t statistics are much higher than the nominal 5% and 10% levels for all values of α. Thus, the confidence intervals based on these t statistics will produce anti-conservative intervals for small samples. The confidence intervals are most anti-conservative at α = 1.8 but become less so as α moves away from 1.8. Furthermore, the intervals computed using α̂_3 are the least anti-conservative. When n = 1000, the non-coverage probabilities are much closer to the nominal 5% and 10% levels and are mostly below them. Thus, the confidence intervals based on the t statistics will be conservative. The intervals tend to be more conservative at lower values of

Table 4
Non-coverage probabilities for t statistics; i.i.d. observations

                    n = 100                    n = 1000
(α, β)        α̂_2          α̂_3          α̂_2          α̂_3
              5%     10%    5%     10%    5%     10%    5%     10%
(2.00, 0)     0.078  0.127  0.051  0.105  0.035  0.074  0.020  0.057
(1.95, 0)     0.132  0.188  0.088  0.138  0.072  0.115  0.050  0.090
(1.80, 0)     0.183  0.244  0.117  0.171  0.073  0.105  0.054  0.096
(1.65, 0)     0.174  0.239  0.110  0.173  0.053  0.104  0.044  0.098
(1.50, 0)     0.151  0.216  0.096  0.171  0.047  0.092  0.041  0.094
(1.25, 0)     0.135  0.197  0.096  0.160  0.041  0.082  0.038  0.079
(1.00, 0)     0.121  0.182  0.095  0.160  0.040  0.087  0.036  0.090
(0.80, 0)     0.118  0.165  0.089  0.150  0.040  0.091  0.037  0.084
(1.95, 0.5)   0.137  0.185  0.089  0.140  0.072  0.112  0.051  0.091
(1.80, −0.5)  0.187  0.260  0.121  0.184  0.061  0.104  0.047  0.098
(1.65, 0.9)   0.164  0.217  0.115  0.167  0.060  0.099  0.057  0.091

Table 5
Non-coverage probabilities for t statistics; MA(1) series θ = 0.5

                    n = 100                    n = 1000
(α, β)        α̂_2          α̂_3          α̂_2          α̂_3
              5%     10%    5%     10%    5%     10%    5%     10%
(2.00, 0)     0.074  0.121  0.048  0.093  0.052  0.091  0.045  0.073
(1.95, 0)     0.140  0.197  0.090  0.146  0.098  0.142  0.062  0.111
(1.80, 0)     0.236  0.292  0.156  0.212  0.074  0.124  0.067  0.106
(1.65, 0)     0.242  0.295  0.182  0.239  0.063  0.109  0.058  0.098
(1.50, 0)     0.229  0.283  0.179  0.240  0.049  0.095  0.044  0.081
(1.95, 0.5)   0.141  0.203  0.089  0.153  0.100  0.128  0.061  0.102
(1.80, −0.5)  0.242  0.300  0.163  0.224  0.077  0.118  0.055  0.096

When the observations are generated by a moving average of order one with parameter set to 0.5, the non-coverage probabilities of the t statistics computed from α̂_2 and α̂_3 are compared in Table 5. For n = 100, the non-coverage

Table 6
Size calculations for goodness-of-fit tests; i.i.d. observations

(α, β)        n     Z_1           Z_2           Z_3           χ²_2
                    5%     10%    5%     10%    5%     10%    5%     10%
(2.00, 0)     100   0.033  0.069  0.028  0.059  0.017  0.046  0.014  0.026
              1000  0.006  0.027  0.004  0.017  0.000  0.000  0.000  0.004
(1.95, 0)     100   0.042  0.069  0.034  0.063  0.021  0.045  0.023  0.036
              1000  0.023  0.053  0.018  0.044  0.006  0.013  0.005  0.017
(1.80, 0)     100   0.043  0.089  0.038  0.084  0.032  0.062  0.028  0.039
              1000  0.035  0.072  0.031  0.066  0.011  0.030  0.013  0.031
(1.65, 0)     100   0.073  0.130  0.064  0.123  0.057  0.106  0.039  0.063
              1000  0.043  0.091  0.036  0.076  0.015  0.040  0.018  0.032
(1.50, 0)     100   0.098  0.175  0.095  0.162  0.086  0.147  0.062  0.092
              1000  0.039  0.095  0.039  0.088  0.020  0.050  0.014  0.031
(1.95, 0.5)   100   0.044  0.081  0.040  0.073  0.028  0.053  0.023  0.040
              1000  0.027  0.058  0.014  0.047  0.004  0.014  0.006  0.016
(1.80, −0.5)  100   0.046  0.102  0.041  0.097  0.039  0.085  0.024  0.049
              1000  0.038  0.090  0.032  0.072  0.013  0.036  0.014  0.027

tend to be anti-conservative. Though the intervals based on α̂_3 are anti-conservative, their coverage probabilities never fall more than 2% below the nominal probabilities.

We also studied four goodness-of-fit tests based on our three estimators of α for the composite null that the observations are from an m-dependent stable sequence. The four tests were (i) the test based on α̂_1 − α̂_2, denoted by Z_1, (ii) the test based on α̂_1 − α̂_3, denoted by Z_2, (iii) the test based on α̂_2 − α̂_3, denoted by Z_3, and (iv) the χ² test with 2 degrees of freedom based on Z_1 and Z_2, denoted by χ²_2. In Table 6 we compare the sizes of these four tests when the observations are independent. For n = 100, the Z tests are undersized for large values of α. As α goes below 1.8, however, these tests start becoming oversized, exceeding the nominal size by as much as 8% when α = 1.00. The test based on Z_3 is the most undersized when α is large and also the least oversized for small values of α. The χ²_2 test is generally undersized for most values of α and exceeds the nominal size by only about 3.5% when α = 1.00. For n = 1000, all the tests are undersized, with the Z_3 test and the χ²_2 test being the most so. In Table 7, the

Table 7
Size calculations for goodness-of-fit tests; MA(1) series θ = 0.5

(α, β)        n     Z_1           Z_2           Z_3           χ²_2
                    5%     10%    5%     10%    5%     10%    5%     10%
(2.00, 0)     100   0.027  0.062  0.022  0.057  0.017  0.040  0.017  0.028
              1000  0.019  0.038  0.010  0.029  0.000  0.005  0.003  0.013
(1.95, 0)     100   0.036  0.069  0.036  0.064  0.025  0.050  0.021  0.032
              1000  0.021  0.052  0.014  0.041  0.003  0.009  0.004  0.017
(1.80, 0)     100   0.059  0.114  0.055  0.110  0.045  0.091  0.034  0.053
              1000  0.054  0.099  0.043  0.091  0.013  0.032  0.020  0.041
(1.65, 0)     100   0.087  0.141  0.090  0.140  0.078  0.129  0.047  0.083
              1000  0.077  0.121  0.062  0.115  0.027  0.056  0.030  0.060
(1.50, 0)     100   0.098  0.171  0.090  0.166  0.088  0.152  0.056  0.095
              1000  0.071  0.122  0.092  0.121  0.036  0.076  0.035  0.058
(1.95, 0.5)   100   0.042  0.070  0.038  0.070  0.026  0.049  0.025  0.038
              1000  0.020  0.049  0.104  0.037  0.002  0.009  0.003  0.014
(1.80, −0.5)  100   0.065  0.104  0.060  0.099  0.057  0.092  0.041  0.058
              1000  0.052  0.097  0.043  0.082  0.011  0.036  0.019  0.037

Table 8
Power calculations; i.i.d. observations from t distributions

d.f.   n      Z_1            Z_2            Z_3            χ²_2
              5%     10%     5%     10%     5%     10%     5%     10%
3      100    0.118  0.210   0.123  0.211   0.127  0.205   0.080  0.114
       1000   0.844  0.903   0.823  0.893   0.660  0.787   0.710  0.803
4      100    0.093  0.163   0.089  0.161   0.084  0.157   0.050  0.076
       1000   0.877  0.940   0.842  0.919   0.574  0.739   0.746  0.834
5      100    0.068  0.125   0.070  0.126   0.066  0.117   0.041  0.057
       1000   0.806  0.893   0.743  0.871   0.431  0.587   0.632  0.755
6      100    0.054  0.100   0.050  0.097   0.050  0.094   0.025  0.051
       1000   0.754  0.848   0.679  0.800   0.293  0.468   0.528  0.688

Table 8 reports the power of these four tests when the observations are drawn independently from t distributions. The χ²_2 test is biased for samples of size 100, which is not surprising since it is so undersized. The Z_1 test generally has the highest power and the Z_3 test the lowest. Furthermore, the power of all tests decreases as the degrees of freedom increase, which may be attributable to the fact that the t distribution approaches the normal distribution, a stable law with α = 2, as the degrees of freedom increase.
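The size and power figures above rest on simulated draws from stable laws; Chambers, Mallows and Stuck (1976), cited in the references, give the standard generator. A minimal sketch (assuming α ≠ 1 and the standard parameterization; this is an illustration, not the code used in the paper) is:

```python
import numpy as np

def rstable(alpha, beta, size, rng):
    """Chambers-Mallows-Stuck draw from a standard stable law
    (scale 1, location 0); valid for alpha != 1, skewness beta in [-1, 1]."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
    W = rng.exponential(1.0, size)                # unit exponential
    t = beta * np.tan(np.pi * alpha / 2)
    B = np.arctan(t) / alpha
    S = (1 + t ** 2) ** (1 / (2 * alpha))
    return (S * np.sin(alpha * (U + B)) / np.cos(U) ** (1 / alpha)
            * (np.cos(U - alpha * (U + B)) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
x = rstable(1.5, 0.0, 100_000, rng)  # symmetric stable draws, alpha = 1.5
```

A quick sanity check: for α = 2 and β = 0 the formula reduces to 2 sin(U)·W^{1/2}, a normal variable with variance 2.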


7. Summary

We propose a class of estimators of the characteristic index of m-dependent stable sequences. These estimators are location and scale invariant and robust to the skewness parameter β. One of these estimators, α̂_3, is shown to have a high relative asymptotic efficiency that is uniformly superior to that of the regression estimator, which is currently most widely used. In addition, we have also obtained goodness-of-fit tests for the composite null hypothesis that the observations are stably distributed. Work is under way to extend the methods given in the paper to estimating all the parameters of stable distributions.

Acknowledgements

I am indebted to Professor Wayne A. Fuller and Professor Clifford M. Hurvich for helpful discussions. Any errors are my responsibility solely.

Appendix

Proof of Theorem 1. Parts (i) and (ii) are a direct application of Theorems 2.2 and 2.3 of Sen (1963). The proof of part (iii) is as follows. Letting l = 2k, define

V_{j,α}^m = B_1^{-1}(j, m|n, l) Σ_{C_j^0} U_{k,s_α}(X_{i_1}, …, X_{i_{l−1}}, X_j),  j = 1, …, n,

and

V_{j,α}^0 = B_1^{-1}(j, m|n, l) Σ_{C_j} U_{k,s_α}(X_{i_1}, …, X_{i_{l−1}}, X_j),

where C_j is the collection of all l-tuples from {1, …, n} with the integer j in them,

C_j^0 = {(j, i_1, …, i_{l−1}) ∈ {1, …, n} : min_{a≠b} |i_a − i_b| > m, min_a |i_a − j| > m},

and B_1(j, m|n, l) is the cardinality of C_j^0. By (10), it follows that for some M < ∞,

E{V_{j,α}^m − V_{j,α}^0}² ≤ M B_1^{-2}(j, m|n, l) {C(n−1, l−1) − B_1(j, m|n, l)}².

By successively using Lemma A.1 below and Lemma 4.7 of Sen (1963), it follows that

E{V_{j,α}^m − V_{j,α}^0}² ≤ M B_1^{-2}(j, m|n, l) {C(n−1, l−1) − C(n−1−ml, l−1)}²
                          ≤ M B_1^{-2}(j, m|n, l) {Σ_{s=1}^{ml} C(n−1−s, l−2)}²
                          = O(n^{-2}).   (A.1)

A similar argument gives

E{V_{j,α}^0 − V_{j,α}}² = O(n^{-2}).   (A.2)

Part (iii) now follows from (A.1), (A.2) and the proof of Theorem 2.3 of Sen (1963). Part (iv) follows by examining the conditional expectation E{U_{k,s}(X_{i_1}, …, X_{i_{2k}}) | X_{i_1} = x}. By Lemma A.2 below, it follows that

lim_{|x|→∞} E{U_{k,s}(X_{i_1}, …, X_{i_{2k}}) | X_{i_1} = x} = ∞,

thus showing that U_{k,s}(X_{i_1}, …, X_{i_{2k}}) is a non-degenerate random variable and hence has finite positive variance by (10). □

Proof of Theorem 2. Denoting the 2k-tuples of indices by I and the corresponding 2k-tuples (X_{i_1}, …, X_{i_{2k}}) by X_I, we can write the incomplete estimator c̃_{k,s_α} as

c̃_{k,s_α} = N^{-1} Σ Z_I U_{k,s_α}(X_I),

where the sum extends over all possible C(n, 2k) 2k-tuples and the variables (Z_I) have a multinomial distribution with parameters N and 1/C(n, 2k), …, 1/C(n, 2k). Since E{Z_I} = N/C(n, 2k) and the variables (Z_I) are independent of the sequence {X_t}, (i) follows from Theorem 2.1 of Sen (1963). Part (ii) follows from Lemma 1 of Janson (1984) and Theorem 2.2 of Sen (1963). Finally, an argument paralleling that in Lemma 1 of Janson (1984) gives

E{V′_{j,α} − V_{j,α}}² = O(N^{-1} n²) = O(n^{-1}).   (A.3)

In conjunction with (iii) of Theorem 1, this yields (iii′).
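The incomplete estimator replaces the average over all C(n, 2k) tuples by an average over N randomly drawn tuples. A toy sketch with a simple pairwise kernel h(x, y) = |x − y| (a stand-in for illustration only, not the paper's kernel U_{k,s_α}) shows the construction:

```python
import numpy as np

def incomplete_u(x, N, rng):
    """Incomplete U-statistic for the kernel h(x, y) = |x - y|: average the
    kernel over N randomly drawn pairs instead of all n*(n-1)/2 of them."""
    n = len(x)
    i = rng.integers(0, n, N)
    j = rng.integers(0, n, N)
    keep = i != j                      # the kernel needs distinct indices
    return np.mean(np.abs(x[i[keep]] - x[j[keep]]))

rng = np.random.default_rng(0)
x = rng.normal(size=2_000)
u_inc = incomplete_u(x, 50_000, rng)

# complete U-statistic for comparison (O(n^2) kernel evaluations)
d = np.abs(x[:, None] - x[None, :])
u_full = np.mean(d[np.triu_indices(len(x), k=1)])
```

Here both estimates are close to E|X − Y| = 2/√π for standard normal data, while the incomplete version needs only N kernel evaluations rather than all pairs.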

Proof of Corollary 3. The limiting chi-square distribution follows directly from Theorem 2. The non-singularity of DRD′ in the case p = 2 and m = 0 can be shown by an argument similar to the one used for part (iv) of Theorem 1.

Proof of Theorem 4. To prove the consistency of R̂, we merely have to verify that Assumptions A–D of Andrews and Monahan (1992) are met. Assumption A is satisfied since U_{k,s_α,1}(X_t) is a stationary zero-mean m-dependent sequence. By Theorem 2 above and Lemma 1 of Janson (1984), it follows that E{V′_{j,α} − c̃_{k,s_α} − U_{k,s_α,1}(X_j)}² = O(n^{-1}). Thus, Assumption B(ii) of Andrews and Monahan (1992) is met. Assumption C is satisfied by the choice of the data-dependent bandwidth (Andrews and Monahan, 1992). Finally, Assumption D is satisfied since U_{k,s_α,1}(X_t) is a stationary zero-mean m-dependent sequence and E{V′_{j,α} − c̃_{k,s_α} − U_{k,s_α,1}(X_j)}² = O(n^{-1}).
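Assumptions A–D above concern a heteroscedasticity and autocorrelation consistent (HAC) long-run variance estimator. As a rough illustration only, here is a plain Bartlett-kernel estimator with a fixed bandwidth (not the prewhitened, data-dependent estimator of Andrews and Monahan (1992) invoked in the proof) applied to an m-dependent series:

```python
import numpy as np

def bartlett_hac(u, bandwidth):
    """Bartlett-kernel long-run variance of a series u:
    gamma_0 + 2 * sum_{s=1}^{bw} (1 - s / (bw + 1)) * gamma_s."""
    u = np.asarray(u) - np.mean(u)
    lrv = np.mean(u * u)                          # gamma_0
    for s in range(1, bandwidth + 1):
        w = 1.0 - s / (bandwidth + 1.0)           # Bartlett weight
        lrv += 2.0 * w * np.mean(u[s:] * u[:-s])  # lag-s autocovariance
    return lrv

rng = np.random.default_rng(0)
e = rng.normal(size=100_000)
x = e[1:] + 0.5 * e[:-1]   # MA(1): a 1-dependent series, long-run variance 2.25
lrv = bartlett_hac(x, bandwidth=20)
```

For an m-dependent series all autocovariances beyond lag m vanish, so even this simple kernel estimator is consistent once the bandwidth exceeds m.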

Lemma A.1. Let B_1(j, m|n, l) be the cardinality of the set C_j^0 defined in the proof of Theorem 1. Then B_1(j, m|n, l) ≥ C(n−1−ml, l−1).

Proof. By Theorem 4.2 of Sen (1963), the cardinality of a suitably spaced subcollection C_{l,m} of such index tuples is C(n−2m−1−(l−1)m+m, l−1) = C(n−1−ml, l−1). Since each element of C_{l,m} corresponds to an element in C_j^0, the cardinality of C_j^0 is greater than the cardinality of C_{l,m}. □
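The spacing counts invoked from Sen (1963) can be checked by brute force. For instance, the number of l-subsets of {1, …, n} whose successive elements differ by more than m is C(n − (l−1)m, l); a small enumeration (illustrative only, with hypothetical small n, l, m) confirms this:

```python
from itertools import combinations
from math import comb

def spaced_count(n, l, m):
    """Brute-force count of l-subsets of {1, ..., n} in which every pair of
    successive elements differs by more than m."""
    return sum(
        all(b - a > m for a, b in zip(c, c[1:]))
        for c in combinations(range(1, n + 1), l)
    )

# closed form for comparison: comb(n - (l - 1) * m, l)
n_spaced = spaced_count(12, 3, 2)
```

The identity follows from the substitution j_t = i_t − (t−1)m, which maps spaced l-subsets of {1, …, n} bijectively onto ordinary l-subsets of {1, …, n − (l−1)m}.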

Lemma A.2. Let Y be a stable random variable with characteristic function given by (1). Then E{log(1 − Y/n)}² is uniformly bounded for all n.

Proof. Let φ_n(·) and f_n(·) be the characteristic function and the density function, respectively, of 1 − Y/n. Then from (1) it follows easily that |φ_n(t)| < exp(−c|t|^α), and hence, by Theorem 3, Chapter XV, Vol. II of Feller (1971), the densities f_n are bounded uniformly in n.

Proof of Lemma A.3. Note that since E log(Z/Y) = E(log Z) − E(log Y) for any two positive random variables Y and Z, it suffices to show that E(log |X_{i_1} + ⋯ + X_{i_k} − X_{i_{k+1}} − ⋯ − X_{i_{2k}}|) < ∞. When the elements of I_k satisfy (4), the variables (X_{i_1}, …, X_{i_{2k}}) are independently and identically distributed and hence, using the characteristic function (1), it is easy to show that X_{i_1} + ⋯ + X_{i_k} − X_{i_{k+1}} − ⋯ − X_{i_{2k}} has a stable distribution with tail index α. The result now follows from Lemma A.4 below.

Lemma A.4. Let Y be a stable random variable with tail index α. Then E{log|Y|}^r < ∞ for any r > 0.

Proof. Let f(y) denote the probability density of Y. From Theorem 3, Chapter XV, Vol. II of Feller (1971), it follows that the probability density of Y is bounded everywhere. Hence, there exists a K < ∞ such that f(y) < K for all y. Furthermore, by Theorem 1, Chapter XVII, Vol. II of Feller (1971), we also have E|Y|^ε < ∞ for any 0 < ε < α. Also, since lim_{y→∞} y^{−ε}(log y)^r = 0 and lim_{y→0} y^{1/2}(log y)^r = 0 for any ε > 0, there exists a C < ∞ such that sup_{|y|<1} |y|^{1/2}[log|y|]^r < C and sup_{|y|≥1} |y|^{−ε}[log|y|]^r < C. Using all of these facts, we get

E[log|Y|]^r = ∫_{−∞}^{∞} [log|y|]^r f(y) dy
            = ∫_{|y|<1} [log|y|]^r f(y) dy + ∫_{|y|≥1} [log|y|]^r f(y) dy
            = ∫_{|y|<1} |y|^{−1/2} (|y|^{1/2} [log|y|]^r) f(y) dy + ∫_{|y|≥1} |y|^ε (|y|^{−ε} [log|y|]^r) f(y) dy
            ≤ ∫_{|y|<1} |y|^{−1/2} (|y|^{1/2} [log|y|]^r) K dy + ∫_{|y|≥1} |y|^ε (|y|^{−ε} [log|y|]^r) f(y) dy
            ≤ ∫_{|y|<1} |y|^{−1/2} C K dy + ∫_{|y|≥1} |y|^ε C f(y) dy
            ≤ C K ∫_{|y|<1} |y|^{−1/2} dy + C ∫_{−∞}^{∞} |y|^ε f(y) dy < ∞.
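A numerical aside (not part of the proof): for a standard Cauchy variable, i.e. tail index α = 1, the variable log|Y| follows a hyperbolic secant law with mean 0 and variance π²/4, so the finiteness asserted by Lemma A.4 can be seen directly by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_cauchy(1_000_000)   # stable draws with tail index alpha = 1
z = np.log(np.abs(y))                # log|Y|: hyperbolic secant distributed
second_moment = np.mean(z ** 2)      # E[log|Y|]^2, close to pi^2 / 4
```

The log transform tames both the heavy tails (|y| large) and the density near the origin (|y| small), mirroring the two-part split of the integral in the proof above.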


Proof of Lemma 1. We first prove part (i). It is easy to see from the very definition of X_t that it is an m-dependent process. The fact that X_t has a stable distribution with tail index α follows easily by using the characteristic function (1) and the fact that the e_t are independent.

We next show that (10) holds. To do this, we first show that any arbitrary linear combination of X_t also has a stable distribution with tail index α. Suppose that Y_t = Σ_{j=0}^s d_j X_{t−j} is some arbitrary linear combination of (X_t). Since the e_t are independent, using the characteristic function (1) it follows that Y_t is a stable random variable with tail index α. Writing the logarithm of a ratio as a difference of logarithms and applying the Cauchy–Schwarz inequality, it is thus enough to show that both E(log Y_1)² < ∞ and E(log Y_2)² < ∞, where Y_1 and Y_2 are linear combinations of (X_{l_1}, …, X_{l_{2k}}). But as shown above, Y_1 and Y_2, being linear combinations of (X_{l_1}, …, X_{l_{2k}}), will both be stable random variables with tail index α. Thus, by Lemma A.4 above, E(log Y_1)² < ∞ and E(log Y_2)² < ∞ and the proof is complete.

Proof of (ii). It is obvious that X_t as defined is an m-dependent non-linear process. Since the v_t are independent, p_t ≡ v_t + Σ_{j=1}^m b_j v_{t−j} has a stable distribution with tail index h. Also, p_t is independent of e_t. Thus, by Example h, Chapter VI.2, Vol. II of Feller (1971), X_t has a stable distribution with tail index 2h.

We now show that (10) holds. To do this, we first show that any arbitrary linear combination of X_t, given by Y_t = Σ_{j=0}^s d_j X_{t−j} for some integer s and arbitrary coefficients d_j, also has a stable distribution with tail index 2h. Let F_t denote the sigma algebra generated by {v_t, v_{t−1}, …} and φ_Y the characteristic function of Y_t. Using the characteristic function of normal random variables and conditioning on F_t, φ_Y may be written as

φ_Y(λ) = E exp(−0.5 λ² Σ_{j=0}^{s+m} a_j v_{t−j})

for some positive coefficients a_j. Note that by the independence of the v_t, Σ_{j=0}^{s+m} a_j v_{t−j} itself has a stable distribution with tail index h. Hence, by Theorem 1, Chapter XIII.6, Vol. II of Feller (1971), we get that

E exp(−0.5 λ² Σ_{j=0}^{s+m} a_j v_{t−j})

is the characteristic function of a stable random variable with tail index 2h. Thus, any linear combination of X_t is also stably distributed. The rest of the proof now follows exactly as in the proof of (i) above.

References

Akgiray, V., Lamoureux, C.G., 1989. Estimation of stable-law parameters: a comparative study. Journal of Business and Economic Statistics 7, 85–93.
Andrews, D.W., Monahan, J.C., 1992. An improved heteroscedasticity and autocorrelation consistent covariance matrix estimator. Econometrica 60, 953–966.
Chambers, J.M., Mallows, C.L., Stuck, B.W., 1976. A method for simulating stable random variables. Journal of the American Statistical Association 71, 340–344.
Csörgő, S., 1987. Testing for stability. In: Goodness-of-Fit. Colloquia Mathematica Societatis János Bolyai 45, 101–132.
de Vries, C., 1991. On the relation between GARCH and stable processes. Journal of Econometrics 48, 313–324.
DuMouchel, W.H., 1975. Stable distributions in statistical inference: 2. Information from stably distributed samples. Journal of the American Statistical Association 70, 386–393.
Fama, E., 1965. The behaviour of stock market prices. Journal of Business 38, 34–105.
Feller, W., 1971. An Introduction to Probability Theory and its Applications, Vol. II, 2nd Edition. Wiley, New York.
Hsu, D., Miller, R.B., Wichern, D.W., 1974. On the stable Paretian behaviour of stock-market prices. Journal of the American Statistical Association 69, 108–113.
Janson, S., 1984. The asymptotic distributions of incomplete U-statistics. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 66, 495–505.
Koutrouvelis, I., 1980. Regression type estimation of the parameters of stable laws. Journal of the American Statistical Association 75, 918–928.
Koutrouvelis, I., Bauer, D.F., 1982. Asymptotic distribution of regression-type estimators of parameters of stable laws. Communications in Statistics – Theory and Methods, 2715–2730.
Koutrouvelis, I., Kellermeier, J., 1981. A goodness-of-fit test based on the empirical characteristic function.
Lee, A.J., 1990. U-Statistics. Marcel Dekker, Inc., New York.
Mandelbrot, B., 1963. The variation of certain speculative prices. Journal of Business 36, 394–419.
McCulloch, J.H., 1986. Simple consistent estimators of stable distribution parameters. Communications in Statistics – Simulation and Computation 15, 1109–1136.
Sen, P.K., 1960. On some convergence properties of U-statistics. Calcutta Statistical Association Bulletin 10, 1–18.

