
*Tel.: +44-1392-263241; fax: +44-1392-263242.

E-mail address: [email protected] (G.D.A. Phillips).

An alternative approach to obtaining Nagar-type moment approximations in simultaneous equation models

Garry D.A. Phillips*

School of Business and Economics, University of Exeter, Streatham Court, Exeter EX4 4PU, UK

Received 1 December 1997; received in revised form 1 September 1999; accepted 1 September 1999

Abstract

This paper examines asymptotic expansions for estimation errors expressed explicitly as functions of underlying random variables. Taylor series expansions are obtained from which first and second moment approximations are derived. While the expansions are essentially equivalent to the traditional Nagar type, the terms are expressed in a form which enables moment approximations to be obtained in a particularly straightforward way once the partial derivatives have been found. The approach is illustrated by considering the k-class estimators in a static simultaneous equation model where the disturbances are non-spherical. © 2000 Elsevier Science S.A. All rights reserved.

JEL classification: C13; C22

Keywords: Simultaneous equation models; Non-spherical disturbances; Nagar expansions; Bias and second moment approximations

1. Introduction

In an important paper Nagar (1959) analysed the small sample properties of the general k-class of simultaneous equation estimators and, in particular, found expressions for the bias to the order of $T^{-1}$, where $T$ is the number of observations, and for the second moment matrix to the order of $T^{-2}$. In obtaining these results Nagar used an asymptotic expansion for the estimation error of the form

$$(\hat\alpha - \alpha) = \sum_{s=1}^{p-1} T^{-s/2}\, e_s + T^{-p/2}\, r_p, \qquad (1)$$

where $\hat\alpha$ is an estimator for $\alpha$ and the $e_s$, $s = 1, \dots, p-1$, and $r_p$, the remainder term, are all of stochastic order unity as $T \to \infty$. The sum of the retained terms is then assumed to mimic the behaviour of the estimator, and the moments of this sum are used to approximate the moments of the estimator. The approach followed by Nagar was not entirely new; see, for example, Kendall (1954), but it was through Nagar's work that econometricians became aware of the potentiality of the methodology. However, while the approach is of considerable interest, it is not especially easy to understand what is happening within it. In fact, it uses an implicit Taylor series expansion, but this is not obvious.¹ Another characteristic of the method is that it often involves evaluating the expectations of many, often complex, stochastic terms, and derivations of moment approximations can be lengthy and tedious.

¹ This work was originally undertaken to clarify this point.

In this paper an alternative, but equivalent, method of expanding the k-class estimators is presented which first expresses the estimator as a function of a set of underlying random variables, e.g. reduced form coefficient estimates. This is a natural way to proceed and it was the approach adopted by Sargan (1976), who noted that a wide variety of econometric estimators can be regarded as functions of the sample data first and second moments. Thus, the problem of approximating the estimation error by means of a Taylor series expansion is placed in a particularly familiar framework whereby the resulting methodology is better understood. Sargan also gave conditions under which the moment approximations are valid. Although the resulting expansion is essentially the same as Nagar's (in fact, Sargan referred to his expansion as a Nagar expansion), it is easier to interpret and its form is such that the task of deriving the moment approximations in the general case is considerably changed. In fact, with this approach, the major part of the analysis is concerned with deriving the relevant partial derivatives; only minimal evaluation of expectations is required. Our interest here is to examine this method of finding moment approximations [...]

[...] In regression models considerable effort has been devoted to analysing the implications for econometric estimation of departures from standard assumptions. It is equally important that a similar consideration be given to simultaneous equation models, and analysing the first and second moments will make a contribution towards this. Hence, the analysis in this paper, as well as expositing an alternative approach to obtaining moment approximations, will considerably extend the Nagar results to models in which both serial correlation and heteroscedasticity are permitted. It is shown that the bias approximation for this general case may be obtained without assuming that the disturbances are normally distributed; however, normality is required in finding an approximation to the second moment.

2. Model, notation and general assumptions

We shall consider a simultaneous equation model which includes as its first equation

$$y_1 = Y_2\beta + Z_1\gamma + u_1, \qquad (2)$$

where $y_1$ and $Y_2$ are, respectively, a $T \times 1$ vector and a $T \times g$ matrix of observations on $g+1$ endogenous variables and $Z_1$ is a $T \times k$ matrix of observations on $k$ exogenous variables. $\beta$ and $\gamma$ are, respectively, $g \times 1$ and $k \times 1$ vectors of unknown parameters and $u_1$ is a $T \times 1$ vector of stationary disturbances with positive-definite covariance matrix $E(u_1 u_1') = \Sigma_1$ and finite moments up to fourth order. The complete reduced form of the system includes

$$Y_1 = Z\Pi_1 + V_1, \qquad (3)$$

where $Y_1 = (y_1 : Y_2)$ and $Z = (Z_1 : Z_2)$ is a $T \times K$ matrix of observations on $K$ exogenous variables, $\Pi_1 = (\pi_1 : \Pi_2)$ is a $K \times (g+1)$ matrix of reduced form parameters and $V_1 = (v_1 : V_2)$ is a $T \times (g+1)$ matrix of reduced form disturbances. The transpose of each row of $V_1$ has zero mean vector and $(g+1) \times (g+1)$ positive-definite covariance matrix $\Omega_1 = (\omega_{ij})$, while the $T(g+1) \times 1$ vector $\mathrm{vec}\,V_1$ has a positive-definite covariance matrix of dimension $(T(g+1)) \times (T(g+1))$ given by $\mathrm{Cov}(\mathrm{vec}\,V_1) = \Omega_1^{vec}$ and finite moments up to fourth order. It is further assumed that:

(i) The $T \times K$ matrix $Z$ is non-stochastic and of rank $K$, and $\lim_{T\to\infty} T^{-1}Z'Z = \Sigma_{ZZ}$, a $K \times K$ positive-definite limit matrix.

(ii) Eq. (2) is over-identified so that $K > g + k$, i.e. the number of excluded exogenous variables exceeds the number required for the equation to be just identified. In cases where second moments are analysed we shall assume that [...]

² Later, Sargan (1976) extended his analysis to demonstrate the validity under more general conditions than Nagar's (but including the existence of the corresponding moments).

[...] Sargan (1974), who showed the validity when moments exist,² and we shall employ them here also even though a formal extension of Sargan's results to the more general case considered here has yet to be made. However, when 'moments' do not exist it does not follow that 'moment' approximations are worthless. They may be viewed as pseudo-moments, i.e. moments of distributions that approximate the one of interest; see, for example, Phillips (1983).
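Since everything that follows works with data generated by (2)–(3), a small simulation of this setup is a convenient reference point. The following minimal Python sketch generates one sample with a non-spherical $\mathrm{Cov}(\mathrm{vec}\,V_1)$; the dimensions, parameter values and the AR(1)-type time covariance used for $\Omega_1^{vec}$ are illustrative assumptions of the sketch, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, k, g = 200, 6, 2, 2                    # sample size, all exogenous, included exogenous, included endogenous
beta  = np.array([0.5, -0.3])                # g x 1 structural coefficients (assumed values)
gamma = np.array([1.0, 0.7])                 # k x 1 structural coefficients (assumed values)

Z  = rng.normal(size=(T, K))                 # treated as fixed and of full column rank (assumption (i))
Z1 = Z[:, :k]
Pi2 = rng.normal(size=(K, g))                # reduced-form coefficients of Y2 (assumed values)
pi1 = Pi2 @ beta + np.r_[gamma, np.zeros(K - k)]   # implied by (2)-(3): pi1 = Pi2*beta + (gamma', 0')'
Pi1 = np.column_stack([pi1, Pi2])            # K x (g+1)

# One convenient non-spherical choice: Omega_vec = Omega1 kron Sigma, Sigma an AR(1) pattern over time.
Omega1 = np.array([[1.0, 0.4, 0.3], [0.4, 1.0, 0.2], [0.3, 0.2, 1.0]])    # (g+1) x (g+1)
rho = 0.6
Sigma = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))      # T x T
Omega_vec = np.kron(Omega1, Sigma)           # Cov(vec V1), (T(g+1)) x (T(g+1))

vecV1 = np.linalg.cholesky(Omega_vec) @ rng.normal(size=T * (g + 1))
V1 = vecV1.reshape((T, g + 1), order="F")    # vec stacks columns, hence column-major reshape

Y1 = Z @ Pi1 + V1                            # reduced form (3)
y1, Y2 = Y1[:, 0], Y1[:, 1:]
u1 = V1[:, 0] - V1[:, 1:] @ beta             # structural disturbance of (2): u1 = v1 - V2*beta
assert np.allclose(y1, Y2 @ beta + Z1 @ gamma + u1)   # equation (2) holds by construction
```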

3. Preliminary results

For ease of exposition we shall introduce the proposed approach by first considering the two-stage least-squares (2SLS) estimators of the unknown parameters of (2), which are given by

$$\hat\alpha = \begin{pmatrix}\hat\beta\\ \hat\gamma\end{pmatrix} = (\hat X'\hat X)^{-1}\hat X'y_1, \qquad \hat X = (Z\hat\Pi_2 : Z_1), \qquad (4)$$

where $\hat\Pi_1 = (\hat\pi_1 : \hat\Pi_2) = (Z'Z)^{-1}Z'Y_1$ is the OLS estimator of the reduced form coefficient matrix in (3). Since the least-squares reduced form residuals are orthogonal to $Z$, and hence to $\hat X$, $y_1$ may be replaced by $\hat y = Z\hat\pi_1$ without changing the estimator, so that

$$\hat\alpha = (\hat X'\hat X)^{-1}\hat X'\hat y, \qquad \hat y = Z\hat\pi_1. \qquad (5)$$

This representation of 2SLS was considered by Harvey and Phillips (1980). From (5) it is seen that, conditional on the exogenous variables, the 2SLS estimators depend only on $\hat\Pi_1 = (\hat\pi_1 : \hat\Pi_2)$. Hence, on putting

$$\hat\alpha = (\hat\beta', \hat\gamma')', \qquad (6)$$

we may write

$$\hat\alpha = f(\mathrm{vec}\,\hat\Pi_1), \qquad (7)$$

where $\mathrm{vec}\,\hat\Pi_1$ is obtained by stacking the columns of $\hat\Pi_1$ and $f$ is a vector-valued function. Next note that on substituting $Y_2 = Z\Pi_2 + V_2$ from (3) into (2) and rearranging, we may write

$$y_1 = X\alpha + v_1, \qquad (8)$$

where

$$X = (Z\Pi_2 : Z_1) \qquad (9)$$

and $v_1 = u_1 + V_2\beta$ is the reduced form disturbance in the first equation of (3).

³ Shao employs more restrictive assumptions than is done here. Heteroscedasticity of disturbances is allowed but not dependence.

Consider the hypothetical ordinary least-squares (OLS) estimator in (8),

$$\tilde\alpha = (X'X)^{-1}X'y_1. \qquad (10)$$

This estimator in (10) is clearly unbiased, so that $E(\tilde\alpha) = \alpha$, where

$$E(\tilde\alpha) = (X'X)^{-1}X'E(y_1) = (X'X)^{-1}X'Z\pi_1 = \begin{pmatrix}\Pi_2'Z'Z\Pi_2 & \Pi_2'Z'Z_1\\ Z_1'Z\Pi_2 & Z_1'Z_1\end{pmatrix}^{-1}\begin{pmatrix}\Pi_2'Z'Z\pi_1\\ Z_1'Z\pi_1\end{pmatrix}, \qquad (11)$$

on using (9). It follows that

$$E(\tilde\alpha) = \alpha = f(\mathrm{vec}\,\Pi_1).$$

We have thus established the important result that $\hat\alpha$, the 2SLS estimator of $\alpha$, can be written in the form $\hat\alpha = f(\mathrm{vec}\,\hat\Pi_1)$, where $\alpha = f(\mathrm{vec}\,\Pi_1)$.
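The equality $\hat\alpha = f(\mathrm{vec}\,\hat\Pi_1)$ is easy to confirm numerically. The sketch below computes 2SLS in the usual projection form and again from the representation (5) based on $\hat X = (Z\hat\Pi_2 : Z_1)$ and $\hat y = Z\hat\pi_1$; all numerical settings are illustrative assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
T, K, k, g = 200, 6, 2, 2
beta, gamma = np.array([0.5, -0.3]), np.array([1.0, 0.7])
Z = rng.normal(size=(T, K)); Z1 = Z[:, :k]
Pi2 = rng.normal(size=(K, g))
V1 = rng.normal(size=(T, g + 1)) @ np.linalg.cholesky(
        np.array([[1.0, 0.3, 0.2], [0.3, 1.0, 0.1], [0.2, 0.1, 1.0]])).T
pi1 = Pi2 @ beta + np.r_[gamma, np.zeros(K - k)]
Y1 = Z @ np.column_stack([pi1, Pi2]) + V1
y1, Y2 = Y1[:, 0], Y1[:, 1:]

# 2SLS computed in the usual way via the projection P_Z = Z(Z'Z)^{-1}Z'
W = np.column_stack([Y2, Z1])                      # regressors of eq. (2)
PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
alpha_2sls = np.linalg.solve(W.T @ PZ @ W, W.T @ PZ @ y1)

# The same estimator as a function of the OLS reduced-form estimates, eqs. (5)-(7)
Pi1_hat = np.linalg.solve(Z.T @ Z, Z.T @ Y1)       # (pi1_hat : Pi2_hat)
X_hat = np.column_stack([Z @ Pi1_hat[:, 1:], Z1])  # X_hat = (Z Pi2_hat : Z1)
y_hat = Z @ Pi1_hat[:, 0]                          # y_hat = Z pi1_hat
alpha_f = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y_hat)

print(np.max(np.abs(alpha_2sls - alpha_f)))        # ~1e-12: alpha_hat = f(vec Pi1_hat)
```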

4. The proposed approach

For simplicity we shall examine the $i$th component of (6), which we write in the form

$$\hat\alpha_i = f_i(\mathrm{vec}\,\hat\Pi_1), \qquad i = 1, 2, \dots, g+k. \qquad (12)$$

It is assumed that, conditional on the exogenous variables, the function $f_i(\cdot)$ is differentiable with uniformly bounded derivatives up to fourth order in a neighbourhood of $\mathrm{vec}\,\Pi_1$ as $T\to\infty$, and that the components of $\mathrm{vec}\,\hat\Pi_1$ have finite moments up to, at least, fourth order. This ensures that the approximations, based on the Taylor expansion, will have an error with an order of magnitude which is well determined. These assumptions are similar to those proposed by Sargan (1976, p. 430), but see also Shao (1988), who gives conditions for determining the order of the error term when approximating the moments of real-valued smooth functions of regression coefficient estimates,³ a situation which has much in common with that examined here. Expanding the function in a Taylor series about the point $\mathrm{vec}\,\Pi_1$ yields the result

$$f_i(\mathrm{vec}\,\hat\Pi_1) = f_i(\mathrm{vec}\,\Pi_1) + (\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_i^{(1)} + \frac{1}{2!}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_i^{(2)}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))$$
$$\qquad + \frac{1}{3!}\sum_{r=1}^{K}\sum_{s=1}^{g+1}(\hat\pi_{rs} - \pi_{rs})(\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_{i,rs}^{(3)}(\mathrm{vec}(\hat\Pi_1 - \Pi_1)) + o_p(T^{-3/2}), \qquad (13)$$

where $f_i^{(1)}$ and $f_i^{(2)}$ are, respectively, the vector of first-order and the matrix of second-order partial derivatives of $f_i$ with respect to $\mathrm{vec}\,\hat\Pi_1$, and the $f_{i,rs}^{(3)}$ are matrices of third-order partial derivatives defined as

$$f_{i,rs}^{(3)} = \frac{\partial f_i^{(2)}}{\partial\pi_{rs}}, \qquad r = 1, \dots, K, \quad s = 1, \dots, g+1.$$

All the derivatives are evaluated at $\hat\Pi_1 = \Pi_1$. The term $\mathrm{vec}(\hat\Pi_1 - \Pi_1)$ is given by

$$\mathrm{vec}(\hat\Pi_1 - \Pi_1) = (I \otimes (Z'Z)^{-1}Z')\,\mathrm{vec}\,V_1 \qquad (14)$$

and is $O_p(T^{-1/2})$.

It is of interest to compare (13) with the counterpart employed by Nagar. Since (13) is an asymptotic expansion in which the estimation error is expanded in terms of stochastic order $T^{-1/2}$, $T^{-1}$ and $T^{-3/2}$, with a remainder term which is $o_p(T^{-3/2})$, it must be equivalent to the Nagar expansion in which successive terms are of the same stochastic order. The corresponding terms will be the same even though they look quite different. To see this, consider the estimation error in (4), which can be written as [...] (16). Nagar's analysis commences by putting [...] (17), and $\hat\alpha_i - \alpha_i = e_i'(\hat\alpha - \alpha)$, where $e_i$ is a $(g+k)\times 1$ unit vector with $i$th component unity. Upon expanding the inverse matrix in (16), neglecting terms of smaller stochastic order than $T^{-3/2}$ and substituting from (17), we have the asymptotic expansion (18) [...], whose $O_p(T^{-1/2})$ term is the first term in (13). By using the same approach, the $O_p(T^{-1})$ part of (18), which is given by the second, third and fourth terms, can be shown to equal the second term in (13), while the remaining terms, which are $O_p(T^{-3/2})$, will [...]

[...] parameters associated with the excluded exogenous variables in the included endogenous variable reduced form equations as local to zero, so that $\beta$ in (2) is nearly non-identified. They then use local-to-zero asymptotics, in a model which is more restricted than the one assumed here, to show that conventional asymptotics fail even in large samples, whereby 2SLS may be badly biased. It is, of course, of interest to consider the identification of the parameters in the case examined here when instruments are weak, and perhaps the approach taken here will be helpful in doing this. However, such issues lie outside the scope of the present paper.
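The role of the first-order term of (13) can be illustrated numerically: treating the $i$th 2SLS coefficient as the function $f_i(\mathrm{vec}\,\hat\Pi_1)$ of (5)–(7), the estimation error $f_i(\mathrm{vec}\,\hat\Pi_1) - f_i(\mathrm{vec}\,\Pi_1)$ should be well approximated by the linear term computed with a numerical gradient at $\mathrm{vec}\,\Pi_1$. The sketch below does this for one simulated sample; the design and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
T, K, k, g = 400, 6, 2, 2
beta, gamma = np.array([0.5, -0.3]), np.array([1.0, 0.7])
Z = rng.normal(size=(T, K)); Z1 = Z[:, :k]
Pi2 = rng.normal(size=(K, g))
pi1 = Pi2 @ beta + np.r_[gamma, np.zeros(K - k)]
Pi1 = np.column_stack([pi1, Pi2])

def f(P):
    """2SLS coefficients viewed as a function of the K x (g+1) matrix P, as in (5)-(7)."""
    X = np.column_stack([Z @ P[:, 1:], Z1])
    return np.linalg.solve(X.T @ X, X.T @ (Z @ P[:, 0]))

# one draw of the reduced-form estimate (spherical disturbances are enough for this check)
V1 = rng.normal(size=(T, g + 1))
Pi1_hat = np.linalg.solve(Z.T @ Z, Z.T @ (Z @ Pi1 + V1))

# numerical gradient of each f_i at vec Pi1 (central differences, column-major vec ordering)
n, eps = K * (g + 1), 1e-6
grad = np.zeros((n, g + k))
for j in range(n):
    d = np.zeros(n); d[j] = eps
    dP = d.reshape((K, g + 1), order="F")
    grad[j] = (f(Pi1 + dP) - f(Pi1 - dP)) / (2 * eps)

err = f(Pi1_hat) - f(Pi1)                             # exact estimation error; f(Pi1) = alpha
lin = grad.T @ (Pi1_hat - Pi1).flatten(order="F")     # first-order Taylor term in (13)
print(np.abs(err - lin))   # small; the gap shrinks as T grows, consistent with (13)
```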

5. The bias and moment matrix of the 2SLS estimator

Noting that the third term in (13) is $O_p(T^{-1})$, it is clear that the bias to $O(T^{-1})$ is obtained by taking the expectations of the second (which has mean zero) and third terms. Using (14), the bias approximation is

$$E(f_i(\mathrm{vec}\,\hat\Pi_1)) - f_i(\mathrm{vec}\,\Pi_1) = E(\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_i^{(1)} + E\left(\frac{1}{2!}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_i^{(2)}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))\right)$$
$$= \frac{1}{2!}\,\mathrm{tr}\{f_i^{(2)}(I \otimes (Z'Z)^{-1}Z')\,\Omega_1^{vec}\,(I \otimes Z(Z'Z)^{-1})\} + o(T^{-1}). \qquad (19)$$
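Formula (19) can be evaluated directly by replacing $f_i^{(2)}$ with a numerical Hessian of $f_i$ at $\mathrm{vec}\,\Pi_1$, which avoids any closed-form derivatives. The sketch below does this for an assumed non-spherical $\Omega_1^{vec}$ and compares the result with a Monte Carlo estimate of the bias of the same 2SLS component; all data-generating choices are illustrative assumptions, and agreement is limited by the $o(T^{-1})$ error and Monte Carlo noise.

```python
import numpy as np

rng = np.random.default_rng(3)
T, K, k, g = 100, 6, 2, 2
beta, gamma = np.array([0.5, -0.3]), np.array([1.0, 0.7])
Z = rng.normal(size=(T, K)); Z1 = Z[:, :k]
Pi2 = rng.normal(size=(K, g))
pi1 = Pi2 @ beta + np.r_[gamma, np.zeros(K - k)]
Pi1 = np.column_stack([pi1, Pi2])
n = K * (g + 1)

# an assumed non-spherical Cov(vec V1): Omega1 kron AR(1) covariance
Omega1 = np.array([[1.0, 0.4, 0.3], [0.4, 1.0, 0.2], [0.3, 0.2, 1.0]])
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
Omega_vec = np.kron(Omega1, Sigma)
L = np.linalg.cholesky(Omega_vec)

def f(P):                      # 2SLS as a function of the reduced-form coefficients, eqs. (5)-(7)
    X = np.column_stack([Z @ P[:, 1:], Z1])
    return np.linalg.solve(X.T @ X, X.T @ (Z @ P[:, 0]))

def unvec(v):                  # column-stacking (vec) convention
    return v.reshape((K, g + 1), order="F")

i, eps = 0, 1e-4               # study the first component of alpha (i.e. beta_1)
f2 = np.zeros((n, n))          # numerical Hessian f_i^(2) at vec Pi1
for a in range(n):
    for b in range(n):
        ea, eb = np.zeros(n), np.zeros(n); ea[a] = eps; eb[b] = eps
        f2[a, b] = (f(Pi1 + unvec(ea + eb))[i] - f(Pi1 + unvec(ea - eb))[i]
                    - f(Pi1 + unvec(eb - ea))[i] + f(Pi1 - unvec(ea + eb))[i]) / (4 * eps**2)

# eq. (19): bias ~ (1/2) tr{ f_i^(2) (I kron (Z'Z)^{-1}Z') Omega_vec (I kron Z(Z'Z)^{-1}) }
A = np.kron(np.eye(g + 1), np.linalg.solve(Z.T @ Z, Z.T))
bias_approx = 0.5 * np.trace(f2 @ A @ Omega_vec @ A.T)

# Monte Carlo bias of the same component for comparison
reps, acc = 5000, 0.0
ZtZ_inv_Zt = np.linalg.solve(Z.T @ Z, Z.T)
for _ in range(reps):
    V1 = (L @ rng.normal(size=T * (g + 1))).reshape((T, g + 1), order="F")
    Pi1_hat = ZtZ_inv_Zt @ (Z @ Pi1 + V1)
    acc += f(Pi1_hat)[i] - beta[i]
print(bias_approx, acc / reps)  # broadly comparable; exact agreement is limited by o(T^-1) and MC noise
```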

Substituting for the partial derivatives from Appendix A yields the following result:

Theorem 1. Let $\hat\alpha_i$ be the $i$th component of the $g+k$ component 2SLS estimator $\hat\alpha$ as given in (6); then the bias of $\hat\alpha_i$ to order $T^{-1}$ is given by

$$E(\hat\alpha_i - \alpha_i) = \mathrm{tr}\{[H'(X'X)^{-1}e_i\beta_0' \otimes (P_Z - P_X)]\,\Omega_1^{vec} - I^*[X(X'X)^{-1}e_i\beta_0' \otimes H'(X'X)^{-1}X']\,\Omega_1^{vec}\} + o(T^{-1}), \qquad (20)$$

where $I^*$ is a $(T(g+1)) \times (T(g+1))$ commutation matrix, see for example Magnus and Neudecker (1979), which is partitioned into $T(g+1)$ submatrices of order $(T, g+1)$ such that the $p,q$th submatrix has unity in its $q,p$th position and zeroes elsewhere, $p = 1, \dots, g+1$, $q = 1, \dots, T$, and where the other terms are all defined in Appendix A, with $P_Z = Z(Z'Z)^{-1}Z'$ and $P_X = X(X'X)^{-1}X'$.

⁴ This is because the trace of a Kronecker product is the product of the traces.

⁵ Actually their result is not quite as general as this. Here we use a simple extension.

Nagar (1959) considered the case of serially independent and homoscedastic structural disturbances, which restricts $\Omega_1^{vec}$ to the form

$$\Omega_1^{vec} = \Omega_1 \otimes I_T, \qquad (21)$$

where $\Omega_1$ is the $(g+1) \times (g+1)$ covariance matrix of the rows of $V_1$ given in (3), while Phillips (1978) examined the case of structural disturbances generated by the same autoregressive process, which again leads to an $\Omega_1^{vec}$ in matrix product form with $\Omega_1^{vec} = \Omega_1 \otimes \Sigma$, where $\mathrm{Cov}(u_j) = \sigma_j^2\Sigma$, $j = 1, 2, \dots, G$.
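These two restricted covariance structures are easy to construct explicitly. The short sketch below builds $\Omega_1^{vec}$ for Nagar's case (21) and for a common AR(1) pattern of the Phillips (1978) type, and checks the trace factorisation $\mathrm{tr}(A\otimes B) = (\mathrm{tr}\,A)(\mathrm{tr}\,B)$ invoked in footnote 4; the particular matrices used are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
T, g = 50, 2
Omega1 = np.array([[1.0, 0.4, 0.3], [0.4, 1.0, 0.2], [0.3, 0.2, 1.0]])   # (g+1) x (g+1), assumed

# (21): Nagar's case -- serially independent, homoscedastic rows of V1
Omega_vec_nagar = np.kron(Omega1, np.eye(T))

# Phillips (1978)-type case: one common AR(1) pattern Sigma for every disturbance
rho = 0.7
Sigma = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
Omega_vec_ar1 = np.kron(Omega1, Sigma)

# Footnote 4: with a Kronecker structure, traces factor, which is what collapses
# the general bias expression (20) to a simple form.
A, B = rng.normal(size=(3, 3)), rng.normal(size=(T, T))
print(np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B)))    # True
```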

If (21) is substituted into (20), Nagar's result is readily obtained. When $\Omega_1^{vec}$ can be written in matrix product form, the bias expression in (20) can be reduced to a simple form.⁴ Notice that Theorem 1 does not require normality of the disturbances. This assumption was employed by Nagar and by Kadane (1971), but it is seen to be unnecessary in deriving the bias to $O(T^{-1})$. It will be used, however, when deriving the second moment to $O(T^{-2})$. Before obtaining this result we note the following:

Lemma. If $\eta$ is a random normal vector with mean zero and positive-definite covariance matrix $\Psi$, i.e. $\eta \sim N(0, \Psi)$, and if $A$ and $B$ are any conformable matrices, then

$$E(\eta'A\eta)(\eta'B\eta) = (\mathrm{tr}\,A\Psi)(\mathrm{tr}\,B\Psi) + \mathrm{tr}\,A\Psi B\Psi + \mathrm{tr}\,A\Psi B'\Psi. \qquad (22)$$

A proof of this lemma appears, for example, in Magnus and Neudecker⁵ (1979), and it is used to derive the expectation of the square of the last three terms in (13), where only terms up to $O(T^{-2})$ are retained.
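The lemma is a standard moment identity for quadratic forms in normal vectors and is easily checked by simulation. The sketch below compares a Monte Carlo estimate of the left-hand side of (22) with the exact right-hand side for arbitrary (not necessarily symmetric) $A$ and $B$; the specific matrices and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
C = rng.normal(size=(n, n)); Psi = C @ C.T + n * np.eye(n)   # a positive-definite Psi
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))      # arbitrary conformable matrices

# exact right-hand side of (22)
rhs = (np.trace(A @ Psi) * np.trace(B @ Psi)
       + np.trace(A @ Psi @ B @ Psi)
       + np.trace(A @ Psi @ B.T @ Psi))

# Monte Carlo estimate of E[(eta'A eta)(eta'B eta)] for eta ~ N(0, Psi)
L = np.linalg.cholesky(Psi)
eta = rng.normal(size=(200_000, n)) @ L.T
qA = np.einsum('ti,ij,tj->t', eta, A, eta)
qB = np.einsum('ti,ij,tj->t', eta, B, eta)
print(np.mean(qA * qB), rhs)   # agree up to Monte Carlo error
```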

We thus consider

$$E(\hat\alpha_i - \alpha_i)^2 = E\Big\{(\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_i^{(1)} + \frac{1}{2!}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_i^{(2)}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))$$
$$\qquad + \frac{1}{3!}\sum_{r=1}^{K}\sum_{s=1}^{g+1}(\hat\pi_{rs} - \pi_{rs})(\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_{i,rs}^{(3)}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))\Big\}^2 + o(T^{-2})$$
$$= E\Big\{\big((\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_i^{(1)}\big)^2 + \Big(\frac{1}{2!}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_i^{(2)}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))\Big)^2\Big\}$$
$$\qquad + E\Big\{2\,(\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_i^{(1)}\cdot\frac{1}{3!}\sum_{r=1}^{K}\sum_{s=1}^{g+1}(\hat\pi_{rs} - \pi_{rs})(\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_{i,rs}^{(3)}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))\Big\} + o(T^{-2}). \qquad (23)$$

In obtaining the expression in (23) we have ignored certain cross product terms on the grounds that the expected value of a product of an odd number of normal random variables with mean zero is itself zero. Using (14), we may write the first two terms in (23) as in (24) [...], where the second term in (24) is obtained by applying the lemma. To complete the derivation of the second moment to $O(T^{-2})$ we shall need to consider the [...]

[...] straightforward and so is omitted. It was shown in Section 4 that the first stochastic term in (13) is

$$(\mathrm{vec}(\hat\Pi_1 - \Pi_1))'f_i^{(1)} = (\mathrm{vec}\,V_1)'(\beta_0 \otimes XQe_i), \qquad (26)$$

where $Q = (X'X)^{-1}$. The expected value of the product of (25) and (26) yields the last term of (23), so we shall need to evaluate this in deriving the second moment of 2SLS to $O(T^{-2})$. Note that the scalar $\hat\pi_{rs} - \pi_{rs}$ is given by

$$\hat\pi_{rs} - \pi_{rs} = e_{r+(s-1)K}^{*\prime}\,\mathrm{vec}(\hat\Pi_1 - \Pi_1), \qquad (27)$$

where $e_{r+(s-1)K}^{*}$ is a $K(g+1) \times 1$ unit vector with its $(r+(s-1)K)$th component unity and all other components zero. In fact, it will be convenient to write $e_{r+(s-1)K}^{*}$ in the form $e_{r+(s-1)K}^{*} = e_s^{*} \otimes e_r^{*}$, where $e_s^{*}$ is a $(g+1) \times 1$ unit vector with $s$th component unity and $e_r^{*}$ is a $K \times 1$ unit vector with the $r$th component unity. We may then rewrite (27) as

$$\hat\pi_{rs} - \pi_{rs} = (e_s^{*\prime} \otimes e_r^{*\prime}(Z'Z)^{-1}Z')\,\mathrm{vec}\,V_1. \qquad (28)$$

Using (25)–(28), the term of interest in (23) is just the expected value of (29) [...]. The expected value of (29) may now be written as [...]

⁶ Notice that we do not include members of the k-class for which $k > 1$. Such estimators are known not to possess moments; see Kinal (1980). It is well known that the limited information maximum likelihood (LIML) estimator, which corresponds to a stochastic choice for $k$ with $k > 1$, does not possess moments.

Adding (24) and (31), after substituting into (24) for $f_i^{(1)}$ and $f_i^{(2)}$ from Appendix A, yields:

Theorem 2. Let $\hat\alpha_i$ be the $i$th component of the $g+k$ component 2SLS estimator $\hat\alpha$ as given in (6); then the second moment of $\hat\alpha_i$ to $O(T^{-2})$ is given by

$$E(\hat\alpha_i - \alpha_i)^2 = \mathrm{tr}\{(\beta_0\beta_0' \otimes XQe_ie_i'QX')\,\Omega_1^{vec}\}$$
$$\quad + \mathrm{tr}\{([H'Qe_i\beta_0' \otimes (P_X - P_Z)] - I^*[XQe_i\beta_0' \otimes H'QX'])\,\Omega_1^{vec}\,([H'Qe_i\beta_0' \otimes (P_X - P_Z)] - I^*[XQe_i\beta_0' \otimes H'QX']$$
$$\qquad\qquad + [H'Qe_i\beta_0' \otimes (P_X - P_Z) - I^*(XQe_i\beta_0' \otimes H'QX')]')\,\Omega_1^{vec}\}$$
$$\quad + \big(\mathrm{tr}\{([H'Qe_i\beta_0' \otimes (P_X - P_Z)] - I^*[XQe_i\beta_0' \otimes H'QX'])\,\Omega_1^{vec}\}\big)^2$$
$$\quad + 2\sum_{r=1}^{K}\sum_{s=1}^{g+1}\{(\mathrm{tr}\,A_{rs}\Omega_1^{vec})(\mathrm{tr}\,B_{rs}\Omega_1^{vec}) + \mathrm{tr}\,A_{rs}\Omega_1^{vec}B_{rs}\Omega_1^{vec} + \mathrm{tr}\,A_{rs}\Omega_1^{vec}B_{rs}'\Omega_1^{vec}\} + o(T^{-2}). \qquad (32)$$

The off-diagonal components of the second moment matrix are easily obtained using a minor variation of this approach. Notice that, by working with the expansion in (13), we have obtained the moment approximations with relatively little effort. The value of the approach adopted here, as opposed to that used by Nagar (who analysed the moments for the complete vector $\hat\alpha$), is that the terms are organised in a way which is particularly suitable for the required analysis. The foregoing results are easily extended to other k-class estimators⁶ where $0 < k \le 1$ is non-stochastic. By suitably manipulating these estimators, it is possible to express them in the same form as (5), so that no new analysis is required to find approximations to the first and second moments. Results for the k-class which correspond to those in Theorems 1 and 2 for 2SLS are readily found. Replacing the matrix $\Pi$ [...]

⁷ One is tempted to believe that the more efficient the least-squares estimation of $\mathrm{vec}\,\hat\Pi_1$, the smaller the bias and second moment will be to the order of the approximation, but it has not been possible to show this. However, the asymptotic variance of $\hat\alpha_i$ will depend directly on the asymptotic covariance matrix of $\mathrm{vec}\,\hat\Pi_1$; see (26).

⁸ This will depend on what is assumed about the process generating the structural disturbances, since that will also determine the way in which the reduced form disturbances are generated. From the relationship $V = UB^{-1}$ it follows that $\mathrm{Cov}(\mathrm{vec}\,V)$ can be estimated consistently whenever the covariances between the rows of $U$ can be. This will be possible, for example, when the structural disturbances are generated by a vector autoregression.

[...] $\mathrm{Cov}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))$. However, it does not seem possible to make general statements about the way the moments vary with $\Omega_1^{vec}$ unless some structure,⁷ e.g. Kronecker product form, is imposed on $\Omega_1^{vec}$. One inhibiting factor in doing this for the bias is the indefiniteness of $f_i^{(2)}$; see (A.15) and (19). Of course, having the general expressions will always enable a numerical comparison of the approximate moments under different structures. One obvious reason for deriving the coefficient estimator bias to $O(T^{-1})$ is to facilitate the construction of a less biased estimator. This can be done by subtracting from the estimator an estimate of the bias whose expected value equals the bias to $O(T^{-1})$. Examining (20), it is apparent that a bias correction requires a consistent estimate of $\Pi_2$, which with $Z$ forms $X$ (see (9)), and consistent estimates of $\beta$, which is contained in $\beta_0$, and of $\Omega_1^{vec}$ (or of $\mathrm{Cov}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))$; see (19)). Consistent estimates of $\Pi_2$ and $\beta$ may be found directly using OLS and 2SLS, respectively, but consistent estimation of $\mathrm{Cov}(\mathrm{vec}(\hat\Pi_1 - \Pi_1))$ may be more difficult even though the model involved is just a system of regression equations containing the same explanatory variables.⁸ Of course, under departures from classical assumptions it may not be appropriate merely to correct for bias, since a different approach to estimation, e.g. generalised 2SLS or some approximation to it, may be preferred. Note that a variance approximation to order $T^{-2}$ is obtained when the squared bias is subtracted from the second moment. If the expectation of a variance estimator to order $T^{-2}$ is compared to the approximation given here, the difference represents the bias to the same order. In fact, Kiviet and Phillips (1999) use this to show that under standard assumptions the usual 2SLS variance estimator is, in general, upwards biased.

6. Conclusion

In this paper we have derived expressions for the bias and second moments of [...] understood, and the resulting Taylor expansion is easily interpreted. Furthermore, it leads to the derivation of results which are much more general than those of Nagar (1959) and Phillips (1978). In addition, there are obvious extensions of this work, in particular to generalised least-squares procedures and to higher order moment approximations.

A feature of the paper is that the expansions involve the derivatives of the functions with respect to the underlying random variables, so that the analysis required to find moment approximations is different from that used for the standard Nagar expansion. As we have seen, with the approach developed here the major work is concerned with finding the relevant derivatives, with little effort required to evaluate expectations, whereas in the direct application of Nagar's method large numbers of expectations have to be evaluated.

Acknowledgements

Generous assistance by Jan Magnus and Heinz Neudecker in deriving the results in Appendix A is gratefully acknowledged. Helpful comments and criticisms were received from referees and from Gordon Fisher, David Giles, Nanak Kakwani, Essie Maasoumi, Adrian Pagan, Tom Rothenberg, Chiang Tsiao and Aman Ullah. I also benefited from discussions with Rob Engle.

Appendix A

A.1. Derivation of the partial derivatives $f_i^{(1)}$ and $f_i^{(2)}$

We shall write the estimator (4) in the form

$$\hat\alpha = (\hat X'\hat X)^{-1}\hat X'\hat y, \qquad (A.1)$$

$$\hat X = (Z\hat\Pi_2 : Z_1), \qquad \hat y = Z\hat\pi_1. \qquad (A.2)$$

To find the first- and second-order partial derivatives of $\hat\alpha$ w.r.t. $\mathrm{vec}\,\hat\Pi_1$, we first find the total derivatives $d\hat\alpha$ and $d^2\hat\alpha$ in (A.3) and (A.4). [...]


Now,

$$d(\hat X'\hat X)^{-1}\hat X' = -(\hat X'\hat X)^{-1}\,d(\hat X'\hat X)\,(\hat X'\hat X)^{-1}\hat X' + (\hat X'\hat X)^{-1}(d\hat X)'$$
$$= -(\hat X'\hat X)^{-1}(d\hat X)'\hat X(\hat X'\hat X)^{-1}\hat X' - (\hat X'\hat X)^{-1}\hat X'(d\hat X)(\hat X'\hat X)^{-1}\hat X' + (\hat X'\hat X)^{-1}(d\hat X)'$$
$$= (\hat X'\hat X)^{-1}(d\hat X)'[I - \hat X(\hat X'\hat X)^{-1}\hat X'] - (\hat X'\hat X)^{-1}\hat X'(d\hat X)(\hat X'\hat X)^{-1}\hat X', \qquad (A.5)$$

and substituting from (A.5) into (A.3), using (A.1) and rearranging yields

$$d\hat\alpha = (\hat X'\hat X)^{-1}(d\hat X)'[I - \hat X(\hat X'\hat X)^{-1}\hat X']\hat y + (\hat X'\hat X)^{-1}\hat X'[d\hat y - (d\hat X)\hat\alpha]. \qquad (A.6)$$

We may write

$$d\hat y - (d\hat X)\hat\alpha = Z\,d\hat\pi_1 - (Z(d\hat\Pi_2) : 0)\begin{pmatrix}\hat\beta\\ \hat\gamma\end{pmatrix} = Z(d\hat\Pi_1)\hat\beta_0 \qquad (A.7)$$

with $\hat\beta_0 = (1, -\hat\beta')'$. Evaluating at $\Pi_1$, we may put $(I - X(X'X)^{-1}X')\bar y = 0$ and $\beta_0 = (1, -\beta')'$. Making these substitutions into (A.6) and using a result noted in Neudecker (1969, p. 956), we have

$$\frac{\partial\,\mathrm{vec}\,\hat\alpha}{\partial\,\mathrm{vec}\,\hat\Pi_1} = \beta_0 \otimes Z'X(X'X)^{-1}, \qquad (A.8)$$

$$f_i^{(1)} = \frac{\partial\hat\alpha_i}{\partial\,\mathrm{vec}\,\hat\Pi_1} = \beta_0 \otimes Z'X(X'X)^{-1}e_i, \qquad (A.9)$$

where $e_i$ is a $(k+g) \times 1$ unit vector. Using (A.4) and substituting from (A.5), we have

$$d^2\hat\alpha = d[(\hat X'\hat X)^{-1}(d\hat X)'](I - \hat X(\hat X'\hat X)^{-1}\hat X')\hat y + (\hat X'\hat X)^{-1}(d\hat X)'\,d[(I - \hat X(\hat X'\hat X)^{-1}\hat X')\hat y]$$
$$\quad + d[(\hat X'\hat X)^{-1}\hat X'](d\hat y - (d\hat X)\hat\alpha) + (\hat X'\hat X)^{-1}\hat X'\,d[d\hat y - (d\hat X)\hat\alpha]$$
$$= d[(\hat X'\hat X)^{-1}(d\hat X)'](I - \hat X(\hat X'\hat X)^{-1}\hat X')\hat y + (\hat X'\hat X)^{-1}(d\hat X)'\,d\hat y - (\hat X'\hat X)^{-1}(d\hat X)'(d\hat X)\hat\alpha - (\hat X'\hat X)^{-1}(d\hat X)'\hat X\,d\hat\alpha$$
$$\quad + (\hat X'\hat X)^{-1}(d\hat X)'[I - \hat X(\hat X'\hat X)^{-1}\hat X'](d\hat y - (d\hat X)\hat\alpha) - (\hat X'\hat X)^{-1}\hat X'(d\hat X)(\hat X'\hat X)^{-1}\hat X'(d\hat y - (d\hat X)\hat\alpha)$$
$$\quad + (\hat X'\hat X)^{-1}\hat X'\,d[d\hat y - (d\hat X)\hat\alpha]. \qquad (A.10)$$

Noting that

$$d(d\hat y - (d\hat X)\hat\alpha) = -(d\hat X)\,d\hat\alpha, \qquad (A.11)$$

substituting from (A.7) and (A.11) into (A.10), and rearranging yields

$$d^2\hat\alpha = d[(\hat X'\hat X)^{-1}(d\hat X)'](I - \hat X(\hat X'\hat X)^{-1}\hat X')\hat y + (\hat X'\hat X)^{-1}(d\hat X)'Z(d\hat\Pi_1)\hat\beta_0$$
$$\quad - (\hat X'\hat X)^{-1}(d\hat X)'\hat X(\hat X'\hat X)^{-1}(d\hat X)'[I - \hat X(\hat X'\hat X)^{-1}\hat X']\hat y - (\hat X'\hat X)^{-1}(d\hat X)'\hat X(\hat X'\hat X)^{-1}\hat X'Z(d\hat\Pi_1)\hat\beta_0$$
$$\quad + (\hat X'\hat X)^{-1}(d\hat X)'[I - \hat X(\hat X'\hat X)^{-1}\hat X']Z(d\hat\Pi_1)\hat\beta_0 - (\hat X'\hat X)^{-1}\hat X'(d\hat X)(\hat X'\hat X)^{-1}\hat X'Z(d\hat\Pi_1)\hat\beta_0$$
$$\quad - (\hat X'\hat X)^{-1}\hat X'(d\hat X)(\hat X'\hat X)^{-1}(d\hat X)'(I - \hat X(\hat X'\hat X)^{-1}\hat X')\hat y - (\hat X'\hat X)^{-1}\hat X'(d\hat X)(\hat X'\hat X)^{-1}\hat X'Z(d\hat\Pi_1)\hat\beta_0$$
$$= 2(\hat X'\hat X)^{-1}(d\hat X)'[I - \hat X(\hat X'\hat X)^{-1}\hat X']Z(d\hat\Pi_1)\hat\beta_0 - 2(\hat X'\hat X)^{-1}\hat X'(d\hat X)(\hat X'\hat X)^{-1}\hat X'Z(d\hat\Pi_1)\hat\beta_0$$
$$\quad + \text{terms involving } (I - \hat X(\hat X'\hat X)^{-1}\hat X')\hat y. \qquad (A.12)$$

Using the fact that

dXK "(Z(dPK 2)F0)"Z(dPK 2)[I

gF0],

where the zero matrix isg]kand evaluating (A.12) at PK 1"P

1, we have

d2a"2(X@X)~1

A

Ig

0

B

(dP2)@Z@(I!X(X@X)~1X@)Z(dP1)b0

!2(X@X)~1XZ(dP

2)[IgF0](X@X)~1X@Z(dP1)b0. Noting also that

A

Ig

0

B

(0Ig)(dP1)@"

A

I

g

0

B

(dP2)@, and puttingH"

A

0 I

g 0 0

B

, and

M"Z@(I!X(X@X)~1X@)Z, we may write 1

2d2a"(X@X)~1H(dP1)@M(dP1)b0!(X@X)~1

X@Z(dP1)H@(X@X)~1X@Z(dP1)b0. (A.13) Theith component of this vector is

$$\tfrac{1}{2}\,d^2\alpha_i = \tfrac{1}{2}e_i'(d^2\alpha) = e_i'(X'X)^{-1}H(d\Pi_1)'M(d\Pi_1)\beta_0 - e_i'(X'X)^{-1}X'Z(d\Pi_1)H'(X'X)^{-1}X'Z(d\Pi_1)\beta_0$$
$$= (\mathrm{vec}\,d\Pi_1)'(H'(X'X)^{-1}e_i\beta_0' \otimes M)(\mathrm{vec}\,d\Pi_1) - (\mathrm{vec}\,d\Pi_1)'[I^*(Z'X(X'X)^{-1}e_i\beta_0' \otimes H'(X'X)^{-1}X'Z)](\mathrm{vec}\,d\Pi_1),$$

where $I^*$ is a $K(g+1) \times K(g+1)$ commutation matrix. With minor rearrangement we have, finally,

$$\tfrac{1}{2}\,d^2\alpha_i = (d\,\mathrm{vec}\,\Pi_1)'(H' \otimes I)[(X'X)^{-1} \otimes M - \{(X'X)^{-1}X'Z \otimes Z'X(X'X)^{-1}\}I^{*\prime}](e_i \otimes I)(\beta_0' \otimes I)(d\,\mathrm{vec}\,\Pi_1), \qquad (A.14)$$

from which we may deduce, see Neudecker (1969, p. 957),

$$f_i^{(2)} = \frac{\partial^2\alpha_i}{(\partial\,\mathrm{vec}\,\Pi_1)(\partial\,\mathrm{vec}\,\Pi_1)'} = \{(H' \otimes I)[(X'X)^{-1} \otimes M - ((X'X)^{-1}X'Z \otimes Z'X(X'X)^{-1})I^{*\prime}](e_i \otimes I)(\beta_0' \otimes I)\}$$
$$\quad + \{(H' \otimes I)[(X'X)^{-1} \otimes M - \{(X'X)^{-1}X'Z \otimes Z'X(X'X)^{-1}\}I^{*\prime}](e_i \otimes I)(\beta_0' \otimes I)\}'. \qquad (A.15)$$
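The closed form (A.9) for $f_i^{(1)}$ can be verified against a numerical gradient of the $i$th 2SLS component viewed as a function of $\mathrm{vec}\,\hat\Pi_1$, as in the sketch below; the data design and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
T, K, k, g = 150, 6, 2, 2
beta, gamma = np.array([0.5, -0.3]), np.array([1.0, 0.7])
Z = rng.normal(size=(T, K)); Z1 = Z[:, :k]
Pi2 = rng.normal(size=(K, g))
pi1 = Pi2 @ beta + np.r_[gamma, np.zeros(K - k)]
Pi1 = np.column_stack([pi1, Pi2])
n = K * (g + 1)

X = np.column_stack([Z @ Pi2, Z1])                     # X = (Z Pi2 : Z1), eq. (9)
beta0 = np.r_[1.0, -beta]                              # beta_0 = (1, -beta')'

def f(P):                                              # 2SLS as a function of P, eqs. (5)-(7)
    Xh = np.column_stack([Z @ P[:, 1:], Z1])
    return np.linalg.solve(Xh.T @ Xh, Xh.T @ (Z @ P[:, 0]))

i = 0
# analytic first derivative (A.9): f_i^(1) = beta0 kron Z'X(X'X)^{-1} e_i
e_i = np.zeros(g + k); e_i[i] = 1.0
f1_analytic = np.kron(beta0, Z.T @ X @ np.linalg.solve(X.T @ X, e_i))

# numerical gradient of f_i at vec Pi1 (central differences, column-major vec ordering)
eps, f1_numeric = 1e-6, np.zeros(n)
for j in range(n):
    d = np.zeros(n); d[j] = eps
    dP = d.reshape((K, g + 1), order="F")
    f1_numeric[j] = (f(Pi1 + dP)[i] - f(Pi1 - dP)[i]) / (2 * eps)

print(np.max(np.abs(f1_analytic - f1_numeric)))        # of order 1e-7 or smaller
```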

Appendix B

B.1. Derivation of third-order derivatives

To obtain

$$f_i^{(3)} = \frac{\partial}{\partial\,\mathrm{vec}\,\hat\Pi_1}\left(\frac{\partial^2\hat\alpha_i}{(\partial\,\mathrm{vec}\,\hat\Pi_1)(\partial\,\mathrm{vec}\,\hat\Pi_1)'}\right)\Bigg|_{\hat\Pi_1 = \Pi_1},$$

we proceed by differentiating $f_i^{(2)}$ evaluated at $\mathrm{vec}\,\hat\Pi_1$ with respect to the elements of $\mathrm{vec}\,\hat\Pi_1$. We commence from

$$\hat f_i^{(2)} = \frac{\partial^2\hat\alpha_i}{(\partial\,\mathrm{vec}\,\hat\Pi_1)(\partial\,\mathrm{vec}\,\hat\Pi_1)'} = \{H'(\hat X'\hat X)^{-1}e_i\hat\beta_0' \otimes Z'(I - \hat X(\hat X'\hat X)^{-1}\hat X')Z$$
$$\quad - I^*(Z'\hat X(\hat X'\hat X)^{-1}e_i\hat\beta_0' \otimes H'(\hat X'\hat X)^{-1}\hat X'Z)$$
$$\quad - (H'(\hat X'\hat X)^{-1}e_i\hat y'(I - \hat X(\hat X'\hat X)^{-1}\hat X')Z \otimes Z'\hat X(\hat X'\hat X)^{-1}H)I^*$$
$$\quad - H'(\hat X'\hat X)^{-1}H \otimes Z'\hat X(\hat X'\hat X)^{-1}e_i\hat y'(I - \hat X(\hat X'\hat X)^{-1}\hat X')Z\}$$
$$\quad + \{\dots\}'. \qquad (B.1)$$

The total derivative of $\hat f_i^{(2)}$ is given by

$$d\hat f_i^{(2)} = \{H'(\hat X'\hat X)^{-1}e_i\hat\beta_0' \otimes d[Z'(I - \hat X(\hat X'\hat X)^{-1}\hat X')Z] + d[H'(\hat X'\hat X)^{-1}e_i\hat\beta_0'] \otimes Z'(I - \hat X(\hat X'\hat X)^{-1}\hat X')Z$$
$$\quad - I^*(Z'\hat X(\hat X'\hat X)^{-1}e_i\hat\beta_0' \otimes d[H'(\hat X'\hat X)^{-1}\hat X'Z] + d[Z'\hat X(\hat X'\hat X)^{-1}e_i\hat\beta_0'] \otimes H'(\hat X'\hat X)^{-1}\hat X'Z)$$
$$\quad - (H'(\hat X'\hat X)^{-1}e_i\hat y'(I - \hat X(\hat X'\hat X)^{-1}\hat X')Z \otimes d[Z'\hat X(\hat X'\hat X)^{-1}H] + d[H'(\hat X'\hat X)^{-1}e_i\hat y'(I - \hat X(\hat X'\hat X)^{-1}\hat X')Z] \otimes Z'\hat X(\hat X'\hat X)^{-1}H)I^*$$
$$\quad - H'(\hat X'\hat X)^{-1}H \otimes d[Z'\hat X(\hat X'\hat X)^{-1}e_i\hat y'(I - \hat X(\hat X'\hat X)^{-1}\hat X')Z] - d[H'(\hat X'\hat X)^{-1}H] \otimes Z'\hat X(\hat X'\hat X)^{-1}e_i\hat y'(I - \hat X(\hat X'\hat X)^{-1}\hat X')Z\}$$
$$\quad + \{\dots\}'. \qquad (B.2)$$

Expression (B.2) will subsequently be evaluated at $\Pi_1$, whereupon $(I - \hat X(\hat X'\hat X)^{-1}\hat X')\hat y = 0$. Hence, we shall ignore terms which involve this vector. To proceed we use the following:

$$d[Z'(I - \hat X(\hat X'\hat X)^{-1}\hat X')Z] = -Z'\hat X(\hat X'\hat X)^{-1}(d\hat X)'(I - \hat X(\hat X'\hat X)^{-1}\hat X')Z - Z'(I - \hat X(\hat X'\hat X)^{-1}\hat X')(d\hat X)(\hat X'\hat X)^{-1}\hat X'Z, \qquad (B.3)$$

$$d[H'(\hat X'\hat X)^{-1}e_i\hat\beta_0'] = H'(\hat X'\hat X)^{-1}e_i\,d\hat\beta_0' + d[H'(\hat X'\hat X)^{-1}e_i]\hat\beta_0',$$
$$\text{with}\quad d\hat\beta_0 = -H'(\hat X'\hat X)^{-1}(d\hat X)'(I - \hat X(\hat X'\hat X)^{-1}\hat X')\hat y - H'(\hat X'\hat X)^{-1}\hat X'(d\hat y - (d\hat X)\hat\alpha), \qquad (B.4)$$

$$d[H'(\hat X'\hat X)^{-1}\hat X'Z] = H'(\hat X'\hat X)^{-1}(d\hat X)'[I - \hat X(\hat X'\hat X)^{-1}\hat X']Z - H'(\hat X'\hat X)^{-1}\hat X'(d\hat X)(\hat X'\hat X)^{-1}\hat X'Z. \qquad (B.5)$$

On substituting from (B.3)–(B.5) into (B.2), using the result $d\hat X = Z(d\hat\Pi_1)H'$ and evaluating (B.2) at $\Pi_1$, we obtain

i "M[H@(X@X)~1eib@0?(Z@X(X@X)~1H(dP)@M

#M@(dP1)H@(X@X)~1X@Z)]![(H@(X@X)~1e

ib@0(dP1)@Z@X(X@X)~1H

#H@(X@X)~1X@Z(dP1)H@(X@X)~1e

ib@0#H(X@X)~1

H(dP1)@Z@X(X@X)~1e

ib@0)

?M]#([(!H@(X@X)~1H(dP1)@M

#H@(X@X)~1X@Z(dP

1)@H@(X@X)~1X@Z)?Z@X(X@X)~1eib@0]

#[H@(X@X)~1X@Z?(Z@X(X@X)~1e

(19)

!M(dP

1)H@(X@X)~1eib@0!Z@X(X@X)~1H(dP1)@Z@X(X@X)~1

e

ib@0])IH

!IH(Z@X(X@X)~1X@H?H@(X@X)~1e

ib@0(dPi)@M)

!H@(X@X)~1H?Z@X(X@X)~1e

i(dPi)@MN

#M...N@. (B.6)

Finally, using the theorems in Neudecker (1969, p. 957), we have

$$\frac{\partial f_i^{(2)}}{\partial\pi_{rs}} = \{H'(X'X)^{-1}e_i\beta_0' \otimes (Z'X(X'X)^{-1}HE_{sr}M + E_{rs}H'(X'X)^{-1}X'Z)$$
$$\quad - (H'(X'X)^{-1}e_i\beta_0'E_{sr}Z'X(X'X)^{-1}H + H'(X'X)^{-1}X'ZE_{rs}H'(X'X)^{-1}e_i\beta_0' + H'(X'X)^{-1}HE_{sr}Z'X(X'X)^{-1}e_i\beta_0') \otimes M$$
$$\quad - [(H'(X'X)^{-1}HE_{sr}M + H'(X'X)^{-1}X'ZE_{rs}H'(X'X)^{-1}X'Z) \otimes Z'X(X'X)^{-1}e_i\beta_0']I^*$$
$$\quad - I^*(Z'X(X'X)^{-1}H \otimes H'(X'X)^{-1}e_i\beta_0'E_{sr}M) - H'(X'X)^{-1}H \otimes Z'X(X'X)^{-1}e_i\beta_0'E_{sr}M\}$$
$$\quad + \{\dots\}'; \qquad r = 1, 2, \dots, K, \quad s = 1, 2, \dots, g+1, \qquad (B.7)$$

where $E_{rs}$ is a matrix with zeroes everywhere except for a unit element in the $r,s$th position and $\pi_{rs}$ is the $r,s$th component of $\Pi_1$. The $K^2(g+1)^2 \times K(g+1)$ matrix $f_i^{(3)}$ is formed by stacking the $K(g+1)$ submatrices above, where $\partial f_i^{(2)}/\partial\pi_{rs}$ is the $r+(s-1)K$th submatrix.
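The indexing and stacking convention for $f_i^{(3)}$ can be made concrete with a purely numerical construction: differentiate a numerical Hessian of $f_i$ with respect to each $\pi_{rs}$ and place the result in the $r+(s-1)K$th block. The sketch below does only this (it does not implement the closed form (B.7)); the small model dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
T, K, k, g = 80, 4, 1, 1                     # a small over-identified model keeps the numerics quick
beta, gamma = np.array([0.5]), np.array([1.0])
Z = rng.normal(size=(T, K)); Z1 = Z[:, :k]
Pi2 = rng.normal(size=(K, g))
pi1 = Pi2 @ beta + np.r_[gamma, np.zeros(K - k)]
Pi1 = np.column_stack([pi1, Pi2])
n = K * (g + 1)

def f(P):                                    # 2SLS as a function of the reduced-form coefficients
    X = np.column_stack([Z @ P[:, 1:], Z1])
    return np.linalg.solve(X.T @ X, X.T @ (Z @ P[:, 0]))

def unvec(v):
    return v.reshape((K, g + 1), order="F")

def hess(P, i, eps=1e-4):                    # numerical f_i^(2) at P
    H = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            ea, eb = np.zeros(n), np.zeros(n); ea[a] = eps; eb[b] = eps
            H[a, b] = (f(P + unvec(ea + eb))[i] - f(P + unvec(ea - eb))[i]
                       - f(P + unvec(eb - ea))[i] + f(P - unvec(ea + eb))[i]) / (4 * eps**2)
    return H

# Build f_i^(3) by stacking d f_i^(2) / d pi_rs, pi_rs being the (r,s) element of Pi1,
# with the r+(s-1)K-th block occupying the corresponding rows of the stack.
i, eps = 0, 1e-3
f3 = np.zeros((n * n, n))
for s in range(1, g + 2):                    # s = 1, ..., g+1
    for r in range(1, K + 1):                # r = 1, ..., K
        d = np.zeros(n); d[(s - 1) * K + (r - 1)] = eps
        block = (hess(Pi1 + unvec(d), i) - hess(Pi1 - unvec(d), i)) / (2 * eps)
        idx = r + (s - 1) * K - 1            # the r+(s-1)K-th submatrix (1-based in the paper)
        f3[idx * n:(idx + 1) * n, :] = block
print(f3.shape)                              # (K^2 (g+1)^2, K (g+1)), as stated for f_i^(3)
```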

References

Buse, A., 1992. The bias of instrumental variable estimators. Econometrica 60, 173–180.

Dufour, J.-M., 1997. Some impossibility theorems in econometrics with applications to structural and dynamic models. Econometrica 65, 1365–1387.

Harvey, A.C., Phillips, G.D.A., 1980. Testing for serial correlation in simultaneous equation models. Econometrica 48, 747–759.

Kadane, J., 1971. Comparison of k-class estimators when the disturbances are small. Econometrica 39, 723–737.

Kendall, M.G., 1954. Note on the bias in the estimation of autocorrelation. Biometrika 61, 403–404.

Kinal, T.W., 1980. The existence of moments of k-class estimators. Econometrica 48, 643–652.

Kiviet, J.F., Phillips, G.D.A., 1999. The bias of the 2SLS variance estimator. University of Exeter, School of Business and Economics Discussion Paper.

Magnus, J.R., Neudecker, H., 1979. The commutation matrix: some properties and applications. Annals of Statistics 7, 381–394.

Nagar, A.L., 1959. The bias and moment matrix of the general k-class estimator of the parameters in simultaneous equations. Econometrica 27, 575–595.

Nelson, C.R., Startz, R., 1990. Some further results on the exact finite sample properties of the instrumental variable estimator. Econometrica 58, 967–976.

Neudecker, H., 1969. Some theorems in matrix differentiation with special reference to Kronecker matrix products. Journal of the American Statistical Association 64, 953–963.

Phillips, G.D.A., 1978. The bias and moment matrix of the general k-class estimator of the parameters in simultaneous equations when disturbances are serially correlated: some particular cases. Paper presented to ESEM Geneva.

Phillips, G.D.A., 1999. An alternative approach to obtaining Nagar-type moment approximations in simultaneous equation models. University of Exeter, School of Business and Economics Discussion Paper 99/05.

Phillips, P.C.B., 1983. Exact small sample theory in the simultaneous equations model. Handbook of Econometrics (I), 451–561.

Phillips, P.C.B., 1989. Partially identified models. Econometric Theory 5, 181–240.

Sargan, J.D., 1974. On the validity of Nagar's expansion for the moments of econometric estimators. Econometrica 42, 169–176.

Sargan, J.D., 1976. Econometric estimators and the Edgeworth approximation. Econometrica 44, 421–448.

Shao, J., 1988. On resampling methods for variance and bias estimation in linear models. Annals of Statistics 16 (3), 986–1008.
