Hiroshima Math. J. 30 (2000), 525–536

A multivariate growth curve model with differing numbers of random effects

Takahisa Yokoyama

(Received July 22, 1999) (Revised March 13, 2000)

ABSTRACT. In this paper we consider a multivariate growth curve model with covariates and random effects. The model is a mixed MANOVA-GMANOVA model which has multivariate random-effects covariance structures. Test statistics for a general hypothesis concerning the adequacy of a family of the covariance structures are proposed. A modified LR statistic for the hypothesis and its asymptotic expansion are obtained. The MLE's of unknown mean parameters are obtained under the covariance structures. The efficiency of the MLE is discussed. A numerical example is also given.

1. Introduction

Suppose that we obtain repeated measurements of $m$ response variables on each of $p$ occasions (or treatments) for each of $N$ individuals and that we can use observations of $r$ covariates for each individual. Let $x_j^{(g)}$ be an $mp$-vector of measurements on the $j$-th individual in the $g$-th group arranged as

$$x_j^{(g)} = (x_{11j}^{(g)}, \ldots, x_{1mj}^{(g)}, \ldots, x_{p1j}^{(g)}, \ldots, x_{pmj}^{(g)})',$$

and assume that $x_j^{(g)}$'s are independently distributed as $N_{mp}(\mu_j^{(g)}, \Omega)$, where $\Omega$ is an unknown $mp \times mp$ positive definite matrix, $j = 1, \ldots, N_g$, $g = 1, \ldots, k$.

Further, we assume that mean profiles of $x_j^{(g)}$ are $m$-variate growth curves with $r$ covariates, i.e.,

$$\mu_j^{(g)} = (B' \otimes I_m)\xi^{(g)} + \Theta' c_j^{(g)},$$

where $B$ is a $q \times p$ within-individual design matrix of rank $q$ $(\le p)$, $B' \otimes I_m$ is the Kronecker product of $B'$ and the $m \times m$ identity matrix $I_m$, $c_j^{(g)}$'s are $r$-vectors of observations of covariates, $\xi^{(g)}$'s are $mq$-vectors of unknown parameters, and $\Theta$ is an unknown $r \times mp$ parameter matrix. Let

$$X = [x_1^{(1)}, \ldots, x_{N_1}^{(1)}, \ldots, x_1^{(k)}, \ldots, x_{N_k}^{(k)}]', \qquad N = N_1 + \cdots + N_k.$$

2000 Mathematics Subject Classification. 62H10, 62H12.

Key words and phrases. multivariate growth curve model, multivariate random-effects covariance structure, likelihood ratio statistic, maximum likelihood estimator.


Then the model of $X$ can be written as

$$X \sim N_{N \times mp}(A\Xi(B \otimes I_m) + C\Theta,\ \Omega \otimes I_N), \tag{1.1}$$

where

$$A = \begin{pmatrix} 1_{N_1} & & 0 \\ & \ddots & \\ 0 & & 1_{N_k} \end{pmatrix}$$

is an $N \times k$ between-individual design matrix, $1_n$ is an $n$-vector of ones, $C = [c_1^{(1)}, \ldots, c_{N_1}^{(1)}, \ldots, c_1^{(k)}, \ldots, c_{N_k}^{(k)}]'$ is a fixed $N \times r$ matrix of covariates, $\mathrm{rank}\,[A, C] = k + r$ $(\le N - p)$, and $\Xi = [\xi^{(1)}, \ldots, \xi^{(k)}]'$ is an unknown $k \times mq$ parameter matrix. Without loss of generality, we may assume that $BB' = I_q$. The mean structure of (1.1) is a mixed MANOVA-GMANOVA model, and the GMANOVA portion is an extension of Potthoff and Roy [5] to the multiple-response case. For the model (1.1) in the single-response case $(m = 1)$, see Yokoyama and Fujikoshi [10] and Yokoyama [12]. This type of model has been discussed by Chinchilli and Elswick [2], Verbyla and Venables [9], etc.

For a comprehensive review of the literature on such models, see, e.g., von Rosen [7] and Kshirsagar and Smith [3, p. 85].
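For readers who want to experiment with the model (1.1), the following Python sketch simulates one observation matrix $X$ from its mean and covariance structure. All dimensions, design matrices and parameter values here are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative sizes only
m, p, q, k, r = 2, 4, 2, 2, 1
Ng = [10, 15]
N = sum(Ng)

# within-individual design B (q x p) with BB' = I_q
B, _ = np.linalg.qr(rng.normal(size=(p, q)))
B = B.T

# between-individual design A (group indicators) and covariate matrix C
A = np.zeros((N, k))
A[:Ng[0], 0] = 1.0
A[Ng[0]:, 1] = 1.0
C = rng.normal(size=(N, r))

Xi = rng.normal(size=(k, m * q))        # unknown k x mq parameter matrix
Theta = rng.normal(size=(r, m * p))     # unknown r x mp parameter matrix

L = rng.normal(size=(m * p, m * p))
Omega = L @ L.T + np.eye(m * p)         # arbitrary mp x mp positive definite matrix

mean = A @ Xi @ np.kron(B, np.eye(m)) + C @ Theta            # N x mp mean matrix
X = mean + rng.multivariate_normal(np.zeros(m * p), Omega, size=N)
```

Each row of `X` is one individual's $mp$-vector of measurements in the arrangement given above.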

In this paper we consider a family of covariance structures

$$\Omega_s = (B_s' \otimes I_m)\Delta_s(B_s \otimes I_m) + I_p \otimes \Sigma_s, \qquad 0 \le s \le q, \tag{1.2}$$

which is based on random-coefficients models with differing numbers of random effects (see Lange and Laird [4]), where $\Delta_s$ and $\Sigma_s$ are arbitrary $ms \times ms$ positive semi-definite and $m \times m$ positive definite matrices respectively, and $B_s$ is the matrix which is composed of the first $s$ rows of $B$. This family is a generalization of a multivariate random-effects covariance structure proposed by Reinsel [6]. In fact, the covariance structures (1.2) can be introduced by assuming the following model:

$$x_j^{(g)} = \mu_j^{(g)} + (B_s' \otimes I_m)\eta_j^{(g)} + \varepsilon_j^{(g)},$$

where $\eta_j^{(g)}$ is an $ms$-vector of random effects distributed as $N_{ms}(0, \Delta_s)$, $\varepsilon_j^{(g)}$ is an $mp$-vector of random errors distributed as $N_{mp}(0, I_p \otimes \Sigma_s)$, and $\eta_j^{(g)}$'s and $\varepsilon_j^{(g)}$'s are mutually independent. This implies that $V(x_j^{(g)}) = \Omega_s$.
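The structure (1.2) is easy to reproduce numerically. The sketch below (again with hypothetical sizes and parameter values) builds $\Omega_s$ directly and compares it with the sample covariance of draws from the random-effects representation $x = (B_s' \otimes I_m)\eta + \varepsilon$; the two agree up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, q, s = 2, 5, 3, 2                       # illustrative sizes only

B, _ = np.linalg.qr(rng.normal(size=(p, q)))  # p x q orthonormal columns
B = B.T                                       # q x p with BB' = I_q
Bs = B[:s, :]                                 # first s rows of B

L = rng.normal(size=(m * s, m * s))
Delta_s = L @ L.T                             # ms x ms positive semi-definite
L = rng.normal(size=(m, m))
Sigma_s = L @ L.T + np.eye(m)                 # m x m positive definite

Im, Ip = np.eye(m), np.eye(p)
Omega_s = (np.kron(Bs.T, Im) @ Delta_s @ np.kron(Bs, Im)
           + np.kron(Ip, Sigma_s))            # covariance structure (1.2)

# Monte Carlo check: V(x) for x = (Bs' kron I_m) eta + eps, mean taken as 0
n_sim = 100_000
eta = rng.multivariate_normal(np.zeros(m * s), Delta_s, size=n_sim)
eps = rng.multivariate_normal(np.zeros(m * p), np.kron(Ip, Sigma_s), size=n_sim)
x = eta @ np.kron(Bs, Im) + eps               # rows are simulated x_j'
print(np.max(np.abs(np.cov(x, rowvar=False) - Omega_s)))   # small for large n_sim
```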

In §2 we derive a canonical form of the model (1.1). A test statistic for testing $H_{0s}: \Omega = \Omega_s$ vs. $H_{1s}$: not $H_{0s}$ in the model (1.1) has been proposed by Yokoyama [13]. In §3 we propose test statistics for the hypothesis

$$H_{0s}: \Omega = \Omega_s \quad \text{vs.} \quad H_{1t}: \Omega = \Omega_t \tag{1.3}$$

in the model (1.1), where $1 \le s < t \le q$. Since the exact likelihood ratio


(= LR) statistic for the hypothesis is complicated, it is suggested to use a modified LR statistic, which is the LR statistic for a modified hypothesis. An asymptotic expansion of the null distribution of the statistic is obtained. By making the strong assumption that $\Omega = \Omega_s$, we can expect to obtain efficient estimators. In §4 we obtain the maximum likelihood estimators (= MLE's) of unknown mean parameters under the covariance structures (1.2). In comparison with the MLE of $\Xi$ when no special assumptions about $\Omega$ are made, we show how much gain can be obtained for the maximum likelihood estimation of $\Xi$ by assuming that $\Omega$ has the structures (1.2). In §5 we give a numerical example of the results of §4.

2. Canonical form of the model

In order to transform (1.1) to a model which is easier to analyze, we use a canonical reduction. We define the submatrices $B_{-t}$ and $B_{-s \cap t}$ of $B$ by $B = [B_t', B_{-t}']'$, $B_t = [B_s', B_{-s \cap t}']'$. Let $\bar{B}$ be a $(p-q) \times p$ matrix such that $\bar{B}\bar{B}' = I_{p-q}$ and $B\bar{B}' = 0$. Then $G = [B_s', B_{-s \cap t}', B_{-t}', \bar{B}']' = [G_1', G_2', G_3', G_4']'$ is an orthogonal matrix of order $p$, where $G_1' = [g_1^{(1)}, \ldots, g_1^{(s)}]': p \times s$, $G_2' = [g_2^{(1)}, \ldots, g_2^{(t-s)}]': p \times (t-s)$, $G_3' = [g_3^{(1)}, \ldots, g_3^{(q-t)}]': p \times (q-t)$, and $G_4' = [g_4^{(1)}, \ldots, g_4^{(p-q)}]': p \times (p-q)$. Therefore, $Q = G \otimes I_m = [Q_1', Q_2', Q_3', Q_4']' = [Q_1^{(1)\prime}, \ldots, Q_1^{(s)\prime}, Q_2^{(1)\prime}, \ldots, Q_2^{(t-s)\prime}, Q_3^{(1)\prime}, \ldots, Q_3^{(q-t)\prime}, Q_4^{(1)\prime}, \ldots, Q_4^{(p-q)\prime}]'$ is an orthogonal matrix of order $mp$. Further, let $H = [H_1, H_2]$ be an orthogonal matrix of order $N$ such that $H_1$ is an orthonormal basis matrix of the space spanned by the column vectors of $C$. Then, letting $Y = H_2'XQ' = [Y_1, Y_2, Y_3, Y_4] = [Y_1^{(1)}, \ldots, Y_1^{(s)}, Y_2^{(1)}, \ldots, Y_2^{(t-s)}, Y_3^{(1)}, \ldots, Y_3^{(q-t)}, Y_4^{(1)}, \ldots, Y_4^{(p-q)}]$, $[Y_1, Y_2, Y_3] = Y_{(123)}$ and $[Y_2, Y_3] = Y_{(23)} = [Y_{(23)}^{(1)}, \ldots, Y_{(23)}^{(q-s)}]$, the model (1.1) can be reduced to a canonical form

$$H'XQ' = \begin{pmatrix} Z \\ Y_{(123)} \quad Y_4 \end{pmatrix} \sim N_{N \times mp}\!\left(\begin{pmatrix} \mu \\ \tilde{A}\Xi \quad 0 \end{pmatrix},\ \Psi \otimes I_N\right), \tag{2.1}$$

where $\mu = H_1'A[\Xi, 0] + H_1'C\Theta Q'$, $\tilde{A} = H_2'A$, and

$$\Psi = Q\Omega Q' = \begin{pmatrix} \Psi_{11} & \Psi_{12} & \Psi_{13} & \Psi_{14} \\ \Psi_{21} & \Psi_{22} & \Psi_{23} & \Psi_{24} \\ \Psi_{31} & \Psi_{32} & \Psi_{33} & \Psi_{34} \\ \Psi_{41} & \Psi_{42} & \Psi_{43} & \Psi_{44} \end{pmatrix}.$$

Here we note that $(\Theta, \Xi)$ is an invertible function of $(\mu, \Xi)$. In fact, $\Theta$ can be expressed in terms of $\mu$ and $\Xi$ as

$$\Theta = (H_1'C)^{-1}\mu Q - (H_1'C)^{-1}H_1'A\Xi(B \otimes I_m). \tag{2.2}$$
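A numerical version of this reduction is straightforward. The sketch below assumes $B$ already has orthonormal rows and $C$ has full column rank; it completes $B$ to the orthogonal matrix $G$, forms $Q = G \otimes I_m$, and splits $H = [H_1, H_2]$ as in the text. The function name and return values are my own choices for illustration.

```python
import numpy as np
from scipy.linalg import null_space

def canonical_form(X, A, C, B, m):
    """Return Z = H1'XQ', Y = H2'XQ' and A_tilde = H2'A for the reduction (2.1)."""
    N = X.shape[0]
    # complete B (q x p, BB' = I_q) to an orthogonal matrix G of order p
    B_bar = null_space(B).T               # (p-q) x p, orthonormal rows, B B_bar' = 0
    G = np.vstack([B, B_bar])             # orthogonal matrix of order p
    Q = np.kron(G, np.eye(m))             # orthogonal matrix of order mp

    # H1: orthonormal basis of span(C); H2: its orthogonal complement
    H1, _ = np.linalg.qr(C)               # N x r (C assumed of full column rank)
    H2 = null_space(C.T)                  # N x (N - r), so H2'C = 0

    Z = H1.T @ X @ Q.T
    Y = H2.T @ X @ Q.T                    # mean [A_tilde Xi, 0], covariance Psi kron I_n
    A_tilde = H2.T @ A
    return Z, Y, A_tilde, Q
```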


3. Tests for a family of covariance structures

We consider the LR test for the hypothesis (1.3) in the multivariate growth curve model (1.1). This is equivalent to considering the LR test for the hypothesis

$$H_{0s}: \Psi = \begin{pmatrix} \Delta_s + I_s \otimes \Sigma_s & 0 \\ 0 & I_{p-s} \otimes \Sigma_s \end{pmatrix} \quad \text{vs.} \quad H_{1t}: \Psi = \begin{pmatrix} \Delta_t + I_t \otimes \Sigma_t & 0 \\ 0 & I_{p-t} \otimes \Sigma_t \end{pmatrix} \tag{3.1}$$

in the model (2.1). Since the elements of $\mu$ in (2.1) are free parameters, for testing the hypothesis (3.1) we may consider the LR test formed by only the density of $Y$. The model for $Y$ is

$$Y \sim N_{n \times mp}([\tilde{A}\Xi, 0],\ \Psi \otimes I_n), \tag{3.2}$$

where $n = N - r$. Let $L(\Xi, \Psi)$ be the likelihood function of $Y$. It is easy to see that the MLE of $\Xi$ under $H_{0s}$ or $H_{1t}$ is given by $\hat{\Xi} = (\tilde{A}'\tilde{A})^{-1}\tilde{A}'Y_{(123)}$. Then we have

$$g(\Psi) = -2\log L(\hat{\Xi}, \Psi) = n\log|\Psi| + \mathrm{tr}\,\Psi^{-1}[Y_{(123)} - \tilde{A}\hat{\Xi},\ Y_4]'[Y_{(123)} - \tilde{A}\hat{\Xi},\ Y_4].$$

As will be seen later, the minimum of $g(\Psi)$ under $H_{0s}$ or $H_{1t}$ is complicated. For simplicity, we consider the LR test for a modified hypothesis

$$\tilde{H}_{0s}: \Psi = \begin{pmatrix} \Psi_{11} & 0 \\ 0 & I_{p-s} \otimes \Sigma_s \end{pmatrix} \quad \text{vs.} \quad \tilde{H}_{1t}: \Psi = \begin{pmatrix} \Psi_{(12)(12)} & 0 \\ 0 & I_{p-t} \otimes \Sigma_t \end{pmatrix}, \tag{3.3}$$

where $\Psi_{11}$ and $\Psi_{(12)(12)}$ are assumed to be arbitrary $ms \times ms$ and $mt \times mt$ positive definite matrices respectively, and

$$\Psi_{(12)(12)} = \begin{pmatrix} \Psi_{11} & \Psi_{12} \\ \Psi_{21} & \Psi_{22} \end{pmatrix}.$$

We note that the difference between $H_{0s}$ and $\tilde{H}_{0s}$ is whether or not $\Psi_{11}$ satisfies the restriction $\Psi_{11} \ge I_s \otimes \Sigma_s$, and similarly for the difference between $H_{1t}$ and $\tilde{H}_{1t}$. It is easily seen that

$$\min_{\tilde{H}_{0s}} g(\Psi_{11}, \Sigma_s) = n\log\left|\frac{1}{n}S_{11}\right| + n(p-s)\log\left|\frac{1}{n(p-s)}\left(\sum_{i=1}^{q-s} S_{(23)(23)}^{(ii)} + \sum_{j=1}^{p-q} Y_4^{(j)\prime}Y_4^{(j)}\right)\right| + nmp \tag{3.4}$$

and

$$\min_{\tilde{H}_{1t}} g(\Psi_{(12)(12)}, \Sigma_t) = n\log\left|\frac{1}{n}S_{(12)(12)}\right| + n(p-t)\log\left|\frac{1}{n(p-t)}\left(\sum_{i=1}^{q-t} S_{33}^{(ii)} + \sum_{j=1}^{p-q} Y_4^{(j)\prime}Y_4^{(j)}\right)\right| + nmp, \tag{3.5}$$

where

$$S = Y'[I_n - \tilde{A}(\tilde{A}'\tilde{A})^{-1}\tilde{A}']Y = \begin{pmatrix} S_{11} & S_{12} & S_{13} & S_{14} \\ S_{21} & S_{22} & S_{23} & S_{24} \\ S_{31} & S_{32} & S_{33} & S_{34} \\ S_{41} & S_{42} & S_{43} & S_{44} \end{pmatrix}, \qquad S_{(12)(12)} = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix},$$

and $S_{aa}^{(ii)} = Y_a^{(i)\prime}[I_n - \tilde{A}(\tilde{A}'\tilde{A})^{-1}\tilde{A}']Y_a^{(i)}$. The minimum (3.4) is achieved at

$$\Psi_{11} = \frac{1}{n}S_{11}, \qquad \Sigma_s = \frac{1}{n(p-s)}\left(\sum_{i=1}^{q-s} S_{(23)(23)}^{(ii)} + \sum_{j=1}^{p-q} Y_4^{(j)\prime}Y_4^{(j)}\right).$$

Therefore, we can obtain the LR test statistic

$$\tilde{\Lambda}_{s,t} = \frac{|S_{(12)(12)}|\,\left|\dfrac{1}{p-t}\left(\displaystyle\sum_{i=1}^{q-t} S_{33}^{(ii)} + \sum_{j=1}^{p-q} Y_4^{(j)\prime}Y_4^{(j)}\right)\right|^{\,p-t}}{|S_{11}|\,\left|\dfrac{1}{p-s}\left(\displaystyle\sum_{i=1}^{q-s} S_{(23)(23)}^{(ii)} + \sum_{j=1}^{p-q} Y_4^{(j)\prime}Y_4^{(j)}\right)\right|^{\,p-s}} \tag{3.6}$$

for testing $\tilde{H}_{0s}$ vs. $\tilde{H}_{1t}$, which may also be used for testing $H_{0s}$ vs. $H_{1t}$. The statistic $\tilde{\Lambda}_{s,t}$ can be expressed in terms of the original observations, using

$$Y_4^{(j)\prime}Y_4^{(j)} = Q_4^{(j)}V_{xx \cdot c}Q_4^{(j)\prime}, \qquad S_{aa} = Q_a V_{xx \cdot ca}Q_a', \qquad S_{aa}^{(ii)} = Q_a^{(i)}V_{xx \cdot ca}Q_a^{(i)\prime}, \tag{3.7}$$

where $V_{xx \cdot c} = V_{xx} - V_{xc}V_{cc}^{-1}V_{cx}$, $V_{xx \cdot ca} = V_{xx \cdot c} - V_{xa \cdot c}V_{aa \cdot c}^{-1}V_{ax \cdot c}$ and

$$V = [X, C, A]'[X, C, A] = \begin{pmatrix} V_{xx} & V_{xc} & V_{xa} \\ V_{cx} & V_{cc} & V_{ca} \\ V_{ax} & V_{ac} & V_{aa} \end{pmatrix}. \tag{3.8}$$


We can decompose the statistic $\tilde{\Lambda}_{s,t}$ as

$$\tilde{\Lambda}_{s,t} = \tilde{\Lambda}_1\tilde{\Lambda}_2, \tag{3.9}$$

where

$$\tilde{\Lambda}_1 = \frac{|S_{11 \cdot 2}|}{|S_{11}|} = \frac{|S_{11 \cdot 2}|}{|S_{11 \cdot 2} + S_{12}S_{22}^{-1}S_{21}|} \tag{3.10}$$

and

$$\tilde{\Lambda}_2 = \frac{|S_{22}|\,\left|\dfrac{1}{p-t}\left(\displaystyle\sum_{i=1}^{q-t} S_{33}^{(ii)} + \sum_{j=1}^{p-q} Y_4^{(j)\prime}Y_4^{(j)}\right)\right|^{\,p-t}}{\left|\dfrac{1}{p-s}\left(\displaystyle\sum_{i=1}^{t-s} S_{22}^{(ii)} + \sum_{i=1}^{q-t} S_{33}^{(ii)} + \sum_{j=1}^{p-q} Y_4^{(j)\prime}Y_4^{(j)}\right)\right|^{\,p-s}}. \tag{3.11}$$

The statistics $\tilde{\Lambda}_1$ and $\tilde{\Lambda}_2$ are the LR statistics for $\Psi_{12} = 0$ and $\Psi_{22} = I_{t-s} \otimes \Sigma_s$, respectively.
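As an illustration of (3.9)–(3.11), the following sketch computes $\tilde{\Lambda}_{s,t} = \tilde{\Lambda}_1\tilde{\Lambda}_2$ from the canonical data $Y$ and $\tilde{A}$, assuming the columns of $Y$ are ordered by the blocks of sizes $ms$, $m(t-s)$, $m(q-t)$, $m(p-q)$ introduced in §2. The function and helper names are hypothetical.

```python
import numpy as np

def lambda_tilde(Y, A_tilde, m, p, q, s, t):
    """Modified LR statistic (3.9)-(3.11); a sketch assuming block-ordered columns of Y."""
    n = Y.shape[0]
    P = A_tilde @ np.linalg.solve(A_tilde.T @ A_tilde, A_tilde.T)
    S = Y.T @ (np.eye(n) - P) @ Y                      # mp x mp residual SSP matrix

    i1, i2, i3 = m * s, m * t, m * q                   # block boundaries
    S11, S12 = S[:i1, :i1], S[:i1, i1:i2]
    S21, S22 = S[i1:i2, :i1], S[i1:i2, i1:i2]
    S11_2 = S11 - S12 @ np.linalg.solve(S22, S21)      # S_{11.2}

    def diag_block_sum(M):                             # sum of m x m diagonal blocks
        d = M.shape[0] // m
        return sum(M[j*m:(j+1)*m, j*m:(j+1)*m] for j in range(d))

    S22_d = diag_block_sum(S22)
    S33_d = diag_block_sum(S[i2:i3, i2:i3])
    Y4 = Y[:, i3:]
    W4 = diag_block_sum(Y4.T @ Y4)                     # sum_j Y4^(j)' Y4^(j)

    lam1 = np.linalg.det(S11_2) / np.linalg.det(S11)                       # (3.10)
    num = np.linalg.det(S22) * np.linalg.det((S33_d + W4) / (p - t)) ** (p - t)
    den = np.linalg.det((S22_d + S33_d + W4) / (p - s)) ** (p - s)         # (3.11)
    return lam1 * num / den
```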

Lemma 3.1. When the hypothesis $H_{0s}$ is true, it holds that

(i) $\tilde{\Lambda}_1$ and $\tilde{\Lambda}_2$ are independent,

(ii) $\displaystyle E(\tilde{\Lambda}_1^h) = \frac{\Gamma_{ms}\!\left(\tfrac{1}{2}\{n-k-m(t-s)\} + h\right)\Gamma_{ms}\!\left(\tfrac{1}{2}(n-k)\right)}{\Gamma_{ms}\!\left(\tfrac{1}{2}\{n-k-m(t-s)\}\right)\Gamma_{ms}\!\left(\tfrac{1}{2}(n-k) + h\right)},$

(iii) $\displaystyle E(\tilde{\Lambda}_2^h) = \frac{(p-s)^{m(p-s)h}}{(p-t)^{m(p-t)h}}\cdot\frac{\Gamma_{m(t-s)}\!\left(\tfrac{1}{2}(n-k) + h\right)}{\Gamma_{m(t-s)}\!\left(\tfrac{1}{2}(n-k)\right)}\cdot\frac{\Gamma_{m}\!\left(\tfrac{1}{2}\{n(p-t)-k(q-t)\} + (p-t)h\right)\Gamma_{m}\!\left(\tfrac{1}{2}\{n(p-s)-k(q-s)\}\right)}{\Gamma_{m}\!\left(\tfrac{1}{2}\{n(p-t)-k(q-t)\}\right)\Gamma_{m}\!\left(\tfrac{1}{2}\{n(p-s)-k(q-s)\} + (p-s)h\right)},$

where $\Gamma_m(n/2) = \pi^{m(m-1)/4}\prod_{j=1}^{m}\Gamma((n-j+1)/2)$.

Proof. Under $H_{0s}$, it is easy to verify that $S_{11 \cdot 2} \sim W_{ms}(n-k-m(t-s), \Psi_{11})$, $S_{12}S_{22}^{-1}S_{21} \sim W_{ms}(m(t-s), \Psi_{11})$, $S_{22} \sim W_{m(t-s)}(n-k, I_{t-s} \otimes \Sigma_s)$, $\sum_{i=1}^{q-t} S_{33}^{(ii)} \sim W_m((n-k)(q-t), \Sigma_s)$ and $\sum_{j=1}^{p-q} Y_4^{(j)\prime}Y_4^{(j)} \sim W_m(n(p-q), \Sigma_s)$. Further, these statistics are independent. Therefore, $\tilde{\Lambda}_1$ and $\tilde{\Lambda}_2$ are independent. The $h$-th moment of $\tilde{\Lambda}_1$ follows from the fact that $\tilde{\Lambda}_1$ is distributed as a lambda distribution $\Lambda_{ms}(m(t-s), n-k-m(t-s))$. The $h$-th moment of $\tilde{\Lambda}_2$ can be written as

$$E(\tilde{\Lambda}_2^h) = 2^{m(t-s)h}\,\frac{(p-s)^{m(p-s)h}}{(p-t)^{m(p-t)h}}\,\frac{\Gamma_{m(t-s)}\!\left(\tfrac{1}{2}(2h+n-k)\right)}{\Gamma_{m(t-s)}\!\left(\tfrac{1}{2}(n-k)\right)}\,E\!\left[\frac{|W_2|^{(p-t)h}}{|W_1+W_2|^{(p-s)h}}\right],$$

where $W_1$ and $W_2$ are independently distributed, $W_1 \sim W_m(n_1, I_m)$, $W_2 \sim W_m(n_2, I_m)$, $n_1 = (2h+n-k)(t-s)$, $n_2 = n(p-t) - k(q-t)$. Here, letting $U = |W_1+W_2|$ and $V = |W_2|\,|W_1+W_2|^{-1}$, it is easy to verify that $U$ and $V$ are independent, $V \sim \Lambda_m(n_1, n_2)$, and $U$ has the same distribution as $\prod_{i=1}^{m} U_i$, where the $U_i$ are independent and $U_i \sim \chi^2_{n_1+n_2-i+1}$. The $h$-th moment of $\tilde{\Lambda}_2$ can be obtained from the above fact.

Using Lemma 3.1, we can obtain an asymptotic expansion of the null distribution of the statistic $-n\rho\log\tilde{\Lambda}_{s,t}$ by expanding its characteristic function.

Theorem 3.1. When the hypothesis $H_{0s}$ is true, an asymptotic expansion of the distribution function of the statistic $-n\rho\log\tilde{\Lambda}_{s,t}$ is

$$P(-n\rho\log\tilde{\Lambda}_{s,t} \le x) = P(\chi^2_f \le x) + O(M^{-2}) \tag{3.12}$$

for large $M = n\rho$, where $f = \tfrac{1}{2}m(t-s)(mt+ms+1)$ and $\rho$ is defined by

$$fn(1-\rho) = \frac{1}{12}m(t-s)\Big\{6(mt+ms+1)k + 6(mt+1)ms + 2m^2(t-s)^2 + 3m(t-s) - 1 + \frac{6(p-q)^2k^2 - 6(m+1)(p-q)k + 2m^2 + 3m - 1}{(p-t)(p-s)}\Big\}.$$

In the single-response case $(m = 1)$, the asymptotic expansion (3.12) agrees with the results in Yokoyama [12].
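Theorem 3.1 translates directly into a test procedure: compute $\tilde{\Lambda}_{s,t}$, then compare $-n\rho\log\tilde{\Lambda}_{s,t}$ with a $\chi^2_f$ distribution. The sketch below simply transcribes $f$ and $\rho$ from the theorem; treating large values of the statistic as evidence against $H_{0s}$ is the usual convention for LR-type statistics, and the function name is hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def chisq_approx_pvalue(lam_tilde, n, m, p, q, k, s, t):
    """Approximate p-value of -n*rho*log(Lambda_tilde) via the expansion (3.12)."""
    f = 0.5 * m * (t - s) * (m * t + m * s + 1)
    correction = (m * (t - s) / 12.0) * (
        6 * (m * t + m * s + 1) * k + 6 * (m * t + 1) * m * s
        + 2 * m**2 * (t - s)**2 + 3 * m * (t - s) - 1
        + (6 * (p - q)**2 * k**2 - 6 * (m + 1) * (p - q) * k
           + 2 * m**2 + 3 * m - 1) / ((p - t) * (p - s))
    )
    rho = 1.0 - correction / (f * n)      # from f*n*(1 - rho) = correction
    stat = -n * rho * np.log(lam_tilde)
    return stat, chi2.sf(stat, df=f)      # statistic and upper-tail probability
```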

We now consider the exact LR criterion $\Lambda_{s,t}^{n/2}$ for $H_{0s}$ vs. $H_{1t}$. Let

$$\hat{\Psi}_{11} = \frac{1}{n}S_{11}, \qquad \hat{\Sigma}_s = \frac{1}{n(p-s)}\left(\sum_{i=1}^{q-s} S_{(23)(23)}^{(ii)} + \sum_{j=1}^{p-q} Y_4^{(j)\prime}Y_4^{(j)}\right),$$

$$\hat{\Psi}_{(12)(12)} = \frac{1}{n}S_{(12)(12)}, \qquad \hat{\Sigma}_t = \frac{1}{n(p-t)}\left(\sum_{i=1}^{q-t} S_{33}^{(ii)} + \sum_{j=1}^{p-q} Y_4^{(j)\prime}Y_4^{(j)}\right).$$

If it holds that

$$\hat{\Psi}_{11} - I_s \otimes \hat{\Sigma}_s \ge 0 \quad \text{and} \quad \hat{\Psi}_{(12)(12)} - I_t \otimes \hat{\Sigma}_t \ge 0, \tag{3.13}$$

it is easy to show that $\Lambda_{s,t} = \tilde{\Lambda}_{s,t}$. Unless (3.13) holds, we need to solve the problem of minimizing

$$\frac{1}{n}g(\Delta_s, \Sigma_s) = \log|\Delta_s + I_s \otimes \Sigma_s| + \mathrm{tr}\,(\Delta_s + I_s \otimes \Sigma_s)^{-1}\hat{\Psi}_{11} + (p-s)\left(\log|\Sigma_s| + \mathrm{tr}\,\Sigma_s^{-1}\hat{\Sigma}_s\right)$$

or

$$\frac{1}{n}g(\Delta_t, \Sigma_t) = \log|\Delta_t + I_t \otimes \Sigma_t| + \mathrm{tr}\,(\Delta_t + I_t \otimes \Sigma_t)^{-1}\hat{\Psi}_{(12)(12)} + (p-t)\left(\log|\Sigma_t| + \mathrm{tr}\,\Sigma_t^{-1}\hat{\Sigma}_t\right)$$


under $H_{0s}$ or $H_{1t}$, respectively. However, since this problem is not easy, we consider a lower bound, expressed in terms of characteristic roots (Anderson [1]), for the minimum of $g(\Delta_s, \Sigma_s)/n$ or $g(\Delta_t, \Sigma_t)/n$. Let $l_1 \ge \cdots \ge l_{ms}$ and $l_1^s \ge \cdots \ge l_m^s$ be the characteristic roots of $\hat{\Psi}_{11}$ and $\hat{\Sigma}_s$ respectively, and let $d_1 > \cdots > d_{mt}$ and $d_1^t > \cdots > d_m^t$ be those of $\hat{\Psi}_{(12)(12)}$ and $\hat{\Sigma}_t$, respectively. Then, as test statistics for $H_{0s}$ vs. $H_{1t}$, we obtain

$$\Lambda_{s,t}^{*} = \begin{cases} \tilde{\Lambda}_{s,t}, & \text{if } \hat{\Psi}_{11} - I_s \otimes \hat{\Sigma}_s \ge 0, \\ R_s, & \text{elsewhere,} \end{cases} \tag{3.14}$$

and

$$\Lambda_{s,t}^{**} = \begin{cases} \tilde{\Lambda}_{s,t}, & \text{if } \hat{\Psi}_{(12)(12)} - I_t \otimes \hat{\Sigma}_t \ge 0, \\ R_t, & \text{elsewhere,} \end{cases} \tag{3.15}$$

where

$$R_s = \frac{|\hat{\Psi}_{(12)(12)}|\,|\hat{\Sigma}_t|^{p-t}}{|\hat{\Psi}_{11}|\,\prod_{j=1}^{m}\left\{l_j^s\exp\!\left(l_j/l_j^s - 1\right)\right\}^{p-s}}, \qquad R_t = \frac{|\hat{\Psi}_{(12)(12)}|\,\prod_{j=1}^{m}\left\{d_j^t\exp\!\left(d_j/d_j^t - 1\right)\right\}^{p-t}}{|\hat{\Psi}_{11}|\,|\hat{\Sigma}_s|^{p-s}}.$$

The statistics $\Lambda_{s,t}^{*}$ and $\Lambda_{s,t}^{**}$ are approximate LR statistics for $H_{0s}$ vs. $\tilde{H}_{1t}$ and $\tilde{H}_{0s}$ vs. $H_{1t}$, respectively. In the single-response case $(m = 1)$, we have $\Lambda_{s,t}^{*} \le \Lambda_{s,t} \le \Lambda_{s,t}^{**}$.

4. The MLE's of unknown mean parameters

In this section we obtain the MLE's of unknown mean parameters in the multivariate growth curve model (1.1) with $\Omega = (B_s' \otimes I_m)\Delta_s(B_s \otimes I_m) + I_p \otimes \Sigma_s$ $(= \Omega_s)$ and consider the efficiency of the MLE of $\Xi$. This model is reduced to the same canonical form as in (2.1), but the covariance matrix $\Psi$ is given by

$$\Psi = \begin{pmatrix} \Delta_s + I_s \otimes \Sigma_s & 0 \\ 0 & I_{p-s} \otimes \Sigma_s \end{pmatrix}.$$

It is easily seen that the MLE's of $\mu$ and $\Xi$ are given by

$$\hat{\mu} = Z \quad \text{and} \quad \hat{\Xi} = (\tilde{A}'\tilde{A})^{-1}\tilde{A}'Y_{(123)}, \tag{4.1}$$

respectively. Therefore, from (2.2) the MLE of $\Theta$ is given by

$$\hat{\Theta} = (H_1'C)^{-1}ZQ - (H_1'C)^{-1}H_1'A(\tilde{A}'\tilde{A})^{-1}\tilde{A}'Y_{(123)}(B \otimes I_m). \tag{4.2}$$

We now express the MLE's $\hat{\Xi}$ and $\hat{\Theta}$ in terms of the original observations or the matrix $V$ in (3.8). Noting that

$$(\tilde{A}'\tilde{A})^{-1}\tilde{A}'Y_{(123)} = (A'H_2H_2'A)^{-1}A'H_2H_2'X(B' \otimes I_m), \qquad (H_1'C)^{-1}H_1' = (C'C)^{-1}C',$$

we have the following theorem.

Theorem 4.1. The MLE's of $\Xi$ and $\Theta$ in the multivariate growth curve model (1.1) with $\Omega = \Omega_s$ are given as follows:

$$\hat{\Xi} = [A'(I_N - P_C)A]^{-1}A'(I_N - P_C)X(B' \otimes I_m) = V_{aa \cdot c}^{-1}V_{ax \cdot c}(B' \otimes I_m),$$

$$\hat{\Theta} = (C'C)^{-1}C'X - (C'C)^{-1}C'A[A'(I_N - P_C)A]^{-1}A'(I_N - P_C)X(B'B \otimes I_m) = V_{cc}^{-1}[V_{cx} - V_{ca}V_{aa \cdot c}^{-1}V_{ax \cdot c}(B'B \otimes I_m)],$$

where $P_C = C(C'C)^{-1}C'$.
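Theorem 4.1 gives the MLE's in closed form, so they can be computed with a few matrix operations. The sketch below uses the projection form of the theorem (the equivalent $V$-based expressions would give the same result); the function name is a hypothetical choice for illustration.

```python
import numpy as np

def mle_mean_parameters(X, A, C, B, m):
    """MLE's of Xi and Theta from Theorem 4.1 under Omega = Omega_s (a sketch)."""
    N = X.shape[0]
    PC = C @ np.linalg.solve(C.T @ C, C.T)          # projection onto span(C)
    R = np.eye(N) - PC
    BIm = np.kron(B.T, np.eye(m))                   # B' kron I_m  (mp x mq)
    # Xi_hat = [A'(I - P_C)A]^{-1} A'(I - P_C) X (B' kron I_m)
    Xi_hat = np.linalg.solve(A.T @ R @ A, A.T @ R @ X @ BIm)
    # Theta_hat = (C'C)^{-1} C' [X - A Xi_hat (B kron I_m)]
    Theta_hat = np.linalg.solve(C.T @ C,
                                C.T @ (X - A @ Xi_hat @ np.kron(B, np.eye(m))))
    return Xi_hat, Theta_hat
```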

On the other hand, the MLE of $\Xi$ when $\Omega$ has no structure, i.e., is an arbitrary positive definite matrix, is given by

$$\tilde{\Xi} = (\tilde{A}'\tilde{A})^{-1}\tilde{A}'H_2'XS^{-1}(B' \otimes I_m)[(B \otimes I_m)S^{-1}(B' \otimes I_m)]^{-1}, \tag{4.3}$$

where $S = X'H_2[I_n - \tilde{A}(\tilde{A}'\tilde{A})^{-1}\tilde{A}']H_2'X$. The result (4.3) is an extension of Chinchilli and Elswick [2] to the multivariate case. It is easily seen that

$$\tilde{A}'\tilde{A} = V_{aa \cdot c}, \qquad \tilde{A}'H_2'X = V_{ax \cdot c}, \qquad S = V_{xx \cdot c} - V_{xa \cdot c}V_{aa \cdot c}^{-1}V_{ax \cdot c} = V_{xx \cdot ac}.$$

These imply that

$$\tilde{\Xi} = V_{aa \cdot c}^{-1}V_{ax \cdot c}V_{xx \cdot ac}^{-1}(B' \otimes I_m)[(B \otimes I_m)V_{xx \cdot ac}^{-1}(B' \otimes I_m)]^{-1}. \tag{4.4}$$

The estimators $\hat{\Xi}$ and $\tilde{\Xi}$ have the following properties.

Theorem 4.2. In the multivariate growth curve model (1.1) with $\Omega = \Omega_s$ it holds that both the estimators $\hat{\Xi}$ and $\tilde{\Xi}$ are unbiased, and

$$V(\mathrm{vec}(\hat{\Xi})) = \Psi_s \otimes M, \qquad V(\mathrm{vec}(\tilde{\Xi})) = \left\{1 + \frac{m(p-q)}{N - (k+r) - m(p-q) - 1}\right\}\Psi_s \otimes M,$$

where $M = [A'(I_N - P_C)A]^{-1}$ and

$$\Psi_s = \begin{pmatrix} \Delta_s + I_s \otimes \Sigma_s & 0 \\ 0 & I_{q-s} \otimes \Sigma_s \end{pmatrix}.$$

Proof. Since $\hat{\Xi} = (\tilde{A}'\tilde{A})^{-1}\tilde{A}'Y_{(123)}$, we have

$$E(\hat{\Xi}) = \Xi \quad \text{and} \quad V(\mathrm{vec}(\hat{\Xi})) = \Psi_s \otimes (\tilde{A}'\tilde{A})^{-1},$$

which imply the result on $\hat{\Xi}$. By an argument similar to the one in Yokoyama [12], it can be shown that for any positive definite covariance matrix $\Omega$, $E(\tilde{\Xi}) = \Xi$ and

$$V(\mathrm{vec}(\tilde{\Xi})) = \left\{1 + \frac{m(p-q)}{N - (k+r) - m(p-q) - 1}\right\}[(B \otimes I_m)\Omega^{-1}(B' \otimes I_m)]^{-1} \otimes M.$$

Under the assumption that $\Omega = \Omega_s$, it holds that $[(B \otimes I_m)\Omega^{-1}(B' \otimes I_m)]^{-1} = \Psi_s$, which proves the desired result.

From Theorem 4.2, we obtain

$$V(\mathrm{vec}(\tilde{\Xi})) - V(\mathrm{vec}(\hat{\Xi})) = \frac{m(p-q)}{N - (k+r) - m(p-q) - 1}\,\Psi_s \otimes M > 0, \tag{4.5}$$

which implies that $\hat{\Xi}$ is more efficient than $\tilde{\Xi}$ in the model (1.1) with $\Omega = \Omega_s$. This shows that we can obtain a more efficient estimator of $\Xi$ by assuming multivariate random-effects covariance structures. In particular, when $p$ is large relative to $N$, the gain is greater.
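The efficiency comparison in Theorem 4.2 and (4.5) depends on the data only through the dimensions, so the relative loss from using $\tilde{\Xi}$ instead of $\hat{\Xi}$ can be tabulated directly; a one-line helper is enough for that.

```python
def variance_inflation(m, p, q, N, k, r):
    """Factor by which V(vec(Xi_tilde)) exceeds V(vec(Xi_hat)) in Theorem 4.2."""
    return 1.0 + m * (p - q) / (N - (k + r) - m * (p - q) - 1)

# e.g. variance_inflation(m=1, p=4, q=1, N=12, k=1, r=1) returns 1.5,
# which corresponds to the ratio V(xi_hat)/V(xi_tilde) = 2/3 found in Section 5.
```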

As mentioned in §3, the MLE's of the unknown variance parameters $\Sigma_s$ and $\Delta_s$ in the model (1.1) with $\Omega = \Omega_s$ become very complicated. On the other hand, the usual unbiased estimators of $\Sigma_s$ and $\Delta_s$ may be defined by

$$\tilde{\Sigma}_s = \frac{1}{n(p-s) - k(q-s)}\left(\sum_{i=1}^{q-s} S_{(23)(23)}^{(ii)} + \sum_{j=1}^{p-q} Y_4^{(j)\prime}Y_4^{(j)}\right) \quad \text{and} \quad \tilde{\Delta}_s = \frac{1}{n-k}S_{11} - I_s \otimes \tilde{\Sigma}_s,$$

respectively. However, there is the possibility that the use of $\tilde{\Delta}_s$ leads to an estimate of $\Delta_s$ that is not positive semi-definite. These estimators can be expressed in terms of the original observations, again using (3.7).

5. An example

In this section we apply the results of §4 to the data (see, e.g., Srivastava and Carter [8, p. 227]) on the price indices of hand soaps packaged in four ways, estimated by twelve consumers. Each consumer belongs to one of two groups. It is known (Yokoyama [11]) that the model (1.1) in the case $m = 1$, $p = 4$ and $N = 12$ with

$$E(X) = \begin{pmatrix} 1_6 \\ 0 \end{pmatrix}\xi 1_4' + 1_{12}(\theta_1, \theta_2, \theta_3, \theta_4) \quad \text{and} \quad V(\mathrm{vec}(X)) = (\delta^2 1_4 1_4' + \sigma^2 I_4) \otimes I_{12}$$

is adequate for the observation matrix $X: 12 \times 4$. Now we estimate how much gain can be obtained for the maximum likelihood estimation of $\xi$ by assuming the random-effects covariance structure. Since $k = q = r = s = 1$, $[1_4'(\delta^2 1_4 1_4' + \sigma^2 I_4)^{-1}1_4]^{-1} = (4\delta^2 + \sigma^2)/4$, $M = 1/3$, $\hat{\delta}^2 = .01353$ and $\hat{\sigma}^2 = .00976$, it follows from Theorem 4.2 and (4.5) that $V(\hat{\xi})/V(\tilde{\xi}) = 2/3$ and $V(\tilde{\xi}) - V(\hat{\xi}) = (4\hat{\delta}^2 + \hat{\sigma}^2)/24 = .00266$.
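For completeness, the arithmetic of this example can be reproduced with a few lines of Python, using the values reported above:

```python
# m = 1, p = 4, N = 12, k = q = r = s = 1 for the hand-soap data
d2, s2 = 0.01353, 0.00976            # delta^2-hat and sigma^2-hat from the text
M = 1 / 3                            # [A'(I_N - P_C)A]^{-1} for this design
psi_s = (4 * d2 + s2) / 4            # [1_4'(d2*J_4 + s2*I_4)^{-1} 1_4]^{-1}
inflation = 1 + 1 * (4 - 1) / (12 - 2 - 1 * (4 - 1) - 1)   # = 3/2 from Theorem 4.2
diff = (inflation - 1) * psi_s * M   # = (4*d2 + s2)/24
print(1 / inflation, diff)           # prints 0.666..., 0.00266...
```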

Acknowledgements

The author would like to thank the referee and Professor Y. Fujikoshi, Hiroshima University, for many helpful comments and discussions.

References

[1] T. W. Anderson, Asymptotic theory for principal component analysis, Ann. Math. Statist. 34 (1963), 122–148.

[2] V. M. Chinchilli and R. K. Elswick, A mixture of the MANOVA and GMANOVA models, Commun. Statist.-Theor. Meth. 14 (12) (1985), 3075–3089.

[3] A. M. Kshirsagar and W. B. Smith, Growth Curves, New York: Dekker, 1995.

[4] N. Lange and N. M. Laird, The effect of covariance structure on variance estimation in balanced growth-curve models with random parameters, J. Amer. Statist. Assoc. 84 (1989), 241–247.

[5] R. F. Potthoff and S. N. Roy, A generalized multivariate analysis of variance model useful especially for growth curve problems, Biometrika 51 (1964), 313–326.

[6] G. Reinsel, Estimation and prediction in a multivariate random effects generalized linear model, J. Amer. Statist. Assoc. 79 (1984), 406–414.

[7] D. von Rosen, The growth curve model: a review, Commun. Statist.-Theor. Meth. 20 (9) (1991), 2791–2822.

[8] M. S. Srivastava and E. M. Carter, An Introduction to Applied Multivariate Statistics, New York: North-Holland, 1983.

[9] A. P. Verbyla and W. N. Venables, An extension of the growth curve model, Biometrika 75 (1988), 129–138.

[10] T. Yokoyama and Y. Fujikoshi, Tests for random-effects covariance structures in the growth curve model with covariates, Hiroshima Math. J. 22 (1992), 195–202.

[11] T. Yokoyama, LR test for random-effects covariance structure in a parallel profile model, Technical Report 93-3, Statistical Research Group, Hiroshima University, 1993.

[12] T. Yokoyama, Statistical inference on some mixed MANOVA-GMANOVA models with random effects, Hiroshima Math. J. 25 (1995), 441–474.

[13] T. Yokoyama, Tests for a family of random-effects covariance structures in a multivariate growth curve model, J. Statist. Plann. Inference 65 (1997), 281–292.

Department of Mathematics and Informatics
Faculty of Education
Tokyo Gakugei University
Tokyo 184-8501, Japan
