Journal of Economic Dynamics & Control 24 (2000) 535-559

On the solution of the linear rational expectations model with multiple lags

Hugo Kruiniger

University College London, Department of Economics, Gower Street, London WC1E 6BT, UK

E-mail address: [email protected] (H. Kruiniger)

Abstract

In this paper the symmetric linear rational expectations model from Kollintzas (1985) is generalized by allowing for multiple lags. By using a convenient decomposition of the matrix lag polynomial of the Euler-Lagrange equations that encompasses that for the Kollintzas model, it is shown that the model admits a unique stable solution. The stability condition derived by Kollintzas turns out to be valid in the more general setting. Moreover, a procedure is given to obtain a particularly useful representation of the solution that is as close as possible to a closed form representation. © 2000 Elsevier Science B.V. All rights reserved.

JEL classification: C61; C62; D92

Keywords: Rational expectations; Closed form solution; Unique solution; Time-to-build; Stability condition

1. Introduction

The modern macroeconomic literature includes many models which involve the solution to a stochastic dynamic optimization problem. The objective functions in these problems are often approximated by linear-quadratic specifications (e.g. see Christiano, 1990). There are now several methods available for solving such models. Generally, they involve the factorization of the spectral density-like matrix lag polynomial which appears in the first order conditions, the so-called Euler-Lagrange conditions (ELC). Hansen and Sargent (1981) outlined a relatively fast and revealing solution procedure for multivariate linear rational expectations (LRE) models. Although their procedures are applicable to a wide class of linear-quadratic models, in particular models with dynamic interrelation and multiple lags, their approach has some disadvantages. The main drawback of their procedures is that the solutions are expressed in terms of parameters from which the structural parameters cannot be uniquely identified. Several other methods have been suggested in the literature for the estimation and solution of LRE models that do not suffer from this drawback. Kollintzas (1985) proposes a solution method for a class of multivariate LRE models that satisfy a symmetry condition. When a model has this property, it is possible to obtain an attractive representation of the forward solution of the model by diagonalizing the matrices of the lag polynomial of the ELC simultaneously. Epstein and Yatchew (1985) outlined a simplified solution and estimation procedure for a sub-class of symmetric LRE models that involves a reparametrization of the model and yields a closed form solution. This procedure was extended by Madan and Prucha (1989) in order to estimate more general, possibly nonsymmetric models. Although all these procedures allow for identification of the structural parameters, they have been developed for more restrictive LRE models than the Hansen and Sargent procedures, i.e. for models that consist of second order difference equations.

In this paper we derive a convenient representation of the solution to a generalized version of Kollintzas' model which allows for multiple lags in the transition equations of the endogenous state variables. This wider framework encompasses a model with both adjustment costs and a general time-to-build structure as in Kydland and Prescott (1982), as well as other more complicated dynamic models. The nonsymmetric models studied by Madan and Prucha (1989) and Cassing and Kollintzas (1991) only allowed for time-to-build lags of one period. This is quite restrictive since it prevents one from studying variables that have different lag structures at the same time or from modelling the value-put-in-place pattern of the investment process.

the stability condition to the 'equivalence' restrictions mentioned above, all revision processes can be written as unique functions of the exogenous processes. The 'stability' restrictions on the revision processes are such that after substitution of these functions in the reduced form, the nonstable roots of the matrix lag polynomials cancel out. A lemma provides a procedure for imposing the stability condition on the solutions, that is, for finding the factorization of the matrix lag polynomial of the exogenous part of the reduced form that includes the factor with the nonstable roots that has to cancel out.

The paper is organized as follows. In the next section the model is stated. Section 2 also gives other examples of LRE models that feature multiple lags. The third section contains the main results of this paper. Section 4 concludes.

2. The model

In this section the symmetric linear rational expectations model with multiple lags is described using an example with time-to-build. The choice of this example is motivated by the fact that it is not restrictive and empirically relevant: Rossi (1988), Johnson (1994), Oliner et al. (1995), and Peeters (1995) all find evidence favoring factor demand equations with time-to-build.

Consider a (representative) agent who maximizes the expected discounted stream of cash flows given technological constraints and information on the economic environment. As far as the agent is concerned, this environment only consists of product and factor markets. He is a price taker in both markets. Following Kollintzas (1985), the gross cash flows are given by

$$g(x_t, \Delta x_{t+1}, k_t) = a'x_t + k_t'x_t - \tfrac{1}{2}x_t'Ex_t - x_t'H\Delta x_{t+1} - \tfrac{1}{2}\Delta x_{t+1}'G\Delta x_{t+1}, \qquad (2.1)$$

where $x_t$ is an $(F \times 1)$ vector of stocks of production factors, decided upon at equidistant points in time. The $(F \times F)$ constant matrices $E$, $G$ and $H$ are symmetric, $E$ is positive (semi-)definite symmetric (PSDS) and $G$ is PDS; in addition $\begin{bmatrix} E & H \\ H' & G \end{bmatrix}$ is PSDS and $G - H$ is nonsingular; $a$ is an $(F \times 1)$ vector of constants, while $k_t$ is an $(F \times 1)$ vector of exogenous technology shocks.

Furthermore, let $I_{f,t}$, $f = 1, \ldots, F$, be the gross changes of the stocks of the inputs (e.g. gross investments), let $p_{f,t}$ be the corresponding real factor prices, which are assumed to be exogenous as well, and let $c$ denote the real discount factor. Then the agent chooses a contingency plan for $\{x_t\}$ that maximizes

$$\lim_{T \to \infty} E_0 \sum_{t=0}^{T} c^t \Big[ g(x_t, \Delta x_{t+1}, k_t) - \sum_{f=1}^{F} p_{f,t} I_{f,t} \Big] \qquad (2.2)$$

subject to an initial condition, e.g. $x_0 = \bar{x}$, a terminal condition and the transition equations.

In the generalized LRE model, the transition equations allow for multiple lags. Kydland and Prescott (1982) stressed that it takes time to build new capital (see Montgomery (1995) for some descriptive statistics) and suggested a specification for investment that takes this feature into account. Let $s_{f,j,t}$ be the total size of an investment project of type $f$, $j$ periods from completion, $j = 0, \ldots, h_f$, where $h_f$ is the gestation lag. Then the building process for inputs of type $f$ can be formalized as

$$x_{f,t+1} = (1 - \delta_f)x_{f,t} + s_{f,0,t+1}, \qquad (2.3a)$$

$$s_{f,j,t+1} = s_{f,j+1,t}, \quad j = 0, \ldots, h_f - 1, \qquad (2.3b)$$

where $\delta_f$ is the depreciation rate of inputs of type $f$. Furthermore, let $\theta_{f,j}$ be the share of the resources spent on investment projects that are completed in $j$ periods. Total investment of type $f$ in period $t$ equals

$$I_{f,t} = \sum_{j=0}^{h_f} \theta_{f,j} s_{f,j,t} \quad \text{with} \quad \sum_{j=0}^{h_f} \theta_{f,j} = 1. \qquad (2.3c)$$

From combining Eqs. (2.3a)-(2.3c), it follows that

$$I_{f,t} = \sum_{j=0}^{h_f+1} \phi_{f,j} x_{f,t+j-1}, \quad f = 1, \ldots, F, \qquad (2.4)$$

where the $\phi_{f,j}$'s depend on the $\theta_{f,j}$'s and $\delta_f$.
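The mapping from the shares $\theta_{f,j}$ and the depreciation rate $\delta_f$ to the coefficients $\phi_{f,j}$ follows from substituting $s_{f,j,t} = s_{f,0,t+j} = x_{f,t+j} - (1-\delta_f)x_{f,t+j-1}$, obtained from (2.3a)-(2.3b), into (2.3c). A minimal sketch of that bookkeeping for a single factor, with an illustrative function name not taken from the paper:

```python
import numpy as np

def phi_from_theta(theta, delta):
    """Map time-to-build shares theta[0..h_f] and depreciation rate delta_f
    to the coefficients phi[0..h_f+1] of Eq. (2.4).

    Uses s_{f,j,t} = x_{f,t+j} - (1 - delta_f) x_{f,t+j-1}, so that
    I_{f,t} = sum_j theta_j s_{f,j,t} = sum_j phi_j x_{f,t+j-1}."""
    theta = np.asarray(theta, dtype=float)
    h = len(theta) - 1                      # gestation lag h_f
    phi = np.zeros(h + 2)
    phi[0] = -(1.0 - delta) * theta[0]      # coefficient on x_{f,t-1}
    for j in range(1, h + 1):
        phi[j] = theta[j - 1] - (1.0 - delta) * theta[j]
    phi[h + 1] = theta[h]                   # coefficient on x_{f,t+h_f}
    return phi

# Example: h_f = 2, equal value-put-in-place shares, 10% depreciation.
print(phi_from_theta([1/3, 1/3, 1/3], 0.10))
```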

There are other examples which fit the framework described above. One can think of a firm which builds tunnels and bridges. Here the time-to-build lags do not pertain to the inputs but to the outputs. Moreover, as these projects require specific capital and skilled labour, the firm tries to minimize the fluctuations in the total size of its projects as well as in the composition of the portfolio of projects. Another example is investment in a foreign country when profits can only be repatriated after some time.

3. The existence, uniqueness and representation of the solution

Maximization of Eq. (2.2) subject to Eqs. (2.1) and (2.4) implies the following first order conditions (ELC) for the stocks:

$$E_{t-h_f}\Big\{ [G_{f\cdot} - H_{f\cdot}]x_{t+1} - \Big[E_{f\cdot} + \frac{(1+c)}{c}G_{f\cdot} - 2H_{f\cdot}\Big]x_t + \frac{1}{c}[G_{f\cdot} - H_{f\cdot}]x_{t-1} \Big\} + a_f + E_{t-h_f}k_{f,t} - \sum_{j=0}^{h_f+1} \phi_{f,j}\, c^{1-j} E_{t-h_f}[p_{f,t-j+1}] = 0, \quad f = 1, \ldots, F, \qquad (3.1)$$

where $M_{f\cdot}$ is generic notation for the $f$th row of matrix $M$. Because of the gestation lags, the decisions concerning $x_{1,t+h_1}, x_{2,t+h_2}, \ldots, x_{F,t+h_F}$ are made at time $t$.

Defining $L$ as the lag operator¹ we can write system (3.1) succinctly as

$$E_t L^{-J} P(L) y_t + u_t = 0 \qquad (3.2)$$

with $y_t = [x_{1,t+h_1}, x_{2,t+h_2}, \ldots, x_{F,t+h_F}]'$, $J = h_1 - h_F + 1$ and, assuming $h_1 \geq h_2 \geq \cdots \geq h_F \geq 0$,

$$u_t = \begin{bmatrix} a_1 + E_t k_{1,t+h_1} - c^{-h_1} \sum_{j=0}^{h_1+1} \phi_{1,h_1-j+1}\, c^j E_t[p_{1,t+j}] \\ a_2 + E_t k_{2,t+h_2} - c^{-h_2} \sum_{j=0}^{h_2+1} \phi_{2,h_2-j+1}\, c^j E_t[p_{2,t+j}] \\ \vdots \\ a_F + E_t k_{F,t+h_F} - c^{-h_F} \sum_{j=0}^{h_F+1} \phi_{F,h_F-j+1}\, c^j E_t[p_{F,t+j}] \end{bmatrix} = d_t + U(L)v_t,$$

where $U(L) = U_0 + U_1 L + \cdots + U_q L^q$, $v_t$ is a white noise process and $d_t$ is the deterministic part of $u_t$,² and

$$P(\lambda) = \sum_{i=-J}^{J} A_i \lambda^{J-i}. \qquad (3.3)$$

In the definition of the $A_i$'s we use the definitions

$$C = [c_{jk}], \quad D = [d_{jk}] \quad (F \times F)$$

with

$$c_{jk} \equiv e_{jk} + \frac{(1+c)}{c} g_{jk} - 2h_{jk}, \quad d_{jk} \equiv g_{jk} - h_{jk}, \quad j, k = 1, 2, \ldots, F.$$

The $A_i = [a_{i,jk}]$ $(F \times F)$, $-J \leq i \leq J$, have nonzero elements³

$$a_{h_j-h_k+1,jk} = d_{jk}, \quad a_{h_j-h_k,jk} = -c_{jk}, \quad a_{h_j-h_k-1,jk} = \frac{1}{c} d_{jk}.$$

¹ The inverted lag operator $L^{-1}$ is the forward operator.

² For the moment we will assume that $u_t$ has a finite VMA representation of order $q$. A more general VARIMA model for $u_t$ can be handled as well, as we will see at the end of this section.
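To make the construction of the matrix lag polynomial concrete, the sketch below assembles $C$, $D$ and the nonzero $A_i$'s from $E$, $G$, $H$, the discount factor $c$ and the gestation lags, and evaluates $P(\lambda)$ as in Eq. (3.3). It is only an illustration of the definitions above; the function names and interface are not from the paper.

```python
import numpy as np

def build_lag_polynomial(E, G, H, c, h):
    """Return C, D and the dict {i: A_i} of Eq. (3.3), given symmetric E, G, H,
    the discount factor c and gestation lags h (sorted h_1 >= ... >= h_F >= 0)."""
    F = len(h)
    J = h[0] - h[-1] + 1
    C = E + (1.0 + c) / c * G - 2.0 * H            # c_jk of the text
    D = G - H                                      # d_jk of the text
    A = {i: np.zeros((F, F)) for i in range(-J, J + 1)}
    for j in range(F):
        for k in range(F):
            d = h[j] - h[k]
            A[d + 1][j, k] = D[j, k]               # a_{h_j-h_k+1,jk} = d_jk
            A[d][j, k] = -C[j, k]                  # a_{h_j-h_k,jk}   = -c_jk
            A[d - 1][j, k] = D[j, k] / c           # a_{h_j-h_k-1,jk} = d_jk / c
    return C, D, A, J

def P_of(lam, A, J):
    """Evaluate P(lambda) = sum_{i=-J}^{J} A_i lambda^{J-i}."""
    return sum(A[i] * lam ** (J - i) for i in range(-J, J + 1))
```

For $F = 1$ and $h = [0]$ this reduces to the second order polynomial $D - C\lambda + (D/c)\lambda^2$, i.e. $P(\lambda) = Q(\lambda)$ as in the Kollintzas model.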


Introducing the vectors of revision processes $\varepsilon_t^j = E_t(y_{t+j}) - E_{t-1}(y_{t+j})$, $j = 0, \ldots, h_1$, which are martingale differences, we obtain by replacing the expectations in Eq. (3.2) with realizations and revisions and after shifting the system $J$ periods back in time (see Appendix A)

$$P(L) y_t = \sum_{i=0}^{J-1} \sum_{j=i+1}^{J} A_j L^{J-j+i} \varepsilon_t^i - u_{t-J}. \qquad (3.4)$$

The effective (not multiplied by zero) revision processes in Eq. (3.4) are $\varepsilon_1^0, \varepsilon_2^0, \ldots, \varepsilon_2^{h_1-h_2}, \ldots, \varepsilon_F^0, \ldots, \varepsilon_F^{h_1-h_F}$, where the subscript now indicates a production factor. Thus we have $(h_1+1)F - \sum_{i=1}^F h_i$ revision processes.⁴ They are subject to constraints as we will show in the sequel. $P(\lambda)$ can also be represented as

$$P(\lambda) = [\lambda^{M_{ij}} q_{ij}(\lambda)]$$

with $q_{ij}(\lambda) = d_{ij} - c_{ij}\lambda + (d_{ij}/c)\lambda^2$, $i, j = 1, 2, \ldots, F$, and

$$M = \begin{bmatrix} J-1 & J+h_2-h_1-1 & \cdots & J+h_F-h_1-1 \\ J+h_1-h_2-1 & J-1 & \cdots & J+h_F-h_2-1 \\ \vdots & \vdots & & \vdots \\ J+h_1-h_F-1 & J+h_2-h_F-1 & \cdots & J-1 \end{bmatrix}.$$

Then it is easily seen that $P(\lambda)$ can be decomposed as⁵

$$P(\lambda) = \lambda^{J-1} \mathcal{P}^{-1}(\lambda) Q(\lambda) \mathcal{P}(\lambda), \qquad (3.5)$$

where $\mathcal{P}(\lambda) = \mathrm{diag}(\lambda^{h_1}, \lambda^{h_2}, \ldots, \lambda^{h_F})$ and

$$Q(\lambda) = [q_{ij}(\lambda)] = (G-H) - \Big(E + \frac{(1+c)}{c}G - 2H\Big)\lambda + \frac{1}{c}(G-H)\lambda^2 = S'\zeta\big(I + \zeta^{-1}\lambda + (I/c)\lambda^2\big)S, \qquad (3.6)$$

where $\zeta$ is a diagonal matrix with the eigenvalues of $-(C^{-1})D$ and $S^{-1}$ is the matrix with the corresponding eigenvectors.

Eq. (3.4) will now be rewritten in a form that is suitable for deriving constraints on the revision processes (for the derivation see also Appendix A):

$$P(L) y_t = P(L)\Big\{ \sum_{i=0}^{J-1} \varepsilon^i_{t-i} \Big\} + m_{t-J} - u_{t-J}, \qquad (3.7)$$

⁴ Hereafter the subscript of the revision processes will refer to a production factor or indicate time.

where

$$m_{t-J} = -\sum_{j=0}^{J-1} \sum_{h=1}^{J} A_{-h} L^{J+h+j} \varepsilon^j_t - \sum_{j=0}^{J-1} \sum_{h=0}^{j} A_h L^{J-h+j} \varepsilon^j_t.$$

Using the fact that $\sum_{i=0}^{J-1} \varepsilon^i_{t-i} = y_t - E_{t-J} y_t$ we obtain

$$P(L) E_{t-J} y_t = m_{t-J} - u_{t-J}. \qquad (3.8)$$

Replacing $P(L)$ in the recursive equation (3.8) with its decomposition (3.5) and premultiplying both sides by $L^{-h_1}\mathcal{P}(L)$ yields

$$L^{-h_F} Q(L) \mathcal{P}(L) E_{t-J} y_t = L^{-h_1} \mathcal{P}(L) \{m_{t-J} - u_{t-J}\}. \qquad (3.9)$$

The premultiplication shifts the equations forward in time. The LHS of the equation above contains only current and lagged values of $y$. Note that $\mathcal{P}(L) y_t = x_t$. If $h_1 > 0$ the components of the second term on the RHS are based on information after time $t-J$. This observation is at the root of the following lemma.

Lemma 1. In order that $y = \{y_t\}$ is a solution of Eq. (3.2), the revision processes $\varepsilon^j_t$ have to satisfy the constraints

$$E\{L^{-h_1}\mathcal{P}(L)[m_{t-J} - u_{t-J}] \mid \Omega_{t-i}\} = E\{L^{-h_1}\mathcal{P}(L)[m_{t-J} - u_{t-J}] \mid \Omega_{t-i-1}\}, \quad i = 0, \ldots, J-1. \qquad (3.10)$$

Proof. Take expectations of both sides of Eq. (3.9) with respect to the information sets $\Omega_{t-i}$, $i = 0, \ldots, J$, and calculate the difference between two adjacent expressions.⁶ See also Broze et al. (1995). □

The dimension of the set of solutions that satisfy Eq. (3.1) depends on the number of free revision processes. The number of effective (univariate) martingale differences in Eq. (3.4) equals $(h_1+1)F - \sum_{i=1}^F h_i$. After imposing the constraints in Eq. (3.9), there are only $F$ free revision processes left, as we will prove now.

Lemma 2. The restrictions in Eq. (3.10) reduce the number of free revision processes in Eq. (3.4) by $h_1 F - \sum_{i=1}^F h_i$.⁷

⁶ Since $m_{t-J}$ depends only on information up to $t-J$, in some cases the equalities in (3.10) hold trivially and do not imply restrictions on the revision processes. For example, subtracting the expectation of the first equation in (3.7) with respect to $\Omega_{t-i-1}$ from the expectation of the same equation with respect to $\Omega_{t-i}$ for $i = 0, \ldots, h_1-h_F$ does not yield a single constraint.

Proof. See Appendix B.

So all the revision processes in Eq. (3.4) but $F$ are subject to restrictions. The space of candidate solutions to the optimal control problem that satisfy the Euler-Lagrange conditions (3.2) will be reduced even further by imposing the transversality condition or a slightly stronger condition like a stability condition. Such conditions guarantee uniqueness of the solution just as they do when there is no time-to-build.⁸ In the latter case $P(\lambda) = Q(\lambda)$.

Recall the decompositions in Eqs. (3.5) and (3.6). Then premultiplying both sides of Eq. (3.4) by $(S'\zeta)^{-1} L^{-h_1} \mathcal{P}(L)$ leads to

$$\big(I + \zeta^{-1}L + (I/c)L^2\big) S L^{-h_F} \mathcal{P}(L) y_t = L^{-h_F}(S'\zeta)^{-1} \Big\{ \sum_{i=0}^{J-1} \sum_{j=i+1}^{J} \mathcal{P}(L) A_j L^{1-j+i} \varepsilon^i_t - \mathcal{P}(L) u_{t-1} \Big\}. \qquad (3.11)$$

By virtue of Eq. (3.10), the number of free revision processes in Eq. (3.11) can be reduced to $F$, say $\varepsilon^0_t$.⁹ If we look for a solution $y_t$ that belongs to the class of VARIMA processes, we can parameterize $\varepsilon^0_t$ as $\Phi^0 v_t$. Likewise, we can write $\varepsilon^i_t$ as $\Phi^i v_t$, $i = 1, \ldots, J-1$, where the $\Phi^i$ are $F \times F$ matrices. Thus the RHS of Eq. (3.11) is a distributed lag of $v_t$, $R(L)v_t = (R_0 + R_1 L + \cdots + R_g L^g)v_t$ with $g = J+q$.

At this point let us introduce some notation for later. We denote the vectors comprising the effective revision processes by $\bar{\varepsilon}^i_t$. Further, we implicitly define the matrices $\bar{\Phi}^i$ by $\bar{\varepsilon}^i_t \equiv \bar{\Phi}^i v_t$. Now, the $\bar{\Phi}^i$'s are subject to restrictions like Eq. (3.10). On the other hand, we note that knowledge of the rows of the $\Phi^i$ corresponding to the other revision processes in the $\varepsilon^i_t$ is not required because they cancel out in Eq. (3.11). Finally, we define the matrix $\bar{\Phi} \equiv [(\bar{\Phi}^0)' \cdots (\bar{\Phi}^{J-1})']'$ and the vector $\bar{\varepsilon} \equiv [(\bar{\varepsilon}^0)' \cdots (\bar{\varepsilon}^{J-1})']'$.

A solution of the stochastic optimal control problem must also satisfy the transversality condition. Sufficient conditions for this condition to hold are that the exogenous stochastic process $v_t$ and a solution $y_t$ are of exponential order less than $c^{-1/2}$ (see Sargent, 1987, pp. 200-201). The $u_t$ process has this property by assumption. To verify whether $y_t$ has this property, we investigate the LHS of Eq. (3.11). Let us define $\hat{Q}(\lambda) \equiv (I + \zeta^{-1}\lambda + (I/c)\lambda^2)$. Denote the roots of the $f$th equation in $\hat{Q}(1/\lambda) = 0$ by $\lambda_{1f}$ and $\lambda_{2f}$. All pairs of roots $\{\lambda_{1f}, \lambda_{2f}\}$, $f = 1, \ldots, F$, satisfy $\lambda_{1f}\lambda_{2f} = c^{-1}$. Let $\lambda_{1f}$ be the smaller modulus root of each pair. Then we have $|\lambda_{1f}| \leq c^{-1/2}$ and $|\lambda_{2f}| \geq c^{-1/2}$. The stability condition given in Kollintzas (1985) is necessary and sufficient for $|\lambda_{1f}| < c^{-1/2}$ to hold. For instance, this condition is satisfied when the adjustment costs are strongly separable ($H = 0$). Notice that if $|\lambda_{1f}| < c^{-1/2}$, then $\lambda_{1f}, \lambda_{2f} \in \mathbb{R}$. But in order that the solution $y_t$ is of exponential order less than $c^{-1/2}$, we also need that $R(L)$ can be factorized as $(I - \Lambda^+ L)\bar{R}(L)$, where $\Lambda^+ = \mathrm{diag}(\lambda_{21}, \ldots, \lambda_{2F})$ and $\bar{R}(L)$ is a lag polynomial of order $g-1$, for otherwise we are not able to get rid of that part of the LHS of Eq. (3.11) that causes violation of the condition, i.e. the factor $(I - \Lambda^+ L)$.

⁸ See Kollintzas (1985).
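Per component $f$, $\hat{Q}(1/\lambda) = 0$ amounts to $\lambda^2 + \zeta_f^{-1}\lambda + 1/c = 0$, whose roots indeed multiply to $c^{-1}$. A minimal numerical check of the stability requirement, taking $\zeta$ and $c$ as given (function names are illustrative, not from the paper):

```python
import numpy as np

def characteristic_roots(zeta_f, c):
    """Roots of the f-th component of Q_hat(1/lambda) = 0, i.e. of
    lambda^2 + lambda / zeta_f + 1/c = 0, ordered by modulus
    (lambda_1f smaller, lambda_2f larger; lambda_1f * lambda_2f = 1/c)."""
    r = sorted(np.roots([1.0, 1.0 / zeta_f, 1.0 / c]), key=abs)
    return r[0], r[1]

def satisfies_stability(zeta, c):
    """Check the strict requirement |lambda_1f| < c^{-1/2} for every f,
    where zeta is the diagonal matrix from the decomposition above."""
    bound = c ** (-0.5)
    return all(abs(characteristic_roots(z, c)[0]) < bound for z in np.diag(zeta))
```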

The lemma below provides a condition that is equivalent to the existence of the factorization but more easily verifiable.

Lemma 3. Let $N(\lambda) = N_0 + N_1\lambda + \cdots + N_w\lambda^w$ be a polynomial of order $w$. This polynomial can be factorized as $(I - C\lambda)\bar{N}(\lambda)$ if and only if

$$\sum_{i=0}^{w} C^{w-i} N_i = 0. \qquad (3.12)$$

Proof. See Appendix C.
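The construction used in the proof of Lemma 3 (Appendix C) is straightforward to implement: check condition (3.12) and, when it holds, recover $\bar{N}_0, \ldots, \bar{N}_{w-1}$ from the recursion $\bar{N}_0 = N_0$, $\bar{N}_i = N_i + C\bar{N}_{i-1}$. A sketch with illustrative names (in the application below one substitutes $R_i$ for $N_i$ and $\Lambda^+$ for $C$):

```python
import numpy as np

def factor_out(N, C, tol=1e-10):
    """Given matrix coefficients N = [N_0, ..., N_w] of N(lambda), try to
    factorize N(lambda) = (I - C lambda) Nbar(lambda) as in Lemma 3.

    Returns [Nbar_0, ..., Nbar_{w-1}], or None when condition (3.12),
    sum_i C^{w-i} N_i = 0, fails."""
    w = len(N) - 1
    condition = sum(np.linalg.matrix_power(C, w - i) @ N[i] for i in range(w + 1))
    if not np.allclose(condition, 0.0, atol=tol):
        return None
    Nbar = [N[0]]                                  # Nbar_0 = N_0
    for i in range(1, w):
        Nbar.append(N[i] + C @ Nbar[-1])           # Nbar_i = N_i + C Nbar_{i-1}
    # consistency check: the last coefficient must satisfy N_w = -C Nbar_{w-1}
    assert np.allclose(N[w], -C @ Nbar[-1], atol=tol)
    return Nbar
```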

Substituting $R(L)$ for $N(\lambda)$ and $\Lambda^+$ for $C$ gives the condition

$$\sum_{i=0}^{g} (\Lambda^+)^{g-i} R_i = 0. \qquad (3.13)$$

The matrices $R_0, \ldots, R_g$ depend on $F \times F$ unknown entries of $\Phi^0$. But Eq. (3.13) imposes $F \times F$ restrictions on the $\Phi^0_{ij}$'s. In general, these restrictions uniquely identify $\Phi^0$. To see this, consider the RHS of Eq. (3.11) apart from the last term

$$L^{-h_F}(S'\zeta)^{-1} \sum_{i=0}^{J-1} \sum_{j=i+1}^{J} \mathcal{P}(L) A_j L^{1-j+i} \varepsilon^i_t \equiv (\hat{R}_0 + \hat{R}_1 L + \cdots + \hat{R}_{J-1}L^{J-1})\, e_t, \qquad (3.14)$$

where $e_t = (\varepsilon^0_t \cdots \varepsilon^{J-1}_t)'$, $\hat{R}_i \equiv (cS'\zeta)^{-1}[K_{i1} \cdots K_{iJ}]$, $i = 0, \ldots, J-1$; the $\hat{R}_i$'s are $F \times (J \times F)$ matrices and the $K_{ij}$'s are $F \times F$ matrices with $K_{ij} = 0$, $j > i+1$, and $K_{ij} = \sum_{k=1}^{\bar{f}(i)} \bar{A}_{J+j-i-1,k}$, $1 \leq j \leq i+1$,¹⁰ where

$$\bar{A}_{J,1} = \begin{bmatrix} 0 & \cdots & 0 & c\,d_{1F} \\ 0 & \cdots & 0 & 0 \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & 0 \end{bmatrix}, \quad \bar{A}_{J-1,1} = \begin{bmatrix} 0 & \cdots & 0 & -\bar{c}_{1F} \\ 0 & \cdots & 0 & 0 \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & 0 \end{bmatrix}, \quad \bar{A}_{J-2,1} = \begin{bmatrix} 0 & \cdots & 0 & d_{1F} \\ 0 & \cdots & 0 & 0 \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & 0 \end{bmatrix},$$

$$\bar{A}_{J-3,1} = 0, \; \ldots, \quad \bar{A}_{h_1-h_{F-1}+1,1} = \begin{bmatrix} 0 & \cdots & 0 & c\,d_{1,F-1} & 0 \\ 0 & \cdots & 0 & 0 & 0 \\ \vdots & & \vdots & \vdots & \vdots \\ 0 & \cdots & 0 & 0 & 0 \end{bmatrix}, \quad \bar{A}_{h_1-h_{F-1},1} = \begin{bmatrix} 0 & \cdots & 0 & -\bar{c}_{1,F-1} & 0 \\ 0 & \cdots & 0 & 0 & 0 \\ \vdots & & \vdots & \vdots & \vdots \\ 0 & \cdots & 0 & 0 & 0 \end{bmatrix}, \; \ldots,$$

$$\bar{A}_{1,1} = \begin{bmatrix} c\,d_{11} & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}, \quad \bar{A}_{J,2} = \begin{bmatrix} 0 & \cdots & 0 & 0 \\ 0 & \cdots & 0 & c\,d_{2F} \\ 0 & \cdots & 0 & 0 \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & 0 \end{bmatrix}, \quad \bar{A}_{J-1,2} = \begin{bmatrix} 0 & \cdots & 0 & 0 \\ 0 & \cdots & 0 & -\bar{c}_{2F} \\ 0 & \cdots & 0 & 0 \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & 0 \end{bmatrix}, \; \ldots,$$

$$\bar{A}_{h_1-h_2+1,2} = \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 \\ 0 & c\,d_{22} & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}, \; \ldots, \quad \bar{A}_{J,F} = \begin{bmatrix} 0 & \cdots & 0 & 0 \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & 0 \\ 0 & \cdots & 0 & c\,d_{FF} \end{bmatrix},$$

with $\bar{c}_{ij} \equiv c\,c_{ij}$,

$\bar{f}(i) = f$ for $h_{f+1} - h_F < i \leq h_f - h_F$, with $f = 1, \ldots, F$, $i = 0, \ldots, J-1$, and

Then condition (3.13) can be rewritten in the form

$$\cdots \qquad (3.16)$$

matrix with the $i$th row replaced by $U_{s,i\cdot}$.

Combining the restrictions on the revision processes in (3.10) and (3.16) gives

$$\begin{bmatrix} A_0 & A_1 & A_2 & \cdots & A_{J-2} & A_{J-1} \\ & & & \vdots & & \end{bmatrix} \cdots \qquad (3.17)$$

We can write the system in (3.17) more compactly as

$$W e_t + Z v_t = 0, \qquad (3.18)$$

where $W$ and $Z$ are defined implicitly.

We note that, as mentioned in the proof of Lemma 2, $-h_F F + \sum_{i=1}^F h_i$ 'restrictions' in the system are false but they are included for convenience: it is easier to show the independence of the true restrictions by looking at the extended system, as will become clear below. Furthermore, in the true restrictions the revision processes in Eq. (3.17) that are not mentioned below Eq. (3.4) are multiplied by zeros and cancel out. For the last $F$ rows this follows from the observation that Eq. (3.16) is obtained from Eq. (3.14) by first decomposing the $A_j$ in terms, then premultiplying each term by $\Lambda^+$ raised to some power, where ... horizons) are multiplied by the same rows of $A_j$ as in Eq. (3.14). The latter is obtained by shifting the rows in Eq. (3.4) back in time and by premultiplying the result by a constant matrix.

We write the system of the true restrictions contained in Eq. (3.18) as

$$\bar{W} \bar{\varepsilon}_t + \bar{Z} v_t = 0. \qquad (3.18')$$

For uniqueness of the linear solution $\{y_t\}$ to Eq. (3.1) it is sufficient that $W$ is nonsingular. We will demonstrate that in general this condition is satisfied. In the model without time-to-build $W = A_1 = G$ is nonsingular.

Theorem 1. The matrix $W$ defined in Eqs. (3.17) and (3.18) is generally nonsingular, that is, $\forall c$, $M_{W,c} \equiv \{E, G, H \mid \det(W_c) = 0\}$, the space of values of the entries of the matrices $E$, $G$, and $H$ for which $\det(W_c) = 0$, has Lebesgue measure zero.

Proof of Theorem 1. See Appendix D.

The uniqueness of the solution to the model guarantees the existence of the closed form solution (CFS)

$$x_t = K x_{t-1} + S^{-1}\bar{R}(L) L^{h_F} v_t, \qquad (3.19)$$

where $K = S^{-1}\Lambda^- S$ and $\Lambda^- \equiv \mathrm{diag}(\lambda_{11}, \ldots, \lambda_{1F})$.

From Eq. (3.19) it is clear that the transversality condition can also be imposed on the solutions of the model with multiple lags.

Let $M = I - K$. If $|M| \neq 0$, then Eq. (3.19) admits the flexible accelerator form

$$x_t - x_{t-1} = M(x^*_t - x_{t-1}) \quad \text{with} \quad x^*_t = M^{-1} S^{-1} \bar{R}(L) L^{h_F} v_t. \qquad (3.20)$$
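As a purely illustrative sketch, taking $S$, $\Lambda^-$ and the MA coefficients $\bar{R}_i$ as given (they are produced by the strategy described below), the CFS (3.19) can be simulated recursively; the interface below is not from the paper.

```python
import numpy as np

def simulate_cfs(S, lambda_minus, Rbar, v, h_F):
    """Simulate the closed form solution (3.19),
        x_t = K x_{t-1} + S^{-1} Rbar(L) L^{h_F} v_t,  with K = S^{-1} Lambda^- S,
    given S (F x F), the stable roots lambda_minus (length F), the MA
    coefficients Rbar = [Rbar_0, ..., Rbar_{g-1}] (each F x F), innovations
    v (T x F) and the smallest gestation lag h_F."""
    T, F = v.shape
    S_inv = np.linalg.inv(S)
    K = S_inv @ np.diag(lambda_minus) @ S
    x = np.zeros((T, F))
    for t in range(1, T):
        ma = np.zeros(F)
        for i, R in enumerate(Rbar):
            if t - h_F - i >= 0:
                ma = ma + R @ v[t - h_F - i]
        x[t] = K @ x[t - 1] + S_inv @ ma
    return x, K
```

The flexible accelerator form (3.20) then uses $M = I - K$, provided $\det(M) \neq 0$.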

Notice that the autoregressive part of the closed form solution is invariant with respect to the order of the gestation lags. Our strategy for deriving expressions for the MA parameters $\bar{R}_i$ in the CFS in terms of the structural parameters entails the following steps:

1. Obtain expressions for $\bar{\varepsilon}_t$; from Eq. (3.18'), $\bar{\varepsilon}_t = -\bar{W}^{-1}\bar{Z}v_t$, i.e. $\bar{\Phi} = -\bar{W}^{-1}\bar{Z}$.
2. Substitute these expressions in the RHS of Eq. (3.11) and find formulae for $R_0$ up to $R_g$, $g = J+q$. Make use of Eq. (3.14).
3. Use the procedure in the proof of Lemma 3 to obtain the factorization $R(L) = (I - \Lambda^+ L)\bar{R}(L)$, i.e. $\bar{R}_0, \bar{R}_1, \ldots, \bar{R}_{g-1}$.

To estimate the structural parameters in the CFS an iterative procedure can be used which entails the three steps mentioned above but carried out in reverse order. At each iteration one has to invert the (sparse) matrix $\bar{W}$ in Eq. (3.18') to obtain $\bar{\Phi}$. We note that the dimensions of $\bar{W}$ (and also $W$) do not depend on $q$. Furthermore, one can reduce the computational burden by replacing $S$ and $\Lambda^+$ in Eq. (3.17) (or Eq. (3.18')) by estimates based on an initial consistent estimate of $K$,¹¹ which would result in a system that is linear in the technology parameters and the $U_i$. Alternatively, one can of course replace the $S$ and $\Lambda^+$ in Eq. (3.17) by new estimates at each iteration. So in practice we have found a representation of the solution that is as close as possible to a closed form representation.

¹¹ First, notice that the eigenvalues of $K$ are equal to the diagonal elements of $\Lambda^-$ and recall that

The identification of the structural parameters hinges on whether the factor demand equations are estimated in a system together with the production (profit) function and (observable) exogenous processes or separately. If the latter are not included in the system, the MA parameters provide no information on the structural parameters. If the production (profit) function is not included, one can at best identify the parameters in $C$ and $D$ (up to a constant factor) but not those in $E$, $G$, and $H$.

When one ignores the restrictions on the MA parameters, one still has a semi closed form representation of the solution. Epstein and Yatchew (1985) discuss an exhaustive set of properties of the accelerator matrix, $M$, that are implied by the structure of the problem. By demanding that the estimates of $M$ exhibit these properties and by estimating the factor demand equations together with the production function and the price equations, more precise estimates of the structural parameters can be obtained.

At the outset we assumed that $u_t \sim \mathrm{VMA}(q)$. However, prices sometimes follow more general VARIMA processes. In such cases the formula of the CFS should be amended by using the following procedure. The autoregressive lag polynomial of the exogenous processes $\{u_t\}$ can be factorized such that one factor is a diagonal matrix lag polynomial with common roots on the diagonals: $\bar{\Pi}(L)$. For instance, if the prices are IMA processes this factor includes the unit roots. Then the VARI lag polynomial of the solution becomes $\bar{\Pi}(L)(I - KL)$. If the remainder of $\{u_t\}$ is stationary, then that part can be represented as a VMA.

The stability condition can still be imposed. However, the procedure to obtain expressions for the MA parameters in the CFS in terms of the structural parameters becomes unfeasible when the order of the MA is high ($\infty$). In practice the MA-part of the solution will often be approximated by a VARMA model.¹²

In the model without adjustment costs, we do not need to impose stability to obtain a unique solution; the restrictions in Lemma 1 are sufficient.

Finally, similar results as those obtained above can also be derived for nonsymmetric LRE models ($H \neq H'$) with multiple lags. As examples of such models one can think of a model with time-to-build and gross adjustment costs, or a model with time-to-build and net adjustment costs stemming from new investments, i.e. changes in the $y$'s.¹³ A decomposition of the second order matrix lag polynomial of the nonsymmetric model can be found in Kollintzas (1986). He also gives a stability condition for such models. Again, imposing the stability condition yields a unique solution to the model. However, the derivation of a closed form representation of the solution to the model (especially for the MA-part) is likely to be very complicated as the factor $\Lambda^+$ in the formulae above has to be replaced by a non-diagonal matrix. Nevertheless, the solution admits an accelerator formulation where the accelerator matrix $M$ has a structure very similar to that in Eqs. (3.19) and (3.20). Again, for (efficient) estimation one can exploit cross-equation restrictions between $M$ and the production function parameters and properties of $M$ (see Madan and Prucha, 1989).

4. Concluding comments

In this paper we have considered a generalized version of Kollintzas' (1985) symmetric LRE model which allows for multiple lags in the transition equations of the state variables. We have shown that this model admits a unique stable solution and we have derived a representation of that solution that is as close as possible to a closed form. We have also seen that the stability condition given in Kollintzas (1985) still applies to the solutions of the generalized model. Kollintzas has shown that his condition is equivalent to a discrete time version of the stability conditions that can be found in Magill (1977) and Magill and Scheinkman (1979). These conditions impose a bound on the benefit-cost ratio of deviations from the equilibrium stock levels.

¹² This does not undo the possibility to impose cross-equation restrictions though, when the CFS is estimated as part of a system of equations.

¹³ This adjustment cost function can be found in Madan and Prucha (1989). In the symmetric model presented in Section 2, the adjustment costs stem from completed investments, that is, changes in the stocks of productive inputs ($x$). We notice that the so-called symmetric LRE with multiple lags is asymmetric in the decision variables unless the gestation lags (the $h_f$

By allowing for multiple lags, the LRE model discussed in this paper yields solutions that follow VARMA processes with more general MA parts. Although the restrictions between the MA parameters of the CFS and the structural parameters can be too complicated to be exploited, the restrictions on the AR parameters can be imposed. Indeed, estimation of the demand equations together with the production function would allow imposition of cross-equation restrictions to obtain more precise estimates of the parameters.

Acknowledgements

This research was partly sponsored by the Economics Research Foundation, which is part of the Netherlands Organization for Scientific Research (NWO). The author is indebted to A. Kirman, A. Perea y Monsuwé, and S. Schim van der Loeff, and to two anonymous referees for helpful suggestions. All errors are solely mine.

Appendix A

A.1. Proof of Eq. (3.4)

Eq. (3.4) is the result of the following computations. The system we are investigating is

$$E_t L^{-J} P(L) y_t + u_t = 0, \qquad (3.2)$$

substituting

$$P(\lambda) = \sum_{j=-J}^{J} A_j \lambda^{J-j} \qquad (3.3)$$

and lagging the system $J$ periods gives

$$E_{t-J} \sum_{j=-J}^{J} A_j L^{J-j} y_t + u_{t-J} = 0$$

$$\Leftrightarrow \quad P(L) y_t + \Big\{ E_{t-J} \sum_{j=1}^{J} A_j L^{J-j} y_t \Big\} - \sum_{j=1}^{J} A_j L^{J-j} y_t + u_{t-J} = 0$$

$$\Leftrightarrow \quad P(L) y_t + \sum_{j=1}^{J} A_j \cdots$$

recalling that $\varepsilon^j_t = E_t(y_{t+j}) - E_{t-1}(y_{t+j})$ and changing the order of summation yields Eq. (3.4).

Starting with the result just shown, we have

$$\cdots$$

which can be written as Eq. (3.7).

Appendix B. Proof of Lemma 2

The proof of Lemma 2 consists of three steps. First we will demonstrate that the restrictions of Lemma 1 do not involve revision processes that are different from those mentioned just below Eq. (3.4). Next we show that $(1-h_F)F + \sum_{i=1}^F h_i$ restrictions are redundant. Finally, we look at a system which includes the effective restrictions. To carry out the first step, we will concentrate on the restriction which corresponds to taking expectations with respect to $\Omega_{t-J+1}$ and $\Omega_{t-J}$ and taking differences afterwards, since this restriction contains the revision processes with the most distant horizons and we could expect new revision processes to show up here, if anywhere.

Consider, without loss of generality, the $i$th row of $L^{-h_1}\mathcal{P}(L)m_{t-J}$ (B.1). The most recent observation of the $k$th production factor $y_k$ that appears in (B.1) is $y_{k,t-h_k+h_F}$. To show this we consider the following two cases.

If $h_k > h_i+1$ the most recent observation of $y_k$ in (B.1) is part of the first double sum and corresponds to the smallest value of $h$ for which $A_{-h,ik} \neq 0$, that is to $h = h_k - h_i - 1$. Note that in this case $A_{h,ik} = 0$ for $h \in \mathbb{N}$. If $h_k \leq h_i+1$ the most recent observation of $y_k$ in Eq. (B.1) is part of the second double sum and corresponds to the highest value of $h$ for which $A_{h,ik} \neq 0$, that is to $h = h_i - h_k + 1$. Thus the revision processes with the most distant horizons are $\varepsilon^{h_1-h_k}_k$, $k = 1, \ldots, F$. But they are included in the set of those mentioned below Eq. (3.4). This observation completes the first step.

The redundancy is easily demonstrated. Again we look at the $i$th row of $L^{-h_1}\mathcal{P}(L)[m_{t-J} - u_{t-J}]$. Since $m_{t-J} - u_{t-J}$ contains information only until time $t-J$, taking expectations of the $i$th row of $L^{-h_1}\mathcal{P}(L)[m_{t-J} - u_{t-J}]$ with respect to $\Omega_{t-h_i+h_F-1}$ is equivalent to taking expectations of the $i$th row with respect to $\Omega_{t-h_i+h_F-1+j}$ for each $j \in \mathbb{N}$. So at least $(1-h_F)F + \sum_{i=1}^F h_i$ restrictions are redundant.

system, where $q^* = \min(q, J-1)$

expressions, for $i = 1, \ldots, F$. Apart from the restrictions in Eq. (3.10), system (B.2) also includes $-h_F F + \sum_{i=1}^F h_i$ incorrect but noninformative equations which therefore can be ignored.¹⁴

In order to prove that the remaining restrictions from Eq. (3.10) are effective, it will now be shown that generally the matrix at the LHS of (B.2) has maximum rank. The Laplace development of the determinant of the matrix that is obtained by leaving out the last $F$ columns includes the product of the diagonal entries of that matrix, which are the diagonal entries of all $A_0$ matrices. Only the diagonal entries of $A_0$ depend on the diagonal entries of $E$. Since the sum of their powers in other terms of the Laplace development is lower, the determinant differs from zero in general.¹⁵ But if all rows in (B.2) are independent, then the $h_1 F - \sum_{i=1}^F h_i$ true restrictions are also independent. Finally, from the first step of the proof, where it was shown that no revision processes are included in Eq. (3.10) other than those mentioned below Eq. (3.4), it is immediately clear that by taking expectations of Eq. (3.9) with respect to $\Omega_{t-J-j}$ with $j \in \mathbb{N}$ one does introduce new revision processes, but also that the number of free revision processes remains constant: one can derive a new restriction for each new revision process that is introduced. □

¹⁴ The vector $[(\varepsilon^1_t)' \cdots (\varepsilon^{J-1}_t)']'$ in (B.2) also includes $-h_F F + \sum_{i=1}^F h_i$ revision processes which are not mentioned below Eq. (3.4). They only enter the false 'restrictions', i.e. the restrictions which do not emanate from Eq. (3.10), so the number of additional 'restrictions' in (B.2) equals the number of (additional) revision processes introduced in these equations. It can easily be verified that the additional equations in Eq. (B.2) do not constrain the revision processes mentioned below Eq. (3.4), by looking at the diagonal of the matrix at the LHS of (B.2), which is comprised of $A_1$-matrices. Generally, all diagonal entries of $A_1$ are different from zero since $G$ is positive definite and it is reasonable to assume that $G_{ii} - H_{ii} \neq 0$ $\forall i$. Now, each of the $-h_F F + \sum_{i=1}^F h_i$ additional revision processes is multiplied by such an entry. This also enables us to locate all the false 'restrictions'.

Appendix C. Proof of Lemma 3

Suppose this factorization is possible. Then $N(\lambda) = N_0 + N_1\lambda + \cdots + N_w\lambda^w = (I - C\lambda)(\bar{N}_0 + \bar{N}_1\lambda + \cdots + \bar{N}_{w-1}\lambda^{w-1}) = \bar{N}_0 + (\bar{N}_1 - C\bar{N}_0)\lambda + (\bar{N}_2 - C\bar{N}_1)\lambda^2 + \cdots + (-C\bar{N}_{w-1})\lambda^w$. Equating powers of $\lambda$ gives

$$N_0 = \bar{N}_0, \quad N_i = \bar{N}_i - C\bar{N}_{i-1}, \quad i = 1, \ldots, w-1, \quad N_w = -C\bar{N}_{w-1}. \qquad (C.1)$$

It follows that $\sum_{i=0}^{w} C^{w-i}N_i = (C^w - C^{w-1}C)\bar{N}_0 + (C^{w-1} - C^{w-2}C)\bar{N}_1 + \cdots + (C - IC)\bar{N}_{w-1} = 0$. But condition (3.12) is also sufficient. We propose a factorization and next verify its validity. Use the formulae for $N_0$ through $N_{w-1}$ in (C.1) recursively to obtain $\bar{N}_0, \bar{N}_1, \bar{N}_2, \ldots, \bar{N}_{w-1}$. Then $\bar{N}_{w-1} = \sum_{i=0}^{w-1} C^{w-1-i}N_i$. Premultiplying both sides by $-C$ and using condition (3.12) yields $-C\bar{N}_{w-1} = N_w$, which is in agreement with the last equality in (C.1). □

Appendix D. Proof of Theorem 1

The proof of Theorem 1 proceeds as follows. First we will present and prove a lemma that would imply Theorem 1 immediately if the entries of $W$ were linear functions of the entries of $E$, $G$ and $H$. Then we will show that Theorem 1 also holds more generally as it has been formulated. We will deal with nonlinearities by conditioning such that, for given values of particular functions of the parameters, the matrix is linear in the free parameters. Then Lemma 4 applies once again. The proof is completed by showing that the lemma can be applied for arbitrary values of these functions.

Lemma 4.¹⁶ Let $p : \mathbb{R}^n \to \mathbb{R}$ be a polynomial in $n$ variables and let $p \not\equiv 0$. Then $M(p) \equiv \{x \in \mathbb{R}^n \mid p(x) = 0\}$ has Lebesgue measure zero.

Proof of Lemma 4. The proof proceeds by induction over the degree of $p$. Let the degree of $p$ equal 0. Then $p$ is a constant. As $p \not\equiv 0$, $p$ has no roots. Thus $M(p) = \emptyset$, and has Lebesgue measure zero.

Induction step: Let the degree of $p$ equal $r \geq 1$ and let the lemma be true for $p$ with degree $\leq r-1$.

Since the degree of $p$ is at least 1, there is an $i \in \{1, \ldots, n\}$ with $\partial p/\partial x_i \not\equiv 0$. Assume w.l.o.g. that $\partial p/\partial x_n \not\equiv 0$.

Define the function

$$f(x) := \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ p(x_1, \ldots, x_n) \end{bmatrix}.$$

Because $p(x) = 0$ for all $x \in M(p)$, it follows that $f(M(p)) \subseteq \mathbb{R}^{n-1} \times \{0\}$. If $f$ is a differentiable function which is locally invertible almost everywhere, i.e. invertible for all $x \in \mathbb{R}^n \setminus M_0$ where $M_0$ is a set that has measure zero, then $f$ is a local diffeomorphism almost everywhere. As will be shown below, we can cover $M(p) \setminus M_0$ by a countable union of open sets on each of which $f$ is a (global) diffeomorphism. The image of the intersection of each of these open sets with $M(p) \setminus M_0$ under $f$ is a subset of $\mathbb{R}^{n-1} \times \{0\}$, which has measure zero in $\mathbb{R}^n$. If $f$ is a diffeomorphism on $\mathbb{R}^n \setminus M_0$, it follows that the intersection of each open set with $M(p) \setminus M_0$ has Lebesgue measure zero. Since $M(p) \setminus M_0$ is a countable union of sets with Lebesgue measure zero, $M(p) \setminus M_0$ has measure zero. From the fact that $M_0$ has measure zero, it follows that $M(p)$ has measure zero.

Thus it remains to show that $f$ is differentiable and locally invertible and that $M(p) \subseteq \bigcup_{i \in I} B_i$, where the $B_i$'s are open sets on which $f$ is a global diffeomorphism and $I$ is a countable set.

It is easy to verify that $f$ is differentiable. Next we focus on the property of local invertibility.

The derivative of $f$, $D_x f$, is

$$D_x f = \begin{bmatrix} 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & \cdots & 0 & 1 & 0 \\ * & * & \cdots & * & \partial p/\partial x_n \end{bmatrix}.$$

Thus $\det(D_x f) = \partial p/\partial x_n$. This is a function of $x_1, \ldots, x_n$. We know that $\partial p/\partial x_n$ is a polynomial in $\mathbb{R}^n$ of degree $\leq r-1$. From the induction assumption, it follows that $M_0 = \{x \mid \partial p/\partial x_n = 0\}$ has measure zero. Thus $\det(D_x f) \neq 0$ $\forall x \in \mathbb{R}^n \setminus M_0$. Applying the implicit function theorem, it follows that $f$ is locally invertible on $\mathbb{R}^n \setminus M_0$.

To show that we can cover $M(p)$ by a countable union of open sets on which $f$ is a global diffeomorphism, we will first prove that $M(p)$ is a closed set. $M(p)$ is a closed set iff every convergent sequence $x^k \in M(p)$ converges to a limit in $M(p)$. Now let $x^k$ be a sequence in $M(p)$ converging to $x$. Thus $p(x^k) = 0$ $\forall k$. From the fact that $p$ is continuous, it follows that $p(x) = 0$.

We can always cover $M(p)$ by the union of open sets on each of which $f$ is a diffeomorphism. We will show that we can find a countable number of such sets. Define $M_k \equiv M(p) \cap C_k$ where $C_k = \{x \in \mathbb{R}^n \mid \|x\|_\infty \leq k\}$.

Since $C_k$ is compact and $M(p)$ is closed, $M_k$ is compact. Thus we can find a cover $(B_i)_{i \in I_k}$ for $M_k$, where $I_k$ has a finite number of elements. It follows that $M(p) = \bigcup_k M_k \subseteq \bigcup_k \bigcup_{i \in I_k} B_i = \bigcup_{i \in \hat{I}} B_i$, where $\hat{I} = \bigcup_k I_k$ is a countable set. □

Proof of Theorem 1. The determinant of a general matrix $N$ is a polynomial, where the entries of the matrix are the variables. From Lemma 4 it follows that the space of values of the entries of $N$ for which $\det(N) = 0$ has Lebesgue measure zero. The same lemma can be invoked when the entries are linear functions of (hyper-)parameters, even if the number of parameters is smaller than the number of entries.

Since the entries of the last $F$ rows of $W$ are nonlinear functions of the entries of $E$, $G$, and $H$, Theorem 1 has not been proven yet.

From Eq. (3.15) we know that the matrices $B_i$, $i = 0, \ldots, J-1$, depend in a nonlinear fashion on $\Lambda^+$, $S$, and $\zeta$. These matrices in turn depend on $(C^{-1})D$ (see the discussion following Eqs. (3.6) and (3.11)), where $C$ and $D$ are linear in (the entries of) $E$, $G$, and $H$, given $c$ (see the discussion following Eqs. (2.1) and (3.3)). Thus $W$ depends on $C$ and $D$. Define $X \equiv -(C^{-1})D$. The conditions imposed on $E$, $G$ and $H$ (see the discussion following Eq. (2.1)) imply that $C \equiv E + G - 2H + (1/c)G$ is PDS and $D \equiv G - H$ is nonsingular, and therefore $X$ is nonsingular. By fixing two matrices from the set $\{C, D, X\}$ (i.e. by fixing the values of the entries in two matrices from the set $\{C, D, X\}$) without violating the conditions following Eq. (2.1), for instance $C$ and $X$, we fix the third matrix and we fix $W$.

By fixing $X$ we impose restrictions on $C$ and $D$ ($E$, $G$, and $H$) as $CX + D = 0$ must hold. Substitute these restrictions in Eq. (3.17) by replacing $D$ by $-CX$. Then, given $X$ (and $c$), $W$ is linear in $C$. According to Lemma 4, $M_{W,c}$ has Lebesgue measure zero as long as $\det(W)$ is not trivially zero. This is indeed not the case for almost all choices of $X$, as will be shown now.

To this end we will develop the determinant of $W$. Some terms in the Laplace development consist only of the product of the entries on the diagonals of the $A_1$-matrices and $F$ entries of $B_0$. The sum of all these terms will be shown to differ from zero.

This can easily be checked in the special case where $E = 0$, $G = \tilde{G}$, and $H = 0$, where $\tilde{G}$ is an arbitrary PDS matrix. See Appendix E. From this simple example it follows that $\det(W)$ is not (always) the null function. Now let us return to the case of an arbitrary matrix $X$.

So given $X$, $\det(W)$ is not the null function unless a particular choice for $X$ nullifies it to start with. Only for a very exceptional choice for $X$ (a choice from a set of measure zero within the space of all possible choices for $X$) might we expect this to happen, as will be shown now.

Observe that when we fix $X$, $C$ (or $D$) is still unrestricted; to be able to apply Lemma 4 given a particular value of $X$, it suffices to show that $\det(W) \neq 0$ for one particular choice of the value of $C$ (or $D$). As has already been indicated above, this will be done by looking for (algebraically) unique terms in the Laplace development of $\det(W)$. We will focus on the case where $C = \mathrm{diag}(c_{11}, \ldots, c_{FF})$. Next we replace the $c_{ii}$'s by entries from $X$ and $D$. Note that the entries from $X$ and $D$ have to satisfy the constraints $X + \mathrm{diag}(c_{11}, \ldots, c_{FF})^{-1}D = 0$. Note further that $X$ can still take any value by choosing $D$ appropriately and, when the diagonal entries of $C$ are free but those of $D$ are not, by choosing the $c_{ii}$'s and the nondiagonal entries of $D$ appropriately.

In the remainder of the proof we will also exploit the fact that the $d_{ii}$'s only show up in the diagonals of $A_{-1}$, $A_0$, and of $A_{+1}$, and in $B_0$ but not in $B_i$, $\forall i \neq 0$. As the latter is not obvious, we will prove that $d_{ff}$, $f = 1, \ldots, F$, show up in $B_0$ but not in $B_i$, $\forall i \neq 0$, in Appendix F. Specifying only those terms that involve (one of) the $d_{ii}$'s, $B_0$ equals $\sum_{k=1}^{F} (\Lambda^+)^{g-h_k+h_F}(cS'\zeta)^{-1}\bar{A}_{h_1-h_k+1,k} + \cdots$.

Now we will show that there exists a unique part in the Laplace development of $\det(W)$, LD-part for short, which is the sum of all LD-terms that include the factor $\prod_{k=1}^{F} d_{kk}^{J-1}$ originating from the first $(J-1)F$ rows of $W$, and the factor $\prod_{k=1}^{F} d_{kk}$ originating from the last $F$ rows. This LD-part is meant to be unique in the sense that there are no other terms in the Laplace development of $W$ that include the factor $\prod_{k=1}^{F} d_{kk}^{J}$. If such a unique LD-part exists and if the values of the $d_{ii}$'s can be chosen freely given $X$, then $\det(W) \neq 0$.

Note that because the first $F(J-1)$ rows of $W$ only have $J-1$ rows which involve a specific $d_{ii}$, $i = 1, \ldots, F$, the highest power of $d_{ii}$ that could be encountered in the part of an LD-term that originates from the first $F(J-1)$ rows is $J-1$. Furthermore, the last $F$ rows of $W$ contribute at most $F$ $d_{ii}$'s to an LD-term, which will all be different and originate from the first $F$ columns, that is, from $B_0$.

Thus in order to obtain an LD-term that includes $\prod_{k=1}^{F} d_{kk}^{J}$ both parts of the $W$ matrix (the upper $F(J-1)$ rows and the lower $F$ rows) must contribute as many $d_{ii}$'s as possible (in fact $J-1$ and 1 are upper bounds on the powers of the $d_{ii}$'s that are contributed to the LD-terms by the first $F(J-1)$ ... different $d_{ii}$'s, while in $B_0$, $d_{ii}$ shows up in the $i$th column in every row (entry), for $i = 1, \ldots, F$. The $d_{ii}$'s that are contributed by the last $F$ rows to the unique LD-part necessarily originate from the first $F$ columns of $W$, that is, from $B_0$. As a consequence the $J-1$ $d_{ii}$'s, for $i = 1, \ldots, F$, all have to come from $W_m$, which is a $F(J-1) \times F(J-1)$ matrix. This in turn uniquely determines the origin of the $d_{ii}$'s in the first $F(J-1)$ rows as they all have to come from the diagonals of the $A_1$ matrices in $W_m$.

Given the unique origin of the $d_{ii}$'s in the LD-part that come from the first $F(J-1)$ rows, we are still left with many possible combinations of the entries in $B_0$ (which correspond to $d_{ii}$'s), as all possible combinations result in $\prod_{k=1}^{F} d_{kk}$. Because we cannot pursue our search for a unique LD-term further at this point, we add up all LD-terms that involve $\prod_{k=1}^{F} d_{kk}^{J}$. The next step is to show that these terms almost always, that is for almost all $X$, do not counterbalance each other. As the factor in these LD-terms that comes from the first $F(J-1)$ rows is always the same, namely $c^{F(J-1)}\prod_{k=1}^{F} d_{kk}^{J-1}$, we focus on the factors in the LD-terms which originate from $B_0$. Define the following matrix $Y = [y_{ij}]$, $y_{ij} = [(cS'\zeta)^{-1}]_{ij}(\lambda^+_i)^{g-h_j+h_F}$. This matrix comprises the 'coefficients' of the $d_{ii}$'s in the LD-terms that originate from $B_0$. The sum of all coefficients of the $\prod_{k=1}^{F} d_{kk}$ factors in the LD-terms which originate from $B_0$ equals $\det(Y)$. Thus a necessary and sufficient condition for the existence of a unique LD-part that is characterized by the factor $\prod_{k=1}^{F} d_{kk}^{J}$, i.e. a condition for this part of the LD not to equal zero, is that $\det(Y) \neq 0$ (apart from the trivial condition that, given $X$, the $d_{ii}$'s do not equal zero). We will now show that $\det(Y) \neq 0$ for almost all $X$. Notice that $Y$ is a function of $X$, and of $X$ alone.

For each matrix $X$ we have a pair $\zeta$, $S^{-1}$, the diagonal matrix with the unique eigenvalues of $X$ and a matrix of eigenvectors of $X$, respectively, and vice versa. $\zeta_{kk}$ alone completely determines $\lambda^+_k$ for each $k = 1, \ldots, F$. Since $X$ can take any value, $\zeta$ and $S$ can take any value, and vice versa. If we have chosen the values of the $\zeta_{kk}$'s and thereby have fixed the $\lambda^+_k$'s, $Y$ can still take any arbitrary value by choosing $S$ appropriately. In fact, given (almost all) $\zeta$, there is an injective relation between $Y$ and $S$. Thus given $\zeta = \zeta^*$, for almost all $X$ with the eigenvalues equal to $\zeta^*_{kk}$, $k = 1, \ldots, F$, or equivalently for almost all $S$, $\det(Y) \neq 0$. For almost all values of $\zeta^*_{kk}$, $k = 1, \ldots, F$, this argument applies. Furthermore, by letting the $\zeta^*_{kk}$'s take every possible value in $\mathbb{R}$, $k = 1, \ldots, F$, and by choosing $S$ given $\zeta^*$ appropriately, all $X$'s are covered (can be generated). Thus for almost all $X$, the condition $\det(Y) \neq 0$ is satisfied. As a consequence, for almost all $X$ there is a unique part in the Laplace development of $W$ that depends on $\prod_{k=1}^{F} d_{kk}^{J}$. (Recall that it is unique because it is the only LD-part that involves $\prod_{k=1}^{F} d_{kk}^{J}$.) Because the $d_{ii}$'s can be chosen freely, we can choose their values, given $X$, in such a way that $\det(W) \neq 0$. So for almost all $X$, $\det(W)$ is not trivially zero and, by Lemma 4, $M_{W,c}$ has Lebesgue measure zero given $X$. Since the space of values of the entries of $X$ for which $\det(W)$ is the null function has Lebesgue measure zero, $M_{W,c}$ has Lebesgue measure zero unconditionally. □

Appendix E. The determinant of $W$

We examine $\det(W)$ when $E = 0$, $G = \tilde{G}$, and $H = 0$, where $\tilde{G}$ is an arbitrary PDS matrix.

In this case $C = [(1+c)/c]\tilde{G}$ and $D = \tilde{G}$. It follows that $B_0 = -[(1+c)/c]\,\mathrm{diag}(\tilde{g}_{11}, \ldots, \tilde{g}_{FF})$ and $B_i = 0$ for $i = 1, \ldots, J-1$. Partition $W$ as

$$W = \begin{bmatrix} W_1 & W_m \\ B_0 & W_2 \end{bmatrix}.$$

By the theorem on the partitioned inverse, (the absolute value of) the determinant of $W$ equals

$$|W| = |W_m|\,|B_0 - W_2 W_m^{-1} W_1|.$$

Since $B_i = 0$ for $i = 1, \ldots, J-1$ we have $W_2 = 0$. Thus $|B_0 - W_2 W_m^{-1} W_1| = |B_0| = \prod_{i=1}^{F} B_{0,ii} \neq 0$ because $D = \tilde{G}$ is nonsingular. It is easily seen from the definitions that the product of the diagonal entries of $W_m$ is a unique term in the Laplace development of $\det(W_m)$. Thus in general $|W_m| \neq 0$ as well. It follows that $\det(W) \neq 0$ for almost all choices of $\tilde{G}$ when $E = 0$, $G = \tilde{G}$, and $H = 0$.

Appendix F

$d_{ff}$, $f = 1, \ldots, F$, show up in $B_0$ but not in $B_i$, $i \neq 0$.

The proof proceeds as follows. The $d_{ff}$'s (could only) enter the formulae of the $B_i$'s through $K_{j,i+1}$, $j = i, \ldots, J-1$, $i = 0, \ldots, J-1$, which are sums of $\bar{A}$ matrices. Now $c\,d_{ff}$ only appears as the $(f,f)$-entry of $\bar{A}_{h_1-h_f+1,f}$, $f = 1, \ldots, F$. We will show that these matrices are only part of $B_0$. Abstracting from factors that only depend on $X$, the $B_i$ matrices equal

$$\sum_{j=i}^{J-1} K_{j,i+1} = \sum_{j=i}^{J-1} \sum_{k=1}^{\bar{f}(j)} \bar{A}_{J+i-j,k}.$$

Suppose that $\bar{A}_{h_1-h_f+1,f}$ enters $B_i$; then $J+i-j = J+h_F-h_f$, $k = f$. It follows that $i = h_F - h_f + j$ and $\bar{f}(j) \geq f$. From the latter we have $j \leq h_f - h_F$, and therefore $i \leq 0$. Thus $d_{ff}$ only shows up in $B_0$. □

References

Broze, L., Gouriéroux, C., Szafarz, A., 1995. Computation of multipliers in multivariate rational expectations models. Econometric Theory 11, 229-257.

Cassing, S., Kollintzas, T., 1991. Recursive factor of production interrelations and endogenous cycling. International Economic Review 32, 417-440.

Christiano, L., 1990. Linear-quadratic approximation and value-function iteration: a comparison. Journal of Business and Economic Statistics 8, 99-113.

Epstein, L.G., Yatchew, A.J., 1985. The empirical determination of technology and expectations. Journal of Econometrics 27, 235-258.

Johnson, P.A., 1994. Capital utilization and investment when capital depreciates in use: some implications and tests. Journal of Macroeconomics 16, 243-259.

Kollintzas, T., 1985. The symmetric linear rational expectations model. Econometrica 53, 963-976.

Kollintzas, T., 1986. A non-recursive solution for the linear rational expectations model. Journal of Economic Dynamics and Control 10, 327-332.

Kydland, F.E., Prescott, E.C., 1982. Time-to-build and aggregate fluctuations. Econometrica 50, 1345-1370.

Madan, D.B., Prucha, I.R., 1989. A note on the estimation of nonsymmetric dynamic factor demand models. Journal of Econometrics 42, 275-283.

Magill, M.J.P., 1977. Some new results on the local stability of the process of capital accumulation. Journal of Economic Theory 15, 174-210.

Magill, M.J.P., Scheinkman, J.A., 1979. Stability of regular equilibria and the correspondence principle for symmetric variational problems. International Economic Review 20, 297-315.

Montgomery, M.R., 1995. 'Time-to-build' completion patterns for nonresidential structures, 1961-1991. Economics Letters 48, 155-163.

Oliner, S., Rudebusch, G., Sichel, D., 1995. New and old models of business investment: a comparison of forecasting performance. Journal of Money, Credit and Banking 27, 806-826.

Peeters, H.M.M., 1995. Time-to-Build and Interrelated Investments and Labour Demand under Rational Expectations. Springer, Heidelberg.

Rossi, E.R., 1988. Comparison of dynamic factor demand models. In: Barnett, W., Berndt, E., White, H. (Eds.), Dynamic Econometric Modeling. Cambridge University Press, Cambridge, pp. 357-376.
