
www.elsevier.com/locate/econbase

Factor ARMA representation of a Markov process

Serge Darolles (a,*), Jean-Pierre Florens (b), Christian Gouriéroux (c)

(a) Société Générale Asset Management, Hedge Fund Quantitative Research, and CREST, Laboratoire de Finance Assurance, 15 Boulevard Gabriel Péri, Bâtiment Malakoff 2, Timbre J035, 92245 Malakoff Cedex, France
(b) GREMAQ and IDEI, Toulouse, France
(c) CEPREMAP and CREST, Laboratoire de Finance Assurance, Malakoff, France

Received 7 July 2000; accepted 14 November 2000

Abstract

We decompose a stationary Markov process $(X_t)$ as: $X_t = a_0 + \sum_{j=1}^{\infty} a_j Z_{j,t}$, where the $Z_j$ processes admit ARMA specifications. These decompositions are deduced from a nonlinear canonical decomposition of the joint distribution of $(X_t, X_{t-1})$. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Markov process; Reversibility; Dynamic factors; Nonlinear; Canonical analysis

JEL classification: C14; C22

1. Introduction

The aim of this note is to decompose a stationary Markov process $(X_t)$ as:

$$X_t = a_0 + \sum_{j=1}^{\infty} a_j Z_{j,t} \qquad (1.1)$$

where the $Z_j$ processes admit ARMA specifications. These decompositions are deduced from a nonlinear decomposition of the joint distribution of $(X_t, X_{t-1})$.

More precisely we assume:

Assumption A.1. $(X_t)$ is a stationary Markov process, with continuous joint distribution and marginal p.d.f., denoted by $f(x_t, x_{t-1})$ and $f(x_t)$, respectively.

*Corresponding author. Tel.: +33-1-5637-8333; fax: +33-1-5637-8665.
E-mail addresses: darolles@ensae.fr (S. Darolles), florens@cict.fr (J.-P. Florens), gouriero@ensae.fr (C. Gouriéroux).
0165-1765/01/$ – see front matter © 2001 Elsevier Science B.V. All rights reserved.


Assumption A.2.

$$\int \frac{f^2(x_t, x_{t-1})}{f(x_t)\, f(x_{t-1})} \, dx_t \, dx_{t-1} < +\infty$$

Under Assumptions A.1–A.2 the joint p.d.f. can be decomposed as (see Lancaster, 1968):

$$f(x_t, x_{t-1}) = f(x_t)\, f(x_{t-1}) \left[ 1 + \sum_{j=1}^{\infty} \lambda_j \varphi_j(x_t)\, \psi_j(x_{t-1}) \right] \qquad (1.2)$$

where the canonical correlations $\lambda_j$, $j$ varying, are ranked in decreasing order and take values between 0 and 1. The current and lagged canonical variates $\varphi_j$, $j$ varying, and $\psi_j$, $j$ varying, respectively, satisfy the restrictions:

$$E[\varphi_j(X_t)] = E[\psi_j(X_t)] = 0, \quad \forall j \qquad (1.3)$$

$$E[\varphi_j^2(X_t)] = E[\psi_j^2(X_t)] = 1, \quad \forall j \qquad (1.4)$$

$$E[\varphi_j(X_t)\varphi_k(X_t)] = E[\psi_j(X_t)\psi_k(X_t)] = 0, \quad \forall j \neq k \qquad (1.5)$$

In Section 2 we consider the case of a reversible Markov process and use the canonical decomposition to exhibit factors $Z_j$ with AR(1) dynamics. The general case is studied in Section 3.

2. The reversible case

Let us consider a reversible stationary Markov process, i.e. a process with identical distributional properties in initial and reversed times. The reversibility condition implies the symmetry of the joint distribution: $f(x_t, x_{t-1}) = f(x_{t-1}, x_t)$. Hence, under Assumptions A.1–A.2, the stationary Markov process is reversible if and only if the assumption below is satisfied.

Assumption A.3. The canonical variates satisfy $\varphi_i = \pm \psi_i$, $\forall i \geq 1$.

A reversible Markov process admits a simple factor autoregressive representation.

Proposition 2.1. Under Assumptions A.1–A.3, a Markov process $(X_t)$ can be decomposed as:

$$X_t = a_0 + \sum_{j=1}^{\infty} a_j Z_{j,t} \qquad (2.1)$$

where the $Z_j$ processes satisfy:

$$Z_{j,t} = \lambda_j Z_{j,t-1} + u_{j,t} \qquad (2.2)$$

with $E[u_{j,t} \mid X_{t-1}] = 0$, and $\mathrm{Cov}[u_{j,t}, u_{l,t}] = (1 - \lambda_j^2)\, \delta_{jl}$, $\forall j, l$, where $\delta_{jl}$ is the Kronecker symbol.
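Proposition 2.1 can be illustrated by simulation. The sketch below (our addition; the value $\rho = 0.6$ and the sample size are illustrative) simulates a standardized gaussian AR(1) process, builds the factors $Z_{j,t} = He_j(X_t)/\sqrt{j!}$ of Example 2.1 below, and checks that each factor has lag-one autocorrelation close to $\lambda_j = \rho^j$, as the AR(1) dynamics (2.2) requires.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)
rho, n = 0.6, 200_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    # stationary gaussian AR(1) with unit marginal variance
    x[t] = rho * x[t - 1] + math.sqrt(1 - rho**2) * eps[t]

def factor(j, u):
    """Z_{j,t} = He_j(X_t) / sqrt(j!): the j-th canonical variate (Example 2.1)."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return hermeval(u, c) / math.sqrt(math.factorial(j))

acf1 = {}
for j in (1, 2, 3):
    z = factor(j, x)
    acf1[j] = float(np.corrcoef(z[1:], z[:-1])[0, 1])
    print(j, acf1[j], rho**j)  # sample lag-1 autocorrelation vs lambda_j = rho^j
```

The sampling error of the autocorrelation estimates is of order $n^{-1/2}$, so the match with $\rho^j$ is tight at this sample size.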


Proof. From the linear decomposition of any function of $L^2(f)$ in the orthonormal basis of canonical variates, we get:

$$X_t = a_0 + \sum_{j=1}^{\infty} a_j \varphi_j(X_t)$$

Let us define $Z_{j,t} = \varphi_j(X_t)$. By applying the canonical decomposition (1.2) we get:

$$E[Z_{j,t} \mid X_{t-1}] = \lambda_j \psi_j(X_{t-1}) = \lambda_j Z_{j,t-1}$$

after choosing the signs of the $\varphi_j$ such that $\varphi_j = \psi_j$ (Assumption A.3); the conditional second-order moments of $u_{j,t} = Z_{j,t} - \lambda_j Z_{j,t-1}$ then follow from the orthonormality restrictions (1.3)–(1.5). □

Example 2.1. An AR(1) gaussian process is an example of reversible process. The canonical decomposition of the joint gaussian p.d.f. with zero mean, and covariance matrix:

$$\begin{bmatrix} \sigma^2 & \rho \sigma^2 \\ \rho \sigma^2 & \sigma^2 \end{bmatrix}$$

with $\rho > 0$ is such that (see e.g. Wiener, 1958, lecture 5; Wong and Thomas, 1962):

$$\lambda_j = \rho^j, \qquad \varphi_j(x) = \psi_j(x) = \frac{1}{\sqrt{j!}} H_j\!\left(\frac{x}{\sigma}\right)$$

up to a joint change of sign, where the Hermite polynomials $H_j$ are defined by:

$$H_j(x) = \sum_{m=0}^{[j/2]} \frac{j!}{(j-2m)!\, m!\, 2^m} (-1)^{j-m} x^{j-2m} \qquad (2.3)$$


The first Hermite polynomials are: $H_1(x) = -x$, $H_2(x) = x^2 - 1$, $H_3(x) = -x^3 + 3x$. For a negative autocorrelation, we only have to replace $Y$ by $-Y$ to deduce that the canonical correlations are $\lambda_j = |\rho|^j$.
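The expansion (2.3) can be checked directly against these first polynomials. In the sketch below (our addition), the sign factor $(-1)^{j-m}$ is our reading of (2.3), chosen so that the formula reproduces $H_1(x) = -x$, $H_2(x) = x^2 - 1$, $H_3(x) = -x^3 + 3x$ as stated in the text.

```python
import math

def hermite(j, x):
    """H_j(x) from (2.3), with the (-1)^(j-m) sign matching H_1(x) = -x."""
    return sum(
        math.factorial(j) / (math.factorial(j - 2 * m) * math.factorial(m) * 2**m)
        * (-1) ** (j - m) * x ** (j - 2 * m)
        for m in range(j // 2 + 1)
    )

x = 1.7  # arbitrary evaluation point
print(hermite(1, x), -x)              # H_1(x) = -x
print(hermite(2, x), x**2 - 1)        # H_2(x) = x^2 - 1
print(hermite(3, x), -x**3 + 3 * x)   # H_3(x) = -x^3 + 3x
```

These $H_j$ are $(-1)^j$ times the probabilists' Hermite polynomials $He_j$, so the normalization $\varphi_j = H_j / \sqrt{j!}$ is unaffected up to sign.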

Example 2.2. Autoregressive gamma processes are useful for specifying time dependent duration models. In this case the conditional distribution of the Markov process is such that $X_t / c$ follows the noncentral gamma distribution $\gamma(\delta, \beta X_{t-1})$. The process is stationary if $|\beta c| < 1$, and the associated marginal distribution is such that $\frac{1 - \beta c}{c} X_t$ follows the centered gamma distribution $\gamma(\delta, 0)$. The canonical decomposition of the joint distribution of $(X_t, X_{t-1})$ is such that (see Gouriéroux and Jasiak, 2000):

$$\lambda_j = (\beta c)^j$$

with canonical variates proportional to $L_j\!\left(\frac{1 - \beta c}{c} x\right)$, where $L_j$ is a generalized Laguerre polynomial:

$$L_j(x) = \sum_{k=0}^{j} \binom{j + \delta - 1}{j - k} \frac{(-x)^k}{k!}$$
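The autoregressive gamma process is easily simulated through the standard Poisson mixture representation of the noncentral gamma distribution: $X_t / c \sim \gamma(\delta + N_t)$ with $N_t \sim \mathcal{P}(\beta X_{t-1})$. The sketch below (our addition; parameter values are illustrative) checks the stationary mean $c\delta/(1 - \beta c)$ and the first-order autocorrelation $\beta c$, which follows from the conditional mean $E[X_t \mid X_{t-1}] = c\delta + \beta c\, X_{t-1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
c, delta, beta = 1.0, 2.0, 0.5        # |beta * c| < 1 ensures stationarity
n = 200_000
x = np.empty(n)
x[0] = c * delta / (1 - beta * c)     # start at the stationary mean
for t in range(1, n):
    # noncentral gamma transition via its Poisson mixture representation
    shape = delta + rng.poisson(beta * x[t - 1])
    x[t] = c * rng.gamma(shape)

mean_hat = float(x.mean())
acf1_hat = float(np.corrcoef(x[1:], x[:-1])[0, 1])
print(mean_hat)   # close to c * delta / (1 - beta * c) = 4
print(acf1_hat)   # close to beta * c = 0.5
```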

Example 2.3. Other examples of reversible Markov processes are discretized unidimensional diffusion processes (see e.g. Hansen et al., 1998) or one-to-one transformations of AR(1) gaussian processes (see e.g. Granger and Newbold, 1976).

As an illustration of this property let us consider the limiting case corresponding to $a_1 > 0$ and $a_j = 0$, $\forall j > 1$, and assume distinct eigenvalues $\lambda_i$. The factor decomposition (2.1) becomes:

$$X_t = a_0 + a_1 Z_{1,t} + v_{1,t} \qquad (2.4)$$

where the error term $v_{1,t}$ is a martingale difference sequence and the $Z_1$ process satisfies the autoregressive relation:

$$Z_{1,t} = \lambda_1 Z_{1,t-1} + u_{1,t} \qquad (2.5)$$

In general, the error terms $u_{j,t}$ are conditionally heteroscedastic. More precisely, let us introduce the linear decomposition of the squared canonical variates:

$$\varphi_j^2(X_t) = \sum_{i=0}^{\infty} c_{j,i}\, \varphi_i(X_t), \quad \text{with } \varphi_0 = 1 \qquad (2.6)$$


Hence, the error terms $u_{j,t}$ are conditionally homoscedastic if and only if $c_{j,i}(\lambda_i - \lambda_j^2) = 0$, $\forall i \geq 1$, which is satisfied if there exists $i_0$, $i_0 \geq 1$, with $c_{j,i} = 0$, $\forall i \neq i_0$, and $\lambda_{i_0} = \lambda_j^2$.

Corollary 2.1. Under Assumptions A.1–A.3, the predictions of a transformed variable at various horizons are:

$$E\left[\varphi(X_{t+h}) \mid X_t\right] = b_0 + \sum_{i=1}^{\infty} \lambda_i^h b_i Z_{i,t} \qquad (2.7)$$

Proof. The factor decomposition is valid for any transformation of the process:

$$\varphi(X_t) = b_0 + \sum_{i=1}^{\infty} b_i Z_{i,t} \quad \text{(say)}$$

and we deduce immediately the predictions of the transformed variable at any horizon. □

The factor decomposition introduced above can be used to compare the linear and nonlinear predictions of a reversible Markov process (see Donelson and Maltz, 1972 and Granger and Newbold, 1976 for nonlinear transformations of gaussian processes). Indeed let us consider the factor decomposition of the process. The nonlinear prediction is:

$$E\left[X_t \mid X_{t-1}\right] = \sum_{j=0}^{\infty} \lambda_j a_j \varphi_j(X_{t-1}) \qquad (2.8)$$

whereas the quadratic prediction error is:

$$\gamma_{NL} = \sum_{j=1}^{\infty} (1 - \lambda_j^2)\, a_j^2 \qquad (2.9)$$

On the other hand, the linear prediction is easily computable and takes the form:

$$\hat{E}\left[X_t \mid X_{t-1}\right] = a_0 + \bar{\lambda}\,(X_{t-1} - a_0), \quad \text{where } \bar{\lambda} = \frac{\sum_{j=1}^{\infty} a_j^2 \lambda_j}{\sum_{j=1}^{\infty} a_j^2} \qquad (2.10)$$

with associated quadratic prediction error:

$$\gamma_L = (1 - \bar{\lambda}^2) \sum_{j=1}^{\infty} a_j^2 \qquad (2.11)$$

We directly note from (2.9) and (2.11) that:

$$\gamma_L - \gamma_{NL} = \sum_{j=1}^{\infty} a_j^2 \; V_{a^2}(\lambda) > 0$$

where $V_{a^2}(\lambda)$ is the variance of the canonical correlations $\lambda_j$ computed with the weights $a_j^2$.
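The positivity of this gap can be verified on a toy configuration. In the sketch below (our addition; the weights $a_j$ and correlations $\lambda_j$ are arbitrary illustrative values), the linear prediction error is computed as $(1 - \bar{\lambda}^2)\sum_j a_j^2$ with $\bar{\lambda}$ the $a_j^2$-weighted mean of the $\lambda_j$, and the weighted-variance expression is compared with the direct difference of the two prediction errors.

```python
import numpy as np

a = np.array([0.9, 0.4, 0.2, 0.1])           # a_j, j = 1..4 (illustrative)
lam = np.array([0.7, 0.49, 0.343, 0.2401])   # lambda_j, ranked in decreasing order

w = a**2 / (a**2).sum()                      # normalized weights a_j^2
lam_bar = float((w * lam).sum())             # first-order autocorrelation of X_t

gamma_nl = float(((1 - lam**2) * a**2).sum())       # nonlinear error, as in (2.9)
gamma_l = float((1 - lam_bar**2) * (a**2).sum())    # linear error

var_lam = float((w * (lam - lam_bar) ** 2).sum())   # V_{a^2}(lambda)
gap = float((a**2).sum() * var_lam)
print(gamma_l - gamma_nl, gap)               # the two expressions coincide, gap > 0
```

The identity holds because $\gamma_L - \gamma_{NL} = \sum_j a_j^2 \lambda_j^2 - \bar{\lambda}^2 \sum_j a_j^2$, which is exactly the weighted variance of the $\lambda_j$ scaled by $\sum_j a_j^2$.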


Finally, the conditional p.d.f. at horizon $h$ admits the expansion:

$$f(x_{t+h} \mid x_t) = f(x_{t+h}) \left[ 1 + \sum_{j=1}^{\infty} \lambda_j^h\, \varphi_j(x_{t+h})\, \varphi_j(x_t) \right] \qquad (2.12)$$
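Expansion (2.12) can be checked numerically in the gaussian case of Example 2.1 (standardized, $\sigma = 1$), where $\varphi_j = He_j/\sqrt{j!}$ and $\lambda_j = \rho^j$. In the sketch below (our addition), the truncation order and evaluation points are arbitrary; the truncated series is compared with the exact $h$-step conditional density $N(\rho^h x_t,\, 1 - \rho^{2h})$.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def phi(j, u):
    """phi_j(u) = He_j(u) / sqrt(j!): gaussian canonical variates."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return hermeval(u, c) / math.sqrt(math.factorial(j))

rho, h, J = 0.5, 3, 25            # autocorrelation, horizon, truncation order
x_t, x_th = 0.4, -0.8             # conditioning point and evaluation point

# right-hand side of (2.12): marginal density times the truncated expansion
marg = math.exp(-x_th**2 / 2) / math.sqrt(2 * math.pi)
series = 1.0 + sum((rho**j) ** h * phi(j, x_th) * phi(j, x_t)
                   for j in range(1, J + 1))
approx = float(marg * series)

# exact h-step conditional density: N(rho^h x_t, 1 - rho^(2h))
s2 = 1 - rho ** (2 * h)
exact = math.exp(-(x_th - rho**h * x_t) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)
print(approx, exact)
```

Since $\lambda_j^h = \rho^{jh}$ decays geometrically, a few dozen terms already reproduce the exact transition density to high accuracy.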

3. The general case

In the general case a Markov process also admits a factor decomposition, but the factor dynamics are more complicated, as shown in the proposition below.

Proposition 3.1. Under Assumptions A.1–A.2, a Markov process $(X_t)$ can be decomposed as:

$$X_t = a_0 + \sum_{j=1}^{\infty} a_j Z_{j,t} \qquad (3.1)$$

where the $Z_j$ processes satisfy ARMA-type dynamics.

Proof. Let us introduce the linear decompositions of $X_t$ and of $\tilde{Z}_{j,t} = \psi_j(X_t)$. We obtain the following dynamics from Proposition 2.1:

$$\tilde{Z}_{j,t} = \lambda_j \tilde{Z}_{j,t-1} + u_{j,t}$$

for the $\tilde{Z}_j$ processes appearing in the decomposition of $X_t$. Using the decomposition formula for the $\tilde{Z}_{j,t}$, and since the $\tilde{Z}_{j,t}$ have zero mean, we finally obtain the factor dynamics equation. □


The error terms $v_{1,t}$ and $w_{1,t}$ are martingale difference sequences obtained by aggregating the effects of the $Z_{j,t}$ variables for $j \geq 2$. The dynamics of the $Z_1$ process satisfies the ARMA(1,1)-type relation:

$$Z_{1,t} = \lambda_1 b_{11} Z_{1,t-1} + u_{1,t} + \lambda_1 w_{1,t-1} \qquad (3.6)$$

More generally the process $(X_t)$ will satisfy a linear state space representation with pure ARMA state variables if there is a finite number of nonzero eigenvalues.

References

Diebold, F., Gunther, T., Tay, A., 1997. Evaluating density forecasts. University of Pennsylvania, Discussion Paper.
Donelson, J., Maltz, F., 1972. A comparison of linear versus nonlinear prediction for polynomial functions of the Ornstein–Uhlenbeck process. Journal of Applied Probability 9, 725–744.
Gouriéroux, C., Jasiak, J., 2000. Autoregressive gamma processes. CREST Discussion Paper.
Granger, C., Newbold, P., 1976. Forecasting transformed series. Journal of the Royal Statistical Society B 38, 189–203.
Granger, C., Pesaran, M., 1996. A decision theoretic approach to forecast evaluation. University of California, San Diego, Discussion Paper.
Hansen, L., Scheinkman, J., Touzi, N., 1998. Spectral methods for identifying scalar diffusions. Journal of Econometrics 86, 1–32.
Lancaster, H., 1968. The structure of bivariate distributions. Annals of Mathematical Statistics 29, 719–736.
Wiener, N., 1958. Nonlinear Problems in Random Theory. MIT Press, Cambridge.
