4.4 The Spectral Density of an ARMA Process
4.4.1 Rational Spectral Density Estimation
An alternative to the spectral density estimator of Definition 4.2.2 is the estimator obtained by fitting an ARMA model to the data and then computing the spectral density of the fitted model. The spectral density shown in Figure 4-14 can be regarded as such an estimate, obtained by fitting an AR(2) model to the mean-corrected sunspot data.
Provided that there is an ARMA model that fits the data satisfactorily, this procedure has the advantage that it can be made systematic by selecting the model according (for example) to the AICC criterion (see Section 5.5.2). For further information see Brockwell and Davis (1991), Section 10.6.
Problems
4.1 Show that
$$\int_{-\pi}^{\pi} e^{i(k-h)\lambda}\,d\lambda =
\begin{cases}
2\pi, & \text{if } k = h,\\
0, & \text{otherwise.}
\end{cases}$$
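As a quick numerical check of this orthogonality relation, the integral can be approximated by a Riemann sum over a uniform grid (a sketch in Python; the function name is illustrative, and for integer k − h the sum is exact up to roundoff):

```python
import numpy as np

def orthogonality_integral(k, h, n=100_000):
    """Riemann-sum approximation of the integral of e^{i(k-h)λ}
    over [-π, π]; exact up to roundoff for integer k - h."""
    lam = np.linspace(-np.pi, np.pi, n, endpoint=False)
    return np.sum(np.exp(1j * (k - h) * lam)) * (2 * np.pi / n)

print(abs(orthogonality_integral(3, 3)))  # ≈ 2π
print(abs(orthogonality_integral(3, 1)))  # ≈ 0
```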
4.2 If {Zt} ∼ WN(0, σ²), apply Corollary 4.1.1 to compute the spectral density of {Zt}.
4.3 Show that the vectors e1, . . . , en are orthonormal in the sense of (4.2.3).
4.4 Use Corollary 4.1.1 to establish whether or not the following function is the autocovariance function of a stationary process {Xt}:
$$\gamma(h) =
\begin{cases}
1, & \text{if } h = 0,\\
-0.5, & \text{if } h = \pm 2,\\
-0.25, & \text{if } h = \pm 3,\\
0, & \text{otherwise.}
\end{cases}$$
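Corollary 4.1.1 reduces the question to checking whether the candidate spectral density f(λ) = (2π)⁻¹ Σ_h γ(h)e^{−ihλ} is nonnegative on [−π, π]. A short numerical sketch of that check (grid size and variable names are illustrative):

```python
import numpy as np

# Candidate autocovariances from Problem 4.4 (γ is symmetric in h).
gamma = {0: 1.0, 2: -0.5, -2: -0.5, 3: -0.25, -3: -0.25}

lam = np.linspace(-np.pi, np.pi, 2001)
f = np.zeros_like(lam)
for h, g in gamma.items():
    f += g * np.cos(h * lam)   # real γ: the e^{-ihλ} terms pair into cosines
f /= 2 * np.pi

# Inspect the minimum of the candidate density; a negative value
# means γ fails the nonnegativity criterion of Corollary 4.1.1.
print(f.min())
```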
4.5 If {Xt} and {Yt} are uncorrelated stationary processes with autocovariance functions γX(·) and γY(·) and spectral distribution functions FX(·) and FY(·), respectively, show that the process {Zt = Xt + Yt} is stationary with autocovariance function γZ = γX + γY and spectral distribution function FZ = FX + FY.

4.6 Let {Xt} be the process defined by
$$X_t = A\cos(\pi t/3) + B\sin(\pi t/3) + Y_t,$$
where Yt = Zt + 2.5 Zt−1, {Zt} ∼ WN(0, σ²), A and B are uncorrelated with mean 0 and variance ν², and Zt is uncorrelated with A and B for each t. Find the autocovariance function and spectral distribution function of {Xt}.
4.7 Let {Xt} denote the sunspot series filed as SUNSPOTS.TSM and let {Yt} denote the mean-corrected series Yt = Xt − 46.93, t = 1, . . . , 100. Use ITSM to find the Yule–Walker AR(2) model
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + Z_t, \qquad \{Z_t\} \sim \mathrm{WN}(0, \sigma^2),$$
i.e., find φ1, φ2, and σ². Use ITSM to plot the spectral density of the fitted model and find the frequency at which it achieves its maximum value. What is the corresponding period?
4.8 (a) Use ITSM to compute and plot the spectral density of the stationary series {Xt} satisfying
$$X_t - 0.99X_{t-3} = Z_t, \qquad \{Z_t\} \sim \mathrm{WN}(0, 1).$$
(b) Does the spectral density suggest that the sample paths of {Xt} will exhibit approximately oscillatory behavior? If so, then with what period?
(c) Use ITSM to simulate a realization of X1, . . . , X60 and plot the realization.
Does the graph of the realization support the conclusion of part (b)? Save the generated series as X.TSM by clicking on the window displaying the graph, then on the red EXP button near the top of the screen. Select Time Series and File in the resulting dialog box and click OK. You will then be asked to provide the file name, X.TSM.
(d) Compute the spectral density of the filtered process
$$Y_t = \tfrac{1}{3}(X_{t-1} + X_t + X_{t+1})$$
and compare the numerical values of the spectral densities of {Xt} and {Yt} at frequency ω = 2π/3 radians per unit time. What effect would you expect the filter to have on the oscillations of {Xt}?
(e) Open the project X.TSM and use the option Smooth>Moving Ave. to apply the filter of part (d) to the realization generated in part (c). Comment on the result.
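Parts (a) and (d) of Problem 4.8 can also be sketched numerically outside ITSM, using the standard rational form of an AR spectral density, f(λ) = σ²/(2π)·|φ(e^{−iλ})|⁻², together with the power transfer function of the moving-average filter (grid size and variable names are illustrative):

```python
import numpy as np

lam = np.linspace(0, np.pi, 10_001)

# Spectral density of X_t - 0.99 X_{t-3} = Z_t with σ² = 1
f_X = 1.0 / (2 * np.pi) / np.abs(1 - 0.99 * np.exp(-3j * lam)) ** 2

# The density also peaks at λ = 0; search the interior for the oscillatory peak
interior = lam > 0.5
peak = lam[interior][np.argmax(f_X[interior])]
print(peak, 2 * np.pi / 3)            # interior peak close to 2π/3 (period 3)

# Squared transfer function of Y_t = (X_{t-1} + X_t + X_{t+1})/3
H2 = np.abs((1 + 2 * np.cos(lam)) / 3) ** 2
print(H2[np.argmin(np.abs(lam - 2 * np.pi / 3))])  # ≈ 0 at ω = 2π/3
```

Since the squared transfer function vanishes at ω = 2π/3, the filter essentially removes the dominant oscillation of {Xt}.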
4.9 The spectral density of a real-valued time series {Xt} is defined on [0, π] by
$$f(\lambda) =
\begin{cases}
100, & \text{if } \pi/6 - 0.01 < \lambda < \pi/6 + 0.01,\\
0, & \text{otherwise,}
\end{cases}$$
and on [−π, 0] by f(λ) = f(−λ).
(a) Evaluate the ACVF of {Xt} at lags 0 and 1.
(b) Find the spectral density of the process {Yt} defined by
$$Y_t := \nabla_{12} X_t = X_t - X_{t-12}.$$
(c) What is the variance of Yt?
(d) Sketch the power transfer function of the filter ∇12 and use the sketch to explain the effect of the filter on sinusoids with frequencies (i) near zero and (ii) near π/6.
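For part (a), γ(h) = ∫_{−π}^{π} e^{ihλ} f(λ) dλ, which by the symmetry f(λ) = f(−λ) reduces to an integral of 200 cos(hλ) over the narrow band around π/6. A numerical sketch (the function name is illustrative):

```python
import numpy as np

def gamma(h, n=100_001):
    """γ(h) = ∫_{-π}^{π} e^{ihλ} f(λ) dλ for the band-limited density above:
    by symmetry, 2 · 100 · ∫ cos(hλ) dλ over (π/6 - 0.01, π/6 + 0.01)."""
    lam = np.linspace(np.pi / 6 - 0.01, np.pi / 6 + 0.01, n)
    return 200.0 * np.mean(np.cos(h * lam)) * 0.02

print(gamma(0))   # 4.0, the total spectral mass, i.e. the variance of X_t
print(gamma(1))   # ≈ 4 cos(π/6) ≈ 3.46
```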
4.10 Suppose that {Xt} is the noncausal and noninvertible ARMA(1, 1) process satisfying
$$X_t - \phi X_{t-1} = Z_t + \theta Z_{t-1}, \qquad \{Z_t\} \sim \mathrm{WN}(0, \sigma^2),$$
where |φ| > 1 and |θ| > 1. Define φ̃(B) = 1 − φ⁻¹B and θ̃(B) = 1 + θ⁻¹B, and let {Wt} be the process given by
$$W_t := \tilde{\theta}^{-1}(B)\,\tilde{\phi}(B)\,X_t.$$
(a) Show that {Wt} has a constant spectral density function.
(b) Conclude that {Wt} ∼ WN(0, σw²). Give an explicit formula for σw² in terms of φ, θ, and σ².
(c) Deduce that φ̃(B)Xt = θ̃(B)Wt, so that {Xt} is a causal and invertible ARMA(1, 1) process relative to the white noise sequence {Wt}.
5 Modeling and Forecasting with ARMA Processes
5.1 Preliminary Estimation
5.2 Maximum Likelihood Estimation
5.3 Diagnostic Checking
5.4 Forecasting
5.5 Order Selection
The determination of an appropriate ARMA(p, q) model to represent an observed stationary time series involves a number of interrelated problems. These include the choice of p and q (order selection) and estimation of the mean, the coefficients {φi, i = 1, . . . , p}, {θi, i = 1, . . . , q}, and the white noise variance σ². Final selection of the model depends on a variety of goodness of fit tests, although it can be systematized to a large degree by use of criteria such as minimization of the AICC statistic as discussed in Section 5.5. (A useful option in the program ITSM is Model>Estimation>Autofit, which automatically minimizes the AICC statistic over all ARMA(p, q) processes with p and q in a specified range.)
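The AICC statistic referred to above is the bias-corrected form of AIC, AICC = −2 ln L + 2(p + q + 1)n/(n − p − q − 2), where L is the likelihood and n the sample size (see Section 5.5.2). As a small sketch (the function name is illustrative):

```python
def aicc(loglik, n, p, q):
    """AICC = -2 ln L + 2(p+q+1)n/(n-p-q-2) for an ARMA(p, q) model
    fitted to n observations; smaller values are preferred."""
    k = p + q + 1                      # AR and MA coefficients plus σ²
    return -2.0 * loglik + 2.0 * k * n / (n - k - 1)

# With equal likelihoods, the penalty term favors the smaller model:
print(aicc(-150.0, 100, 1, 0) < aicc(-150.0, 100, 2, 2))  # True
```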
This chapter is primarily devoted to the problem of estimating the parameters φ = (φ1, . . . , φp), θ = (θ1, . . . , θq), and σ² when p and q are assumed to be known, but the crucial issue of order selection is also considered. It will be assumed throughout (unless the mean is believed a priori to be zero) that the data have been “mean-corrected” by subtraction of the sample mean, so that it is appropriate to fit a zero-mean ARMA model to the adjusted data x1, . . . , xn. If the model fitted to the mean-corrected data is
$$\phi(B)X_t = \theta(B)Z_t, \qquad \{Z_t\} \sim \mathrm{WN}(0, \sigma^2),$$
then the corresponding model for the original stationary series {Yt} is found on replacing Xt for each t by Yt − ȳ, where ȳ = n⁻¹ Σ_{j=1}^{n} yj is the sample mean of the original data, treated as a fixed constant.
When p and q are known, good estimators of φ and θ can be found by imagining the data to be observations of a stationary Gaussian time series and maximizing the likelihood with respect to the p + q + 1 parameters φ1, . . . , φp, θ1, . . . , θq, and σ². The estimators obtained by this procedure are known as maximum likelihood (or maximum Gaussian likelihood) estimators. Maximum likelihood estimation is discussed in Section 5.2 and can be carried out in practice using the ITSM option Model>Estimation>Max likelihood, after first specifying a preliminary model to initialize the maximization algorithm. Maximization of the likelihood and selection of the minimum AICC model over a specified range of p and q values can also be carried out using the option Model>Estimation>Autofit.

© Springer International Publishing Switzerland 2016. P.J. Brockwell, R.A. Davis, Introduction to Time Series and Forecasting, Springer Texts in Statistics, DOI 10.1007/978-3-319-29854-2_5
The maximization is nonlinear in the sense that the function to be maximized is not a quadratic function of the unknown parameters, so the estimators cannot be found by solving a system of linear equations. They are found instead by searching numerically for the maximum of the likelihood surface. The algorithm used in ITSM requires the specification of initial parameter values with which to begin the search. The closer the preliminary estimates are to the maximum likelihood estimates, the faster the search will generally be.
To provide these initial values, a number of preliminary estimation algorithms are available in the option Model>Estimation>Preliminary of ITSM. They are described in Section 5.1. For pure autoregressive models the choice is between Yule–Walker and Burg estimation, while for models with q > 0 it is between the innovations and Hannan–Rissanen algorithms. It is also possible to begin the search with an arbitrary causal ARMA model by using the option Model>Specify and entering the desired parameter values. The initial values are chosen automatically in the option Model>Estimation>Autofit.
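As an illustration of the Yule–Walker step for a pure AR(p) model (a sketch under standard assumptions, not ITSM's implementation; function and variable names are illustrative): form the sample autocovariances, solve the Toeplitz system Γp φ = γp, and set σ̂² = γ̂(0) − φ̂′γp.

```python
import numpy as np

def yule_walker_ar(x, p):
    """Yule-Walker estimates for a mean-corrected AR(p) series x:
    solve the p x p Toeplitz system built from sample autocovariances."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    gam = np.array([x[:n - h] @ x[h:] / n for h in range(p + 1)])
    Gamma = np.array([[gam[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(Gamma, gam[1:])
    sigma2 = gam[0] - phi @ gam[1:]
    return phi, sigma2

# Check on simulated data from a causal AR(2) (not the sunspot series):
rng = np.random.default_rng(0)
z = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 1.3 * x[t - 1] - 0.7 * x[t - 2] + z[t]
phi_hat, sigma2_hat = yule_walker_ar(x, 2)
print(phi_hat, sigma2_hat)   # close to (1.3, -0.7) and 1.0
```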
Calculation of the exact Gaussian likelihood for an ARMA model (and in fact for any second-order model) is greatly simplified by use of the innovations algorithm. In Section 5.2 we take advantage of this simplification in discussing maximum likelihood estimation and consider also the construction of confidence intervals for the estimated coefficients.

Section 5.3 deals with goodness of fit tests for the chosen model and Section 5.4 with the use of the fitted model for forecasting. In Section 5.5 we discuss the theoretical basis for some of the criteria used for order selection.

For an overview of the general strategy for model-fitting see Section 6.2.