
Peter J. Brockwell and Richard A. Davis, Introduction to Time Series and Forecasting, Third Edition

The package itsmr (Weigt, 2015) can be used in R to reproduce many of the features of ITSM2000. Some of the basic tools necessary for an understanding of continuous-time financial time series models (Brownian motion, Lévy processes, and Itô calculus) have also been added.
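
For readers working in R, a minimal sketch of loading the package and fitting a model (assuming the CRAN package itsmr with its functions plotc and arma and its bundled wine dataset, as documented by Weigt (2015)):

    # install.packages("itsmr")     # companion package to this book
    library(itsmr)
    plotc(wine)                     # plot the red wine sales series
    a <- arma(wine, p = 1, q = 1)   # fit an ARMA(1,1) by maximum likelihood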

Examples of Time Series

If we do this with the values of {Xt} shown in Figure 1-4 and retain only the lowest 3.5% of the frequency components, we obtain the signal estimate shown as the dashed red line in Figure 1-4. The waveform of the estimate is quite close to that of the true signal, although its amplitude is somewhat smaller.
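
The same frequency-domain smoothing can be sketched in base R with fft(); the data below are illustrative, and the 3.5% cutoff follows the text:

    set.seed(1)
    n  <- 200
    x  <- sin(2 * pi * (1:n) / 50) + rnorm(n, sd = 0.5)  # signal plus noise
    fx <- fft(x)
    keep <- round(0.035 * n)                  # lowest 3.5% of the frequencies
    fx[(keep + 2):(n - keep)] <- 0            # zero out the high frequencies
    xhat <- Re(fft(fx, inverse = TRUE)) / n   # smoothed signal estimate
    plot.ts(x); lines(xhat, col = "red", lty = 2)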

Objectives of Time Series Analysis

Some Simple Time Series Models

Some Zero-Mean Models Example iid Noise

The time series obtained by repeatedly tossing a penny and recording +1 for each head and -1 for each tail is usually modeled as a realization of this process. A random walk (starting at zero) is obtained by cumulatively summing (or "integrating") iid random variables.
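
A realization of the binary process and the corresponding random walk can be simulated in a few lines of R:

    set.seed(42)
    z <- sample(c(-1, 1), size = 200, replace = TRUE)  # penny tosses
    s <- c(0, cumsum(z))                               # random walk starting at zero
    plot.ts(s)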

Models with Trend and Seasonality

The estimated values of the noise process Yt, 1 ≤ t ≤ 21, are the residuals obtained by subtracting m̂t = â0 + â1t + â2t² from xt. Smoothness of the graph of a time series is generally indicative of the existence of some form of dependence among the observations.
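
A minimal R sketch of the quadratic-trend fit and its residuals (the 21-observation series x here is illustrative):

    t <- 1:21
    x <- 5 + 0.4 * t + 0.02 * t^2 + rnorm(21)   # illustrative data
    fit  <- lm(x ~ t + I(t^2))                  # least-squares a0, a1, a2
    yhat <- resid(fit)                          # residuals Yt = xt - mhat_t
    plot(t, yhat, type = "o")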

A General Approach to Time Series Modeling

Stationary Models and the Autocorrelation Function

The Sample Autocorrelation Function

This estimate may suggest which of the many possible stationary time series models is a suitable candidate for representing the dependence in the data. The use of the divisor n (rather than n − h) ensures that the sample covariance matrix Γ̂n := [γ̂(i − j)], i, j = 1, ..., n, is nonnegative definite (see Section 2.4.2).
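
In base R, acf() computes exactly these quantities, using the divisor n and drawing the ±1.96/√n bounds by default:

    x <- rnorm(100)                            # illustrative data
    acf(x, lag.max = 40, type = "covariance")  # sample ACVF gamma-hat(h)
    acf(x, lag.max = 40)                       # sample ACF rho-hat(h) with bounds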

A Model for the Lake Huron Data

For data containing a trend, |ρ̂(h)| will exhibit slow decay as h increases, and for data with a substantial deterministic periodic component, |ρ̂(h)| will exhibit similar behavior with the same periodicity. (See the sample ACF of the Australian red wine sales in Figure 1-14 and Problem 1.9.) Thus ρ̂(·) can be useful as an indicator of nonstationarity (see also Section 6.1). The improved fit is indicated by the sample ACF of the estimated residuals, yt − φ̂1 yt−1 − φ̂2 yt−2, which falls well within the bounds ±1.96/√n.
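
A base-R version of the Lake Huron fit, using the built-in LakeHuron series (the AR(2) model for the detrended series follows the text; the trend removal shown here is a simple linear regression):

    y   <- resid(lm(LakeHuron ~ time(LakeHuron)))      # detrended levels
    fit <- arima(y, order = c(2, 0, 0), include.mean = FALSE)
    acf(resid(fit))   # should stay within +/- 1.96/sqrt(n) if the fit is adequate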

Estimation and Elimination of Trend and Seasonal Components

Estimation and Elimination of Trend in the Absence of Seasonality

In the absence of a seasonal component the model becomes Xt = mt + Yt, t = 1, ..., n, where EYt = 0.

The study of the residuals (or of the differenced series) is taken up in Section 1.6. (a) Smoothing with a finite moving average filter. In the same way, any polynomial trend of degree k can be reduced to a constant by applying the operator ∇k (Problem 1.10).
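
Both operations are one-liners in R: filter() implements a finite moving-average filter, and diff() implements ∇k:

    t <- 1:30
    x <- 2 + 3 * t + 0.5 * t^2              # quadratic trend (k = 2), no noise
    diff(x, differences = 2)                # constant series: nabla^2 x = 1
    w <- filter(x, rep(1/5, 5), sides = 2)  # two-sided moving average, q = 2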

Estimation and Elimination of Both Trend and Seasonality

The seasonally adjusted data are then defined to be the original series with the estimated seasonal component removed, i.e., dt = xt − ŝt. Finally, we reestimate the trend from the deseasonalized data {dt} using one of the methods already described. The plot of the seasonally adjusted data suggests the presence of an additional quadratic trend function.
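
A sketch of the same steps with base R's decompose(), which estimates a moving-average trend and a monthly seasonal component (AirPassengers is used here purely as a stand-in monthly series):

    d   <- decompose(AirPassengers)        # additive decomposition
    dt  <- AirPassengers - d$seasonal      # seasonally adjusted data
    tt  <- as.numeric(time(dt))
    fit <- lm(dt ~ tt + I(tt^2))           # re-estimate a quadratic trend
    plot(dt); lines(tt, fitted(fit), col = "red")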

Testing the Estimated Noise Sequence

Thus, based on the value of S, there is insufficient evidence to reject the iid hypothesis at level 0.05. The sample value of the rank statistic P is 10,310, and its asymptotic distribution under the iid hypothesis (with n = 200) is N(n(n − 1)/4, n(n − 1)(2n + 5)/72).
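
The turning point test is easy to implement directly; under the iid hypothesis the number of turning points S is approximately normal with mean 2(n − 2)/3 and variance (16n − 29)/90:

    turning.point.test <- function(y) {
      n    <- length(y)
      S    <- sum(diff(sign(diff(y))) != 0)   # number of turning points
      mu   <- 2 * (n - 2) / 3
      sig2 <- (16 * n - 29) / 90
      z    <- (S - mu) / sqrt(sig2)
      c(S = S, z = z, p.value = 2 * pnorm(-abs(z)))
    }
    turning.point.test(rnorm(200))   # large p-value: no evidence against iid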

Problems

Basic Properties

For example, the function κ(h) = cos(ωh) is nonnegative definite, since (see Problem 2.2) it is the ACVF of the stationary process Xt = A cos(ωt) + B sin(ωt), where A and B are uncorrelated random variables with mean 0 and variance 1. One of the simplest ways to construct a time series {Xt} that is strictly stationary (and hence stationary if EXt² < ∞) is to "filter" an iid sequence of random variables.

Linear Processes

This is the content of the following theorem, the proof of which can be found in Brockwell and Davis (1991), Section 3.2. Theorem 2.1.1: If {Xt} is a stationary q-correlated time series with mean 0, then it can be represented as the MA(q) process in (2.1.7).

Introduction to ARMA Processes

The ARMA(1,1) process is therefore invertible, since Zt can be expressed in terms of the current and past values of the process Xs, s ≤ t. If the ARMA(1,1) process {Xt} is noncausal or noninvertible with |θ| > 1, then it is possible to find a new white noise sequence {Wt} such that {Xt} is a causal and invertible ARMA(1,1) process relative to {Wt} (Problem 4.10).
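
Causality and invertibility are easily checked numerically with polyroot(); here for an ARMA(1,1) with hypothetical coefficients φ = 0.5 and θ = 0.4:

    phi <- 0.5; theta <- 0.4
    Mod(polyroot(c(1, -phi)))    # 2 > 1, so phi(z) = 1 - 0.5z gives causality
    Mod(polyroot(c(1, theta)))   # 2.5 > 1, so theta(z) = 1 + 0.4z gives invertibility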

Properties of the Sample Mean and Autocorrelation Function

The sample ACF plays an important role in the selection of appropriate models for the data. This result is the basis for the sample-ACF test for iid noise described in Section 1.6.

Forecasting Stationary Time Series

  • Prediction of Second-Order Random Variables
  • The Durbin–Levinson Algorithm
  • The Innovations Algorithm
  • Prediction of a Stationary Process in Terms of Infinitely Many Past Values

From (2.5.8) the expected value of the prediction error Xn+h − PnXn+h is zero, and the mean squared prediction error follows directly. Then, by the same arguments used in the calculation of PnXn+h, the best linear predictor of Y in terms of {1, Wn, ...} can be found. Similarly, the best linear predictor of Xn+h in terms of 1, Xm, ..., Xn (m < 0) can be evaluated.
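
A compact implementation of the Durbin–Levinson recursion, computing the coefficients φn1, ..., φnn of PnXn+1 and the mean squared error vn from a vector of autocovariances γ(0), ..., γ(n):

    durbin.levinson <- function(gamma) {
      n <- length(gamma) - 1
      phi <- numeric(n); v <- gamma[1]
      for (k in 1:n) {
        # phi_kk = (gamma(k) - sum_j phi_{k-1,j} gamma(k-j)) / v_{k-1}
        a <- (gamma[k + 1] - sum(phi[seq_len(k - 1)] * gamma[k:2])) / v
        phi[seq_len(k - 1)] <- phi[seq_len(k - 1)] - a * rev(phi[seq_len(k - 1)])
        phi[k] <- a
        v <- v * (1 - a^2)                  # v_k = v_{k-1}(1 - phi_kk^2)
      }
      list(phi = phi, v = v)
    }
    # AR(1) check: phi = 0.8, sigma^2 = 1, so gamma(h) = 0.8^h / (1 - 0.64)
    durbin.levinson(0.8^(0:5) / (1 - 0.64))   # phi = (0.8, 0, ..., 0), v = 1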

The Wold Decomposition

Open the SUNSPOTS.TSM project and click on the second yellow button at the top of the screen to view the graphs. Click OK and you will obtain a model template for the mean-corrected series Xt = Dt − 46.93. Compare the model and sample ACF and PACF by selecting the third yellow button at the top of the screen.
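
The same comparison can be made in base R with the built-in annual sunspot numbers (assuming, as in the book's SUNSPOTS.TSM project, the years 1770–1869, for which the mean is approximately 46.93):

    x   <- window(sunspot.year, 1770, 1869)
    x   <- x - mean(x)                           # mean-corrected series
    fit <- arima(x, order = c(2, 0, 0), include.mean = FALSE)
    acf(x); pacf(x)                              # sample ACF and PACF
    ARMAacf(ar = fit$coef[1:2], lag.max = 20)    # model ACF for comparison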

ARMA( p, q) Processes

A process {Xt} is said to be an ARMA(p,q) process with mean μ if {Xt − μ} is an ARMA(p,q) process. The time series {Xt} is said to be an autoregressive process of order p (or AR(p)) if θ(z) ≡ 1, and a moving-average process of order q (or MA(q)) if φ(z) ≡ 1. For the general ARMA(p,q) process, the causality condition is that φ(z) ≠ 0 for |z| ≤ 1, which means that all zeros of the autoregressive polynomial must be greater than 1 in absolute value.
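
The causality condition can be verified numerically: all zeros of φ(z) must lie outside the unit circle. For a hypothetical AR(2):

    phi <- c(1.0, -0.5)                   # coefficients of an AR(2)
    all(Mod(polyroot(c(1, -phi))) > 1)    # TRUE, so the process is causal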

The ACF and PACF of an ARMA( p, q ) Process

  • Calculation of the ACVF
  • The Autocorrelation Function
  • The Partial Autocorrelation Function
  • Examples

Thus, the ACVF of the MA(q) process has the characteristic property of vanishing at lags greater than q. This means that approximately 95% of the sample PACF values beyond lag p should fall within the bounds ±1.96/√n. For the time series of overshorts, the sample ACF of the data leads us to the MA(1) model.
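
ARMAacf() computes the model ACF and PACF exactly, making both characteristic cut-offs visible:

    ARMAacf(ma = c(0.6, -0.3), lag.max = 6)               # MA(2): zero beyond lag 2
    ARMAacf(ar = c(1.0, -0.5), lag.max = 6, pacf = TRUE)  # AR(2): PACF zero beyond lag 2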

Forecasting ARMA Processes

This is a consequence of the innovations algorithm and the fact that κ(r,s) = 0 if min(r,s) > m and |r − s| > q. Notice how the predictors converge fairly quickly to the process mean (i.e., zero) as the lead time h increases. Use the fitted model to calculate the best predictor of the number of strikes in 1981.
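
The convergence of the predictors to the mean is easy to see with simulated data and predict():

    set.seed(7)
    x   <- arima.sim(model = list(ar = 0.5, ma = 0.4), n = 200)
    fit <- arima(x, order = c(1, 0, 1), include.mean = FALSE)
    predict(fit, n.ahead = 10)$pred   # forecasts decay toward zero as h grows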

Table 3.1 X̂n+1 for the ARMA(2,3) Process of Example 3.3.4

Spectral Densities

Proposition 4.1.1: A real-valued function f defined on (−π, π] is the spectral density of a real-valued stationary process if and only if (i) f(λ) = f(−λ), (ii) f(λ) ≥ 0, and (iii) ∫ f(λ) dλ < ∞. Applying Proposition 4.1.1 (the conditions are easily checked), we conclude that f is the spectral density of a stationary process with the specified autocovariance function. A process with a constant spectral density is called white noise, since each frequency in the spectrum contributes equally to the variance of the process.

The Periodogram

The number of periodogram ordinates over which the average is computed is approximately 2√n, and the width of the frequency interval over which the average is taken is approximately 4π/√n. Conditions (4.2.13) simply mean that the number of terms in the weighted average (4.2.12) goes to ∞ as n → ∞, while the width of the frequency interval over which the average is taken simultaneously shrinks to zero. In ITSM, the smoothing can be modified by changing the number of Daniell filters to 2 and setting their orders accordingly.
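
In base R, spec.pgram() computes the raw periodogram and, through the spans argument, discrete averages with modified Daniell weights; the spans below are illustrative choices for the 100-observation sunspot series:

    x <- window(sunspot.year, 1770, 1869)
    spec.pgram(x, taper = 0, log = "no")                   # raw periodogram
    spec.pgram(x, spans = c(5, 5), taper = 0, log = "no")  # two Daniell passes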

Figure 4-9 displays a plot of 1/(2π) times the periodogram of the annual sunspot numbers (obtained by opening the project SUNSPOTS.TSM in ITSM and selecting Spectrum>Periodogram)

Time-Invariant Linear Filters

The following proposition shows how the spectral density of the output of a TLF is related to the spectral density of the input, a fundamental result in the study of time-invariant linear filters. For example, if the input process {Xt} with spectral density fX is passed successively through two absolutely summable TLFs Ψ1 and Ψ2, then the net effect is the same as that of a single TLF with transfer function Ψ1(e−iλ)Ψ2(e−iλ), and the spectral density of the output process is |Ψ1(e−iλ)|²|Ψ2(e−iλ)|² fX(λ). In Figure 4-13, the transfer function of the ideal low-pass filter with ωc = π/4 is plotted together with its approximations of order q.
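
The relation fY(λ) = |Ψ(e−iλ)|² fX(λ) can be evaluated directly; here for a simple moving-average filter ψj = 1/5, j = −2, ..., 2, applied to unit-variance white noise:

    lambda <- seq(0, pi, length.out = 200)
    psi <- sapply(lambda, function(l) sum(rep(1/5, 5) * exp(-1i * l * (-2:2))))
    fY  <- Mod(psi)^2 / (2 * pi)        # input density sigma^2/(2 pi), sigma^2 = 1
    plot(lambda, fY, type = "l")        # power concentrated at low frequencies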

The Spectral Density of an ARMA Process

  • Rational Spectral Density Estimation

An alternative to the spectral density estimator of Definition 4.2.2 is the estimator obtained by fitting an ARMA model to the data and then computing the spectral density of the fitted model. Use ITSM to plot the spectral density of the fitted model and find the frequency at which it attains its maximum value. 4.8 (a) Use ITSM to compute and plot the spectral density of the stationary series {Xt}.
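
A base-R version of rational spectral density estimation: fit an AR(2) to the mean-corrected sunspot series and evaluate σ²/(2π|φ(e−iλ)|²):

    x   <- window(sunspot.year, 1770, 1869)
    fit <- arima(x - mean(x), order = c(2, 0, 0), include.mean = FALSE)
    phi <- fit$coef[1:2]; sig2 <- fit$sigma2
    lambda <- seq(0, pi, length.out = 200)
    f <- sapply(lambda, function(l)
      sig2 / (2 * pi * Mod(1 - sum(phi * exp(-1i * l * (1:2))))^2))
    plot(lambda, f, type = "l")   # peaks near the sunspot-cycle frequency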

Figure 4-14 shows the spectral density, found from the Spectrum>Model option of ITSM, for the model (3.2.20) fitted to the mean-corrected sunspot series

Preliminary Estimation

  • Yule–Walker Estimation
  • Burg’s Algorithm
  • The Innovations Algorithm
  • The Hannan–Rissanen Algorithm

The Burg estimate of the white noise variance is the minimum value σ(B)p² found in the determination of φ(B)pp. The large-sample distribution of the Burg estimators of the coefficients of an AR(p) process is the same as that of the Yule–Walker estimators, namely N(φ, n⁻¹σ²Γp⁻¹). The fitting of AR models using Burg's algorithm in the program ITSM is completely analogous to the use of the Yule–Walker equations.
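
Base R provides both estimators, so the analogy is easy to check numerically:

    x <- window(sunspot.year, 1770, 1869)
    ar.yw(x, order.max = 2, aic = FALSE)$ar     # Yule-Walker estimates
    ar.burg(x, order.max = 2, aic = FALSE)$ar   # Burg estimates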

Maximum Likelihood Estimation

The rationale for using maximum Gaussian likelihood estimators of the ARMA coefficients is that the large-sample distribution of the estimators is the same for {Zt} ∼ IID(0, σ²), whether or not {Zt} is Gaussian. In Section 5.1 we introduced minimization of the AICC value as the major criterion for the selection of the orders p and q. For any fixed p and q, it is clear that the AICC is minimized when φp and θq are the vectors that minimize −2 ln L(φp, θq, S(φp, θq)/n), i.e., the maximum likelihood estimators.
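
arima() with method = "ML" maximizes the Gaussian likelihood; the AICC statistic, AICC = −2 ln L + 2(p + q + 1)n/(n − p − q − 2), is not returned directly but is easily computed from the fit:

    x   <- window(sunspot.year, 1770, 1869); x <- x - mean(x)
    fit <- arima(x, order = c(2, 0, 0), include.mean = FALSE, method = "ML")
    n <- length(x); k <- 2 + 0 + 1              # p + q + 1
    -2 * fit$loglik + 2 * k * n / (n - k - 1)   # AICC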

Diagnostic Checking

  • The Sample ACF of the Residuals
  • Tests for Randomness of the Residuals

The following diagnostic checks are all based on the expected properties of the residuals or rescaled residuals under the assumption that the fitted model is correct and that {Zt} ∼ IID(0, σ²). If the fitted model is appropriate, then the plot of the rescaled residuals R̂t, t = 1, ..., n, should resemble that of a white noise series with variance one. To correct for this, the bounds ±1.96/√n must be modified to give a more accurate test, as in Box and Pierce (1970) and Brockwell and Davis (1991), Section 9.4. The sample ACF and PACF of the residuals and the bounds ±1.96/√n can be viewed by pressing the second green button (Plot ACF/PACF of residuals) at the top of the ITSM window.
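
The same residual checks in base R: the residual ACF with its ±1.96/√n bounds, and a Ljung–Box portmanteau test with degrees of freedom reduced by the number of fitted coefficients:

    fit <- arima(window(sunspot.year, 1770, 1869), order = c(2, 0, 0))
    acf(resid(fit))
    Box.test(resid(fit), lag = 20, type = "Ljung-Box", fitdf = 2)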

Forecasting

Assuming that the innovations {Zt} are normally distributed, an approximate 95% prediction interval for X64 is given by the predicted value plus or minus 1.96 times the estimated standard deviation of the prediction error. The mean squared errors of prediction, as computed in Section 3.3 and the example above, are based on the assumption that the fitted model is in fact the true model for the data. If the fitted model is an AR(1) with observations X1, ..., Xn, then the one-step-ahead predictor of Xn+1 is φ̂Xn, which has the stated mean squared error.
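
With predict(), an approximate 95% prediction interval is the point forecast plus or minus 1.96 standard errors (illustrated here on the built-in LakeHuron series rather than the series of the example):

    fit <- arima(LakeHuron, order = c(2, 0, 0))
    p   <- predict(fit, n.ahead = 1)
    p$pred + c(-1, 1) * 1.96 * p$se   # lower and upper 95% bounds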

Table 5.1 Forecasts of the next seven observations of the overshort data of Example 3.2.8 using model (5.4.1)

Order Selection

  • The FPE Criterion
  • The AICC Criterion

The table also shows the values of the maximum likelihood estimates of σ² for the same values of p. The AICC was designed to be an approximately unbiased estimate of the Kullback–Leibler index of the fitted model relative to the true model (defined below). Assuming that the data are generated by an AR(2) model, derive the estimated PACF for all lags ≥ 1.
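
The FPE criterion, FPEp = σ̂p²(n + p)/(n − p), can be tabulated over candidate orders; in this sketch the Yule–Walker variance estimate stands in for the maximum likelihood estimate of σ²:

    x <- LakeHuron; n <- length(x)
    sapply(1:5, function(p) {
      s2 <- ar.yw(x, order.max = p, aic = FALSE)$var.pred
      s2 * (n + p) / (n - p)       # FPE_p; choose p minimizing this
    })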

Table 5.2 σ̂p² and FPEp for AR(p) models fitted to the lake data

ARIMA Models for Nonstationary Time Series

A distinguishing feature of data for which an ARIMA model is appropriate is the slowly decaying, positive sample autocorrelation function seen in Figure 6-2. Instead of differencing the series in Figure 6-1, we could proceed more directly by attempting to fit an AR(2) process, as suggested by the sample PACF of the original series in Figure 6-3. If we were simply given the data shown in Figure 6-7, with no indication of the model that generated it, the slowly damped sinusoidal sample ACF with period 6 would suggest trying to make the sample ACF decay more rapidly by applying the operator (6.1.7) with r = 1 and ω = π/3.
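
The diagnostic is easy to reproduce: an integrated series has a slowly decaying sample ACF that differencing removes:

    set.seed(3)
    x <- cumsum(arima.sim(model = list(ar = 0.8), n = 200))  # integrated AR(1)
    acf(x)         # slow, roughly linear decay: difference the series
    acf(diff(x))   # rapid decay, consistent with a stationary ARMA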

Identification Techniques

Trend and seasonality are usually detected by inspecting a graph of the (possibly transformed) series. We therefore minimize one of the model selection criteria discussed in Section 5.5 in order to choose the values of p and q. {X1, ..., X130} denotes the series obtained from the red wine data of Example 1.1.1 after taking natural logarithms, differencing at lag 12, and subtracting the mean (0.0681) of the differences.
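
The transformation pipeline in R (AirPassengers serves as a stand-in monthly series; the red wine data itself ships with the itsmr package):

    y <- diff(log(AirPassengers), lag = 12)   # logs, then lag-12 differencing
    y <- y - mean(y)                          # subtract the mean of the differences
    acf(y); pacf(y)                           # basis for choosing p and q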

Unit Roots in Time Series Models

  • Unit Roots in Autoregressions
  • Unit Roots in Moving Averages

Therefore, testing the hypothesis of a unit root at 1 in the autoregressive polynomial is equivalent to testing φ1∗ = 0. Example 6.3.1: Consider testing the time series of Example 6.1.1 (see Figure 6-1) for the presence of a unit root in the autoregressive operator. Consequently, testing for a unit root in the moving-average polynomial is equivalent to testing that the time series has been overdifferenced.
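
A bare-bones version of the Dickey–Fuller regression: regress ∇Xt on Xt−1 and compare the t-statistic of φ1∗ with the Dickey–Fuller critical value (about −2.86 at the 5% level for the model with intercept):

    set.seed(5)
    x  <- cumsum(rnorm(200))        # random walk: a unit root is present
    dx <- diff(x)
    xlag <- x[-length(x)]
    summary(lm(dx ~ xlag))$coefficients   # t-value of xlag is the test statistic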

Forecasting ARIMA Models

  • The Forecast Function

This can be done by applying the operator Pn to each side of (6.4.1) (with t = n + h) and using the linearity of Pn. The solution of (6.4.7) is well known from the theory of linear difference equations (see Brockwell and Davis (1991), Section 3.6). To find the general solution of such an inhomogeneous linear difference equation, it suffices (see Brockwell and Davis (1991), Section 3.6) to find one particular solution of (6.4.10) and then add to it the general solution of the homogeneous equation obtained by setting the right-hand side equal to zero.

Seasonal ARIMA Models

  • Forecasting SARIMA Processes

In each of these examples, the twelve series corresponding to the different months are uncorrelated. For the numerical values of the predictors and prediction bounds, right-click on the graph and then click Info. Table 6.1 shows the predictors and the standard deviations of the prediction errors under both models (6.5.8) and (6.5.9) for the Accidental Deaths series.
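
A base-R counterpart using the built-in USAccDeaths series (monthly accidental deaths, 1973–1978); the orders below correspond to a SARIMA(0,1,1)×(0,1,1)12 model of the kind discussed in this section:

    fit <- arima(USAccDeaths, order = c(0, 1, 1),
                 seasonal = list(order = c(0, 1, 1), period = 12))
    predict(fit, n.ahead = 6)   # predictors and standard errors for t = 73,...,78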

Table 6.1 Predicted values of the Accidental Deaths series for t = 73, ..., 78, the standard deviations σt of the prediction errors, and the corresponding observed values of Xt for the same period

Regression with ARMA Errors

  • OLS and GLS Estimation
  • ML Estimation

It can be shown that the GLS estimator is the best linear unbiased estimator of β; that is, for any k-dimensional vector c and any unbiased estimator β̂ of β that is a linear function of the observations Y1, ..., Yn, the variance of c′β̂GLS is no larger than that of c′β̂. The function ℓ(β, φ, θ) can be expressed in terms of the observations {Yt} and the parameters β, φ, and θ using the innovations algorithm (see Section 3.3) and minimized numerically to obtain the maximum likelihood estimators β̂, φ̂, and θ̂. Assuming that {Wt} is iid noise, the estimated variance of the OLS estimator of β is expressed in terms of γ̂Y.
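
arima() estimates a regression with ARMA errors by maximum likelihood through its xreg argument; the data here are simulated:

    set.seed(9)
    t <- 1:100
    y <- 2 + 0.5 * t + arima.sim(model = list(ar = 0.7, ma = 0.3), n = 100)
    arima(y, order = c(1, 0, 1), xreg = t)   # beta estimated jointly with phi, theta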

