
4.3 ESTIMATING CONFIDENCE INTERVALS FOR HISTORICAL SIMULATION VaR AND ETL

The methods considered so far are good for giving point estimates of VaR or ETL, but they give us no indication of the precision of these estimates or of the associated VaR or ETL confidence intervals. However, there are a number of methods to get around this limitation and produce confidence intervals for our risk estimates.

4.3.1 A Quantile Standard Error Approach to the Estimation of Confidence Intervals for HS VaR and ETL

One solution is to apply the theory of quantile standard errors. If a quantile (or VaR) estimate x has a density function with value f, a cumulative density function with value p, and we have a sample of size n, then the approximate standard error of our quantile estimate is:

se(x) = sqrt(p(1 − p)/(n f^2))   (4.2)

(see, e.g., Kendall and Stuart (1972, pp. 251–252)). We could therefore estimate confidence intervals for our VaR estimates in the usual textbook way, and the 95% confidence interval for our VaR would be:

[x − 1.96 se(x), x + 1.96 se(x)]   (4.3)
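Equations (4.2) and (4.3) can be coded directly. The sketch below (in Python, not part of the IMRM Toolbox) estimates the density value f from the relative frequency in a histogram bin around the quantile, so the bin width is an explicitly arbitrary input:

```python
import numpy as np

def quantile_se(data, p=0.05, bin_width=0.1):
    """Approximate standard error of the p-quantile, as in equation (4.2).

    The density value f at the quantile is estimated from the relative
    frequency in a histogram bin, so the result depends on the choice
    of bin_width (an ancillary, somewhat arbitrary parameter).
    """
    x = np.asarray(data)
    n = len(x)
    q = np.quantile(x, p)
    # relative frequency in the bin containing the quantile, scaled to a density
    in_bin = np.sum(np.abs(x - q) <= bin_width / 2)
    f = in_bin / (n * bin_width)
    return np.sqrt(p * (1 - p) / (n * f ** 2))

def quantile_ci(data, p=0.05, z=1.96, bin_width=0.1):
    """Symmetric 95% confidence interval for the p-quantile, as in (4.3)."""
    q = np.quantile(data, p)
    se = quantile_se(data, p, bin_width)
    return q - z * se, q + z * se
```

Varying `bin_width` and watching the interval change is a quick way to see the bin-size sensitivity discussed below.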

This approach is easy to implement and plausible with large sample sizes. On the other hand:

• Results will be sensitive to the value of f, the relative frequency, whose estimated value depends a great deal on the bin size: our results are potentially sensitive to the value of what is essentially an ancillary and, to some extent, arbitrary parameter.

• The quantile standard error approach relies on asymptotic (i.e., limiting) theory, and can be unreliable with small sample sizes.

• This approach produces symmetric confidence intervals that can be misleading for VaRs at high confidence levels, where the ‘true’ confidence intervals are necessarily asymmetric because of the increasing sparsity of our data as we go further out into the tail.

4.3.2 An Order Statistics Approach to the Estimation of Confidence Intervals for HS VaR and ETL

An alternative is to estimate VaR and ETL confidence intervals using the theory of order statistics, explained in Tool No. 1: Estimating VaR and ETL Using Order Statistics. This approach gives us not just a VaR (or ETL) estimate, but a complete VaR distribution function from which we can read off a VaR confidence interval. The median of this distribution function also gives us an alternative point estimate of our VaR. This approach is easy to program and very general in its application. Relative to the previous approach, it also has the advantages of not relying on asymptotic theory (so it is reliable with small samples) and of being less dependent on ancillary assumptions.

Applied to our earlier P/L data, the OS approach gives us estimates (obtained using the IMRM Toolbox ‘hsvarpdfperc’ function) of the 2.5% and 97.5% points of the VaR distribution function, that is, the bounds of the 95% confidence interval for our VaR, of 1.011 and 1.666. This tells us we can be 95% confident that the ‘true’ VaR lies in the range [1.011, 1.666]. The median of the distribution (the 50th percentile) is 1.441, which is fairly close to our earlier VaR point estimate, 1.475.

The corresponding points of the ETL distribution function can be obtained (using the ‘hsetldfperc’ function) by mapping from the VaR to the ETL: we take a point on the VaR distribution function, and estimate the corresponding point on the ETL distribution function. Doing this gives us an estimated 95% confidence interval of [1.506, 2.022] and an ETL median of 1.731, and the latter is not too far away from our earlier ETL point estimate of 1.782.³
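A minimal sketch of the order statistics idea is the standard distribution-free confidence interval for a quantile, which picks the order statistics that bracket the true quantile using the binomial distribution of the number of observations below it. This is a generic Python illustration, not the ‘hsvarpdfperc’ routine itself:

```python
import numpy as np
from scipy.stats import binom

def os_quantile_ci(data, p=0.05, conf=0.95):
    """Distribution-free CI for the p-quantile from order statistics.

    The number of observations at or below the true p-quantile is
    Binomial(n, p), so its binomial quantiles pick out the order
    statistics that bracket the quantile with probability ~conf.
    """
    x = np.sort(data)
    n = len(x)
    a = (1 - conf) / 2
    lo = max(int(binom.ppf(a, n, p)), 1)          # lower bracketing index (1-based)
    hi = min(int(binom.ppf(1 - a, n, p)) + 1, n)  # upper bracketing index (1-based)
    return x[lo - 1], x[hi - 1]
```

For HS VaR at the 95% confidence level, one would apply this to the quantile of the P/L series at p = 0.05 and negate and swap the bounds to express them as a (positive) VaR interval.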

4.3.3 A Bootstrap Approach to the Estimation of Confidence Intervals for HS VaR and ETL

A third approach is the bootstrap, covered in Tool No. 3: The Bootstrap. A bootstrap procedure involves resampling, with replacement, from our existing data set. If we have a data set of n observations, we create a new data set by taking n drawings, each taken from the whole of the original data set. Each new data set created in this way gives us a new VaR estimate. We then create a large number of such data sets and estimate the VaR of each. The resulting VaR distribution function enables us to obtain estimates of the confidence interval for our VaR. The bootstrap is very intuitive and easy to apply.
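The resampling loop just described can be sketched as follows. This is a generic Python illustration (the function names and defaults are my own), not the toolbox code used for the figures:

```python
import numpy as np

def hs_var(pl, cl=0.95):
    """Historical-simulation VaR: the negative of the (1 - cl) quantile of P/L."""
    return -np.quantile(pl, 1 - cl)

def bootstrap_var(pl, cl=0.95, n_boot=1000, conf=0.95, seed=0):
    """Bootstrap the HS VaR: resample P/L with replacement n_boot times.

    Returns the (lower, upper) bounds of the conf-level confidence
    interval and the median of the bootstrapped VaR distribution.
    """
    rng = np.random.default_rng(seed)
    pl = np.asarray(pl)
    n = len(pl)
    vars_ = np.array([hs_var(rng.choice(pl, size=n, replace=True), cl)
                      for _ in range(n_boot)])
    a = (1 - conf) / 2
    return np.quantile(vars_, a), np.quantile(vars_, 1 - a), np.median(vars_)
```

Plotting the array of bootstrapped VaRs as a histogram reproduces the kind of picture shown in Figure 4.3.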

For example, if we take 1,000 bootstrapped samples from our P/L data set, estimate the VaR of each, and then plot them, we get the histogram shown in Figure 4.3. (By the way, the gaps and unevenness of the histogram reflect the small initial sample size (i.e., 100), rather than the bootstrap itself.) The 95% confidence interval for our VaR is [1.095,1.624], and the median of the distribution is 1.321.

[Histogram of bootstrapped VaR estimates: x-axis VaR, y-axis Frequency]

Figure 4.3 Bootstrapped VaR.

Note: Results obtained using the ‘bootstrapvarfigure’ function, and the same hypothetical data as in earlier figures.

³ Naturally, the order statistics approach can be combined with more sophisticated non-parametric density estimation approaches. Instead of applying the OS theory to the histogram or naïve estimator, we could apply it to a more sophisticated kernel estimator, and thereby extract more information from our data. This approach has a lot of merit and is developed in detail by Butler and Schachter (1998). It is, however, also less transparent, and I prefer to stick with histograms, if only for expository purposes.

Table 4.1 Confidence intervals for non-parametric VaR and ETL

Approach              Lower bound   Upper bound   % Range
VaR
  Order statistics    1.011         1.666         44.4%
  Bootstrap           1.095         1.624         40.0%
ETL
  Order statistics    1.506         2.022         29.0%
  Bootstrap           1.366         1.997         35.4%

Note: The VaR and ETL are based on a 95% confidence level, and the range is estimated as the difference between the upper and lower bounds, divided by the VaR or ETL point estimate.

[Histogram of bootstrapped ETL estimates: x-axis ETL, y-axis Frequency]

Figure 4.4 Bootstrapped ETL.

Note: Results obtained using the ‘bootstrapetlfigure’ function, and the same hypothetical data as in earlier figures.

We can also use the bootstrap to estimate ETLs in much the same way: for each new resampled data set, we estimate the VaR, and then estimate the ETL as the average of losses in excess of VaR. Doing this a large number of times gives us a large number of ETL estimates, which we can plot in the same way as the VaR estimates. The plot of bootstrapped ETL values is shown in Figure 4.4, and it is better behaved than the VaR histogram in the last figure because the ETL is an average of tail VaRs. The 95% confidence interval for our ETL is [1.366, 1.997].
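The same loop extends to the ETL with one extra step per resample: averaging the losses beyond that resample's VaR. Again, this is a generic Python sketch rather than the ‘bootstrapetlfigure’ routine:

```python
import numpy as np

def hs_etl(pl, cl=0.95):
    """ETL: the average loss in excess of the historical-simulation VaR."""
    pl = np.asarray(pl)
    var = -np.quantile(pl, 1 - cl)
    losses = -pl
    tail = losses[losses > var]
    return tail.mean() if tail.size else var

def bootstrap_etl(pl, cl=0.95, n_boot=1000, conf=0.95, seed=0):
    """Bootstrap the HS ETL and return the conf-level confidence interval."""
    rng = np.random.default_rng(seed)
    pl = np.asarray(pl)
    n = len(pl)
    etls = np.array([hs_etl(rng.choice(pl, size=n, replace=True), cl)
                     for _ in range(n_boot)])
    a = (1 - conf) / 2
    return np.quantile(etls, a), np.quantile(etls, 1 - a)
```

Because each bootstrapped ETL averages several tail observations, the resulting histogram is smoother than the corresponding VaR histogram, consistent with Figure 4.4.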

It is also interesting to compare the VaR and ETL confidence intervals obtained by the two methods. These are summarised in Table 4.1, with the middle two columns giving the bounds of the 95% confidence interval, and the last column giving the difference between the two bounds standardised in terms of the relevant VaR or ETL point estimate. As we can see, the OS and bootstrap approaches give fairly similar results, a finding that is quite striking when one considers the very small sample size of only 100 observations.