10.3 MECHANICAL STRESS TESTING
function, the stress scenario change in portfolio value is a normally distributed random variable with mean $X_{1t}R_{1t} + X_{2t}\mu_c$ and variance $X_{2t}\Sigma_c X_{2t}^T$, where $X_{1t}$ and $X_{2t}$ are the position vectors corresponding to the two types of risk factors. Once the joint distribution of the risk factors is taken into account, our stress test thus produces a distribution of scenario loss values, not just a single loss value. If we wish, we can then focus on one of the percentile points of the scenario loss distribution, in which case our output can be interpreted as a stress VaR: the likely worst outcome at a chosen confidence level.
Alternatively, we can focus on the mean of the conditional loss distribution, in which case our stress loss is $X_{1t}R_{1t} + X_{2t}\Sigma_{21}\Sigma_{11}^{-1}R_{1t}$. This expected loss differs from the expected loss we get under traditional stress testing because it takes account of the correlations between different risk factors: under a traditional stress test, we would stress the stressed risk factors and take the other risk factors to be unaltered, and therefore get an expected loss of $X_{1t}R_{1t}$. The results reported by Kupiec (1999, p. 12) indicate that this traditional loss measure performs very poorly in backtesting exercises, and that his proposed new expected loss measure fares better.12
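To make the contrast concrete, the following sketch compares the two expected loss measures for a made-up four-factor portfolio (all positions, covariances and scenario values are purely illustrative, not figures from Kupiec):

```python
import numpy as np

# Hypothetical example: two stressed factors (R1) and two unstressed
# factors (R2), jointly normal with the partitioned covariance matrix
# [[S11, S12], [S21, S22]]. All numbers are illustrative.
Sigma = np.array([[0.04, 0.01, 0.02, 0.00],
                  [0.01, 0.09, 0.01, 0.03],
                  [0.02, 0.01, 0.16, 0.02],
                  [0.00, 0.03, 0.02, 0.25]])
S11 = Sigma[:2, :2]            # covariance of the stressed factors
S21 = Sigma[2:, :2]            # cross-covariance of R2 with R1

X1 = np.array([1.0, 2.0])      # positions in the stressed factors
X2 = np.array([0.5, 1.5])      # positions in the unstressed factors
R1 = np.array([-0.10, -0.20])  # stress scenario for the stressed factors

# Traditional stress test: unstressed factors assumed unchanged.
traditional = X1 @ R1

# Conditional expected change: unstressed factors set to their
# conditional means S21 @ inv(S11) @ R1 given the scenario.
conditional = X1 @ R1 + X2 @ (S21 @ np.linalg.solve(S11, R1))
```

With these numbers the conditional measure reports a larger expected loss (a value change of −0.62 versus −0.50) because the unstressed factors are positively correlated with the stressed ones and are dragged down with them.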
us our worst-case scenario, and the maximum loss (ML) is equal to the current value of our portfolio minus the portfolio value under this worst-case scenario.14
Factor push analysis is relatively easy to program, at least for simple positions, and is good for showing up where and how we are most vulnerable. It does not require particularly restrictive assumptions: it can be used in conjunction with a non-linear P/L function (e.g., a P/L function that is quadratic in the underlying risk factors), and can be modified to accommodate whatever correlation assumptions we care to make (e.g., using a Choleski decomposition, as in Studer and Lüthi (1997)). Furthermore, in common with measures such as those generated by the SPAN risk measurement system or by worst-case scenario analysis, not to mention ETL, the ML risk measure also has the major theoretical attraction of being coherent.15
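As a sketch of how simple such a program can be, the following pushes each factor of a purely illustrative linear (delta-only) portfolio up or down by α standard deviations and reads off the maximum loss from the worst corner:

```python
import numpy as np
from itertools import product

# Illustrative factor push for a linear (delta-only) portfolio: push each
# factor up or down by alpha standard deviations and take the worst of
# the 2^n corner scenarios. Positions and volatilities are made up.
alpha = 2.33                           # roughly the 99% normal quantile
sigma = np.array([0.02, 0.05, 0.10])   # risk factor volatilities
X = np.array([100.0, -50.0, 25.0])     # position sensitivities

worst_change = min(X @ (np.array(s) * alpha * sigma)
                   for s in product([-1.0, 1.0], repeat=len(sigma)))
max_loss = -worst_change               # ML = current value - worst value
```

For a linear portfolio the worst corner is always the one that moves each factor against the position, so ML here is just α times the sum of the absolute position-weighted volatilities; the brute-force corner search only earns its keep once the P/L function is non-linear.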
If we are prepared to make certain additional assumptions, factor push can also tell us something about the likelihood of the losses concerned.16 If we have just one risk factor and make appropriate parametric assumptions (e.g., normality), α enables us to infer the tail probability, and the maximum loss on the boundary of the confidence region is our VaR. α can still tell us something about the probabilities when we have multiple risk factors and make appropriate parametric assumptions (e.g., multivariate normality, given a set of correlation assumptions), but the analysis is more complicated and we find that the ML on the boundary of the confidence region is a conservative estimate (i.e., an overestimate) of our VaR (see Studer (1999, p. 38)).17 However, we can also adjust the α-value to make the ML equal to our VaR, and if we make this adjustment, we can interpret ML as a VaR and use the factor push approach as an alternative way of estimating VaR.18
However, FP rests on the not-always-appropriate assumption that the maximum loss occurs at extreme values of the underlying risk variables (i.e., it assumes that the maximum loss occurs when the underlying factor moves up or down by α times its standard deviation).19 Yet this assumption is only appropriate for certain relatively simple types of portfolio (e.g., uncomplicated equity or FX positions) in which the position value is a monotonic function of a (typically, single) risk factor, and there are many other instruments for which this assumption does not hold. A good example is a long
14To do factor push analysis properly, we should also take account of relevant constraints, such as zero-arbitrage conditions, and we might also want to work with mapped positions, delta–gamma approximations, and so on. However, none of these modifications alters the basic nature of factor push analysis.
15For more on the SPAN and worst-case scenario analysis approaches, see Box 2.4.
16There are various ways we can get probability figures out of stress tests. Besides the Wilson–Studer–Lüthi–Rouvinez approach discussed in the text and the Berkowitz coherent stress test approach discussed in Box 10.2, we can also use the dominant factor method suggested by Bouchaud and Potters (2000). This approach is based on a dominant factor approximation that is ideally suited to handling non-normal risk factors, and the approximation can be fitted to any of a large number of fat-tailed distributions to estimate the probabilities concerned.
17This is the same problem raised by Britten-Jones and Schaefer (1999) in their critique of Wilson's QP approach (e.g., Wilson (1996)), discussed earlier in the Appendix to Chapter 5 — namely, that identifying outcomes as being inside or outside a confidence region does not tell us the probability of those outcomes, with the result that Wilson's ML (or capital-at-risk, to use his terminology) is an overestimate of VaR.
18An alternative method of obtaining VaR estimates from a factor push methodology is provided by Rouvinez (1997).
Suppose we start by assuming that the changes in the risk factors are independent normal. The sum of the squares of these changes is then distributed as a chi-squared, and (assuming a zero mean) the VaR is equal to the relevant quantile of the chi-squared distribution, say β, times the portfolio standard deviation. Since β is generally bigger than the standard normal quantile α, this approach generally leads to bigger VaR estimates than, say, a delta–normal approach. For more on this method, see Rouvinez (1997, pp. 60–62).
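The β > α claim is easy to check by simulation (illustrative settings; here β is taken as the quantile of the square root of the chi-squared variable, i.e., the number on the same scale as the normal quantile α, and scipy.stats.chi2.ppf would give the exact figure):

```python
import numpy as np

# Monte Carlo check that the chi-based quantile beta exceeds the
# normal quantile alpha at the same confidence level.
rng = np.random.default_rng(42)
n, p, m = 3, 0.99, 1_000_000   # factors, confidence level, sample size

draws = rng.standard_normal((m, n))
beta = np.quantile(np.sqrt((draws ** 2).sum(axis=1)), p)  # chi quantile
alpha = np.quantile(rng.standard_normal(m), p)            # normal quantile
```

With three factors at the 99% level the simulated β comes out around 3.4, against α ≈ 2.33.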
19One other problem with mechanical stress tests is that the largest losses might come from conjunctions of events that will not in fact occur. For example, we might find that the maximum loss occurs when there is a large fall in the stock market associated with a large fall in stock market volatility. Since such combinations cannot plausibly occur, the losses associated with them are not really worth worrying about. In carrying out mechanical stress tests, we need to screen for such implausible combinations and remove them from consideration.
straddle — a combination of a long call and a long put written against the same underlying asset. The profit on a straddle depends on movements in the underlying variable, either up or down — the greater the movement, the bigger the profit — and the maximum loss on a straddle actually occurs when the underlying price does not move at all. A naïve factor push methodology applied to a straddle position would then give a misleading picture of maximum loss, since it would assume that the maximum loss occurred in exactly those circumstances where it would in fact make its maximum profit! There is also good reason to believe that this type of problem is quite serious in practice. To quote Tanya Beder:
In our experience, portfolios do not necessarily produce their greatest losses during extreme market moves . . . portfolios often possess Achilles’ heels that require only small moves or changes between instruments or markets to produce significant losses. Stress testing extreme market moves will do little to reveal the greatest risk of loss for such portfolios. Furthermore, a review of a portfolio’s expected behavior over time often reveals that the same stress test that indicates a small impact today indicates embedded land mines with a large impact during future periods. This trait is particularly true of options-based portfolios that change characteristics because of time rather than because of changes in the components of the portfolio.
(Beder (1995a, p. 18))

When using factor push, we first need to satisfy ourselves that our portfolio suffers its maximum loss when the risk factors make their biggest moves.
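The straddle example is easy to verify numerically. The sketch below (hypothetical strike, premium and push size) computes the expiry P/L |S_T − K| − premium at the two pushed extremes and over a grid of intermediate prices:

```python
import numpy as np

# Hypothetical long straddle held to expiry: long a call and a put,
# both struck at K, with total premium paid up front.
K, premium = 100.0, 8.0
pnl = lambda s: abs(s - K) - premium   # P/L at expiry as a function of S_T

alpha, sigma_s = 2.33, 10.0            # push size for the underlying price
pushed = [K - alpha * sigma_s, K + alpha * sigma_s]

fp_loss = -min(pnl(s) for s in pushed)           # loss factor push reports
grid = np.linspace(pushed[0], pushed[1], 2001)   # search interior values too
mlo_loss = -min(pnl(s) for s in grid)
```

Factor push reports a profit of 15.3 in both pushed scenarios, while the search over intermediate values finds the true maximum loss of 8 (the full premium) at zero move, exactly the failure described above.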
10.3.2 Maximum Loss Optimisation
The solution to this latter problem is to search over the losses that occur for intermediate as well as extreme values of the risk variables. This procedure is known as maximum loss optimisation.20 Maximum loss optimisation is essentially the same as factor push analysis, except that it also searches over intermediate values of the risk variables. There are therefore more computations involved, and MLO will take longer if there are many risk factors and a lot of intermediate values to search over. Consequently, the choice between FP and MLO depends on the payoff characteristics of our portfolio. If the portfolio is made up of straightforward positions, each of which takes its maximum loss at extreme values of the underlying risk factors, then FP and MLO will deliver exactly the same results and we may as well use the computationally simpler FP approach. However, if the portfolio has less straightforward payoff characteristics (e.g., as with some options positions), it may make sense to use MLO. MLO can also help pick up interactions between different risks that we might otherwise have overlooked, and this can be very useful for more complex portfolios whose risks might interact in unexpected ways. As a general rule, if the portfolio is complex or has significant non-linear derivatives positions, it is best to play safe and go for MLO.21
20For more on maximum loss optimisation, and on the difference between it and factor push analysis, see Frain and Meegan (1996, pp. 16–18).
21The actual calculations can be done using a variety of alternative approaches. The most obvious approach is a grid search, in which we discretise the possible movements in risk factors and search over the relevant n-dimensional grid to find that combination of risk factor changes that maximises our loss. However, we can also use simulation methods, or numerical methods such as a multidimensional simplex method, or hybrid methods such as simulated annealing, all of which are discussed further in Breuer and Krenn (2000, pp. 10–13). These methods are also capable of considerable refinement to increase accuracy and/or reduce computation time.
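A minimal grid-search MLO along these lines, for a hypothetical two-factor portfolio with a quadratic (delta-gamma) P/L, together with a factor push calculation on the same portfolio for comparison:

```python
import numpy as np
from itertools import product

# Grid-search maximum loss optimisation for a hypothetical two-factor
# portfolio with a delta-gamma (quadratic) P/L. The deltas and the
# gamma matrix are illustrative.
alpha = 2.33
sigma = np.array([0.02, 0.05])
delta = np.array([10.0, -40.0])
gamma = np.array([[500.0, 0.0],
                  [0.0, 40.0]])

def pnl(dz):
    return delta @ dz + 0.5 * dz @ gamma @ dz

# Factor push: corners of the +/- alpha*sigma box only.
corners = product(*[[-alpha * s, alpha * s] for s in sigma])
fp_loss = -min(pnl(np.array(dz)) for dz in corners)

# MLO: discretise each factor's range and search the whole grid,
# interior points included.
grids = [np.linspace(-alpha * s, alpha * s, 41) for s in sigma]
mlo_loss = -min(pnl(np.array(dz)) for dz in product(*grids))
```

Here the grid search finds a worse loss than the corner search because the first factor's P/L bottoms out at an interior move; the 41-point grid is purely illustrative and would be refined (or replaced by an optimiser) in practice.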
Box 10.5 CrashMetrics
CrashMetrics is a form of maximum loss optimisation that is designed to estimate worst-case losses (see Hua and Wilmott (1997) or Wilmott (2000, ch. 58)). If we have a long position in a single option, the option P/L can be approximated by a second-order Taylor approximation (see Equation (8.12)), and the maximum possible loss is $\delta^2/(2\gamma)$ and occurs when the change in the underlying value is $-\delta/\gamma$. We can get comparable expressions for multi-option portfolios provided we can model the relationship — and more particularly, the variance–covariance matrix — between the underlying variables. Hua and Wilmott suggest that we do so by modelling how they move in a crash relative to a market benchmark, and they estimate the coefficients involved using data on extreme market moves. This approach can also be extended to deal with the other Greek factors, changes in bid–ask spreads, and so on.
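The single-option worst case quoted in the box is straightforward to check under the delta-gamma approximation (the Greeks below are hypothetical):

```python
import numpy as np

# Worst case for a long single-option position under the delta-gamma
# approximation P/L(dS) = delta*dS + 0.5*gamma*dS**2, with gamma > 0
# for a long option. The minimum sits at dS = -delta/gamma, giving a
# maximum loss of delta**2 / (2*gamma).
delta, gamma = 0.6, 0.05               # hypothetical option Greeks

worst_move = -delta / gamma            # underlying move at the worst case
max_loss = delta ** 2 / (2 * gamma)    # maximum possible loss

# Numerical check over a fine grid of underlying moves.
moves = np.linspace(-30.0, 30.0, 60001)
pnl = delta * moves + 0.5 * gamma * moves ** 2
```

The analytic worst case (a move of −12 with loss 3.6 for these Greeks) matches the minimum of the numerically evaluated P/L.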