
A5.2 DELTA–GAMMA APPROACHES

A5.2.1 The Delta–Gamma Approximation

An obvious alternative is to take a second-order approach — to accommodate non-linearity by taking a second-order Taylor series approximation rather than a first-order one. This second-order approximation is usually known in finance as a delta–gamma approximation, and taking such an approximation for a standard European call option gives us the following:

$$\Delta c \approx \delta\,\Delta S + (\gamma/2)(\Delta S)^2 \qquad (A5.4)$$

This second-order approximation takes account of the gamma risk that the delta–normal approach ignores (cf. Equation (A5.1)).11 The improvement over the delta–normal approach is particularly marked when the option has a high (positive or negative) gamma (e.g., as would be the case with at-the-money options that are close to maturity).12 However, once we get into second-order approximations the problem of estimating VaR becomes much more difficult, as we now have the squared or quadratic terms to deal with.13
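To see the difference the gamma term makes, the following sketch compares the delta-only and delta–gamma approximations with an exact Black–Scholes repricing of a vanilla call. The parameter values are hypothetical and the Greeks are obtained by finite differences; it is an illustration, not a recipe.

```python
# Minimal sketch (hypothetical parameters): compare first- and second-order Taylor
# approximations of a call's price change with exact Black-Scholes repricing.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Near-the-money call close to maturity, where gamma is high
S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.25, 0.1
eps = 0.01
delta = (bs_call(S + eps, K, r, sigma, T) - bs_call(S - eps, K, r, sigma, T)) / (2 * eps)
gamma = (bs_call(S + eps, K, r, sigma, T) - 2 * bs_call(S, K, r, sigma, T)
         + bs_call(S - eps, K, r, sigma, T)) / eps**2

dS = -5.0                                       # a large fall in the underlying price
exact = bs_call(S + dS, K, r, sigma, T) - bs_call(S, K, r, sigma, T)
delta_only = delta * dS                         # first-order (delta-normal style) approximation
delta_gamma = delta * dS + 0.5 * gamma * dS**2  # Equation (A5.4)
print(exact, delta_only, delta_gamma)
```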

A5.2.2 The Delta–Gamma Normal Approach

One tempting but flawed response to this problem is to use a delta–gamma normal approach, the essence of which is to regard the extra risk factor (ΔS)² as equivalent to another independently distributed normal variable to be treated in the same way as the first one (i.e., ΔS). We can then regard the change in option value as if driven by two risk factors, ΔS and U:

$$\Delta c \approx \delta\,\Delta S + (\gamma/2)\,U \qquad (A5.5)$$

where U equals (ΔS)². When estimating VaR, we treat the option as equivalent to a portfolio that is linear in two normal risk factors.

11 However, as is clear from the Black–Scholes equation, both delta–normal and delta–gamma approximations can also run into problems from other sources of risk. Even if the underlying price S does not change, a change in expected volatility will lead to a change in the price of the option and a corresponding change in the option's VaR: this is the infamous problem of vega risk, the sensitivity of the option price to changes in volatility. Similarly, the option's value will also change in response to a change in the interest rate (the rho effect) and in response to the passing of time (the theta effect). In principle, these effects are not too difficult to handle because they do not involve higher-order (e.g., squared) terms, and we can tack these additional terms onto the basic delta–normal or delta–gamma approximations if we wish to. However, vega in particular can be notoriously badly behaved.

12 There can also be some difficult problems lurking beneath the surface here. (1) The second-order approximation can still be inaccurate even with simple instruments such as vanilla calls. Estrella (1996, p. 360) points out that the Taylor series expansion of the Black–Scholes formula does not always converge, and even when it does, we sometimes need very high-order approximations to obtain results of sufficient accuracy to be useful. However, Mori et al. (1996, p. 9) and Schachter (1995) argue on the basis of plausible parameter simulations that Estrella is unduly pessimistic about the usefulness of Taylor series approximations, but even they do not dispute Estrella's basic warning that results based on Taylor series approximations can be unreliable. (2) We might be dealing with instruments with more complex payoff functions than simple calls, and their payoff profiles might make second-order approximations very inaccurate (e.g., as is potentially the case with options such as knockouts or range forwards) or just intractable (as is apparently the case with the mortgage-backed securities considered by Jakobsen (1996)).

13 Nonetheless, one way to proceed is to estimate the moments of the empirical P/L or return distribution, use these to fit the empirical distribution to a fairly general class of distributions such as Pearson family or Johnson distributions, and then infer the VaR from the fitted distribution (see, e.g., Zangari (1996c), Jahel et al. (1999)). This type of approach can easily accommodate approximations involving the Greeks, as well as other features such as stochastic volatility (Jahel et al. (1999)). We will consider such approaches in Section A5.2.4.

The option VaR is therefore equal to −α_cl times the 'portfolio' standard deviation, where the latter is found by applying the usual formula, i.e.:

$$\sigma_p = \sqrt{\delta^2\sigma^2 + (\gamma/2)^2\sigma_U^2} = \sqrt{\delta^2\sigma^2 + (1/4)\gamma^2\sigma^4} \qquad (A5.6)$$

where σ, as before, is the volatility of the stock and σ_U is the volatility of the hypothetical instrument U. Consequently, the option VaR is:

$$\text{VaR}_{option} = -\alpha_{cl}\,\sigma_p\,S = -\alpha_{cl}\,\sigma S\sqrt{\delta^2 + (1/4)\gamma^2\sigma^2} \qquad (A5.7)$$

The delta–gamma normal approach thus salvages tractability by forcing the model back into the confines of linear normality, so that we can then apply a modified delta–normal approach to it.
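A minimal sketch of this calculation follows, with hypothetical inputs; the confidence-level quantile is written as a positive number z, which plays the role of −α_cl in Equations (A5.6) and (A5.7).

```python
# Minimal sketch (hypothetical inputs) of the delta-gamma normal VaR of
# Equations (A5.6)-(A5.7).
from math import sqrt

def delta_gamma_normal_var(S, sigma, delta, gamma, z=1.645):
    # z is the positive standard-normal quantile for the confidence level
    # (1.645 at 95%), i.e. z = -alpha_cl in the text's notation.
    sigma_p = sqrt(delta**2 * sigma**2 + 0.25 * gamma**2 * sigma**4)  # Equation (A5.6)
    return z * sigma_p * S                                            # Equation (A5.7)

# Hypothetical option position
print(delta_gamma_normal_var(S=100.0, sigma=0.25, delta=0.5, gamma=0.05))
```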

Unfortunately, it also suffers from a glaring logical problem: ΔS and (ΔS)² cannot both be normal.

If ΔS is normal, then (ΔS)² is chi-squared and Δc, as given by Equation (A5.5), is the sum of a normal and a chi-squared variable. The delta–gamma normal approach consequently achieves tractability by compromising its logical coherence, and it can lead to seriously flawed estimates of VaR.14

A5.2.3 Wilson's Delta–Gamma Approach

An alternative approach was proposed by Wilson (1994b, 1996). This procedure goes back to the definition of VaR as the maximum possible loss with a given level of probability. Wilson suggests that this definition implies that the VaR is the solution to a corresponding optimisation problem, and his proposal is that we estimate VaR by solving this problem.15 In the case of a single call option, the VaR can be formally defined as the solution to the following problem:16

$$\text{VaR} = \max_{\{\Delta S\}}\,[-\Delta c], \quad \text{subject to } (\Delta S)^2\,\sigma_S^{-2} \le \alpha_{cl}^2 \qquad (A5.8)$$

In words, the VaR is the maximum loss (i.e., the maximum value of −Δc) subject to the constraint that underlying price changes occur within a certain confidence interval. The bigger the chosen confidence level, the bigger α_cl and the bigger the permitted maximum price change ΔS.17 In the present context we also take the option price change Δc to be proxied by its delta–gamma approximation:

$$\Delta c \approx \delta\,\Delta S + \gamma(\Delta S)^2/2 \qquad (A5.9)$$

14 A good example is the option position just considered, since the delta–gamma normal estimate of VaR is actually worse than the delta–normal one. Equation (A5.7) implies that the delta–gamma normal procedure gives an estimate of VaR that is even higher than the delta–normal estimate, and the delta–normal estimate is already too big. (Why? If the underlying stock price falls, the corresponding fall in the option price is cushioned by the gamma term. The true VaR of the option position is then less than would be predicted by a linear delta approximation that ignores the gamma effect. Hence, the delta–normal approach overestimates the option's VaR.) Since the delta–normal estimate is already too high, the delta–gamma normal one must be even higher. In this particular case, we would have a better VaR estimate if we ignored the gamma term completely — a good example that shows how treacherous the delta–gamma normal approach can be.
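The cushioning effect described in this footnote is easy to check with hypothetical numbers:

```python
# Hypothetical long-call illustration: with positive gamma, the quadratic term
# offsets part of the loss from a fall in S, so the delta-only loss overstates it.
delta, gamma, dS = 0.5, 0.04, -10.0                      # hypothetical values
loss_delta_only = -(delta * dS)                          # 5.0
loss_delta_gamma = -(delta * dS + 0.5 * gamma * dS**2)   # 5.0 - 2.0 = 3.0
print(loss_delta_only, loss_delta_gamma)
```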

15 Wilson himself calls his risk measure 'capital at risk' rather than value at risk, but it is clear from his discussion that he sees 'capital at risk' as conceptually similar to VaR and I prefer to use the more conventional term. However, there are in fact major differences between the VaR (or whatever else we call it) implied by a quadratic programming approach (of which Wilson's is an example) and conventional or 'true' VaR (see Britten-Jones and Schaefer (1999, appendix)), and we will come back to these differences a little later in the text.

16 See, e.g., Wilson (1996, pp. 205–207).

17 In the case of our call option, the constraint could equally have been written in the more intuitive one-sided form ΔS ≥ −α_cl σ_S. However, more generally, the maximum loss could occur for positive or negative values of ΔS depending on the particular position. Writing the constraint in squared form is a convenient way to capture both positive and negative values of ΔS in a single constraint.

In general, this approach allows for the maximum loss to occur with (ΔS)² taking any value in the range permitted by the constraint, i.e.:

$$0 \le (\Delta S)^2 \le \alpha_{cl}^2\,\sigma_S^2 \qquad (A5.10)$$

which in turn implies that:

$$-\alpha_{cl}\,\sigma_S \le \Delta S \le \alpha_{cl}\,\sigma_S \qquad (A5.11)$$

However, in this case, we also know that the maximum loss occurs when ΔS takes one or other of its permitted extreme values, i.e., where ΔS = α_cl σ_S or ΔS = −α_cl σ_S. We therefore substitute each of these two values of ΔS into Equation (A5.9) and the VaR is the bigger of the two losses.
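A minimal sketch of this single-option case, with hypothetical inputs (α_cl is taken as the positive 95% quantile, 1.645; only its magnitude matters here):

```python
# Minimal sketch (hypothetical inputs) of Wilson's single-option VaR: evaluate the
# delta-gamma loss of Equation (A5.9) at dS = +/- alpha_cl * sigma_S and take the
# larger of the two losses.
def wilson_single_option_var(delta, gamma, sigma_S, alpha_cl=1.645):
    losses = []
    for dS in (alpha_cl * sigma_S, -alpha_cl * sigma_S):
        dc = delta * dS + 0.5 * gamma * dS**2   # Equation (A5.9)
        losses.append(-dc)
    return max(losses)

# sigma_S is the standard deviation of the underlying price change dS (hypothetical)
print(wilson_single_option_var(delta=0.5, gamma=0.04, sigma_S=5.0))
```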

The Wilson approach also applies to portfolios with more than one instrument, but it then loses this simplicity. In this more general case, the VaR is given by the solution to the following quadratic programming (QP) optimisation problem:

$$\text{VaR} = \max_{\{\Delta \mathbf{S}\}} -\left[\boldsymbol{\delta}^{T}\Delta\mathbf{S} + \Delta\mathbf{S}^{T}\boldsymbol{\gamma}\,\Delta\mathbf{S}/2\right], \quad \text{subject to } \Delta\mathbf{S}^{T}\boldsymbol{\Sigma}^{-1}\Delta\mathbf{S} \le \alpha_{cl}^{2} \qquad (A5.12)$$

where δ is a vector of deltas, γ is a matrix of gamma and cross-gamma terms, the superscript 'T' indicates a transpose, and we again use bold face to represent the relevant matrices (Wilson (1996, p. 207)). This problem is a standard quadratic programming problem, and one way to handle it is to rewrite the problem in Lagrangian form:

$$L = -\left[\boldsymbol{\delta}^{T}\Delta\mathbf{S} + \Delta\mathbf{S}^{T}\boldsymbol{\gamma}\,\Delta\mathbf{S}/2\right] + \lambda\left[\Delta\mathbf{S}^{T}\boldsymbol{\Sigma}^{-1}\Delta\mathbf{S} - \alpha_{cl}^{2}\right] \qquad (A5.13)$$

We then differentiate L with respect to each element of ΔS to arrive at the following set of Kuhn–Tucker conditions, which describe the solution:

$$[-\boldsymbol{\gamma} - \lambda\boldsymbol{\Sigma}^{-1}]\,\Delta\mathbf{S} = \boldsymbol{\delta}$$
$$\Delta\mathbf{S}^{T}\boldsymbol{\Sigma}^{-1}\Delta\mathbf{S} \le \alpha_{cl}^{2} \qquad (A5.14)$$
$$\lambda\left[\Delta\mathbf{S}^{T}\boldsymbol{\Sigma}^{-1}\Delta\mathbf{S} - \alpha_{cl}^{2}\right] = 0 \quad \text{and} \quad \lambda \ge 0$$

where λ is the Lagrange multiplier associated with the constraint, which indicates how much the VaR will rise as we increase the confidence level (Wilson (1996, p. 208)). The solution, ΔS*, is then:

$$\Delta\mathbf{S}^{*} = \mathbf{A}(\lambda)^{-1}\boldsymbol{\delta} \qquad (A5.15)$$

where A(λ) = −[γ + λΣ⁻¹]. Solving for ΔS* therefore requires that we search over each possible λ value and invert the A(λ) matrix for each such value. We also have to check which solutions satisfy our constraint and eliminate those that do not. In so doing, we build up a set of potential ΔS* solutions that satisfy our constraint, each contingent on a particular λ value, and then plug each of them into Equation (A5.13) to find the one that maximises L.18

18 That said, implementing this procedure is not easy. We have to invert bigger and bigger matrices as the number of risk factors gets larger, and this can lead to computational problems (e.g., matrices failing to invert). We can ameliorate these problems if we are prepared to make some simplifying assumptions, and one useful simplification is to assume that the A(λ) matrix is diagonal. If we make this assumption, Equation (A5.15) gives us closed-form solutions for ΔS* in terms of λ without any need to worry about matrix inversions. Computations become much faster, and the gain in speed is particularly large when we have a big A(λ) matrix (Wilson (1996, p. 210)). But even this improved procedure can be tedious, and the diagonal A(λ) simplification still does not give us the convenience of a closed-form solution for VaR.
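The λ-search described above can be sketched as follows. The two-asset inputs are hypothetical, a coarse grid over λ stands in for a proper line search, and the feasible candidate with the largest delta–gamma loss is taken as the VaR (at the binding constraint this is equivalent to maximising L).

```python
# Minimal sketch (hypothetical inputs) of the lambda-search behind Wilson's QP VaR,
# Equations (A5.12)-(A5.15).
import numpy as np

def wilson_qp_var(delta, gamma, Sigma, alpha_cl=1.645, lam_grid=None):
    Sigma_inv = np.linalg.inv(Sigma)
    if lam_grid is None:
        lam_grid = np.linspace(0.0, 50.0, 2001)        # coarse grid; refine in practice
    best_loss = -np.inf
    for lam in lam_grid:
        A = -(gamma + lam * Sigma_inv)                 # A(lambda) as in Equation (A5.15)
        try:
            dS = np.linalg.solve(A, delta)             # candidate dS* = A(lambda)^(-1) delta
        except np.linalg.LinAlgError:
            continue                                   # A(lambda) singular for this lambda
        if dS @ Sigma_inv @ dS <= alpha_cl**2 + 1e-12: # keep only feasible candidates
            loss = -(delta @ dS + 0.5 * dS @ gamma @ dS)
            best_loss = max(best_loss, loss)
    return best_loss

# Hypothetical two-asset position: deltas, gamma matrix and covariance of price changes
delta = np.array([0.6, -0.4])
gamma = np.array([[0.05, 0.01], [0.01, 0.03]])
Sigma = np.array([[4.0, 1.0], [1.0, 9.0]])
print(wilson_qp_var(delta, gamma, Sigma))
```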

Unfortunately, this QP approach also suffers from a major conceptual flaw. Britten-Jones and Schaefer (1999, pp. 184–187) point out that there is a subtle but important difference between the ‘true’ VaR and the QP VaR: the ‘true’ VaR is predicated on a confidence region defined over portfolio value changes, whilst the QP VaR is predicated on a confidence region defined over (typically multidimensional) factor realisations. There are several problems with the latter approach, but perhaps the most serious is that it is generally not possible to use confidence regions defined over factors to make inferences about functions of those factors. To quote Britten-Jones and Schaefer:

Simply because a point lies within a 95% confidence region does not mean that it has a 95% chance of occurrence. A point may lie within some 95% region, have a negligible chance of occurring and have a massive loss associated with it. The size of this loss does not give any indication of the true VaR. In short the QP approach is conceptually flawed and will give erroneous results under all but special situations where it will happen to coincide with the correct answer.

(Britten-Jones and Schaefer (1999, p. 186))

Britten-Jones and Schaefer go on to prove that the QP VaR will, in general, exceed the true VaR, but the extent of the overstatement will depend on the probability distribution from which the P/L is generated.

So, in the end, all we have is a risk measure that generally overestimates the VaR by an amount that varies from one situation to another. It is therefore not too surprising that empirical evidence suggests the QP approach can give very inaccurate estimates of ‘true’ VaR, and is sometimes even less accurate than the delta–gamma normal approach (Pritsker (1997, p. 231)).

A5.2.4 Other Delta–Gamma Approaches

Fortunately, there are other delta–gamma approaches that work much better, and a large number of alternative approaches have been suggested:

• Zangari (1996a,c) estimates the moments of the portfolio P/L process, matches these against a Johnson-family distribution with the same moments, and obtains the VaR from the fitted distribution. This method is fast and capable of generating considerably more accurate VaR estimates than a delta approach. Jahel et al. (1999) propose a somewhat similar approach, but one that uses a characteristic function to estimate the moments of the underlying vector process.

• Fallon (1996) takes a second-order Taylor series approximation of the portfolio P/L about the current market prices, and rearranges this expression to obtain a quadratic function of a multinormal distribution. He then uses standard methods to calculate the moments of the approximated P/L distribution, and appropriate approximation methods to obtain the quantiles, and hence the VaR, of this distribution. He also found that the Cornish–Fisher approach gave the best quantile approximations, and was both reliable and accurate (a sketch of a Cornish–Fisher quantile adjustment is given after this list).

• Rouvinez (1997) takes a quadratic approximation to the portfolio P/L and estimates its moments, and then obtains some bounds for the VaR expressed in terms of these moments. He goes on to use the characteristic function to estimate the value of the cdf, and then inverts this to obtain a VaR estimate. His methods are also fast and straightforward to implement.

• Cárdenas et al. (1997) and Britten-Jones and Schaefer (1999) obtain the usual delta–gamma approximation for portfolio P/L, estimate the second and higher moments of the P/L distribution using orthogonalisation methods, and then read off the VaR from appropriate chi-squared tables. These methods are also fairly fast.

• Studer (1999) and Mina (2001) describe procedures by which the quadratic approximation is estimated by least squares methods: we select a set of scenarios, value the portfolio in each of these scenarios to produce 'true' P/Ls, and then choose the delta and gamma parameters to provide a best-fitting approximation to the 'true' P/L. These methods produce fairly accurate and fast delta–gamma approximations to 'true' VaR.

• Albanese et al. (2001) use a 'fast convolution' or fast Fourier transform method that enables the user to obtain accurate VaR estimates very quickly. This method also gives the portfolio's marginal VaRs, and is therefore very useful for risk decomposition.

• Feuerverger and Wong (2000) propose a saddlepoint approximation method that uses formulas derived from the moment-generating function to obtain highly accurate delta–gamma approximations for VaR.
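As noted in the Fallon entry above, the Cornish–Fisher expansion adjusts a standard normal quantile for the skewness and excess kurtosis of the P/L distribution. A minimal sketch with hypothetical moment values follows; the fourth-order expansion shown here is the standard textbook form, not necessarily the exact variant used in the studies cited.

```python
# Minimal sketch of a Cornish-Fisher quantile adjustment: correct the standard-normal
# quantile z for the skewness and excess kurtosis of the (approximated) P/L distribution.
def cornish_fisher_quantile(z, skew, excess_kurt):
    return (z
            + (z**2 - 1.0) * skew / 6.0
            + (z**3 - 3.0 * z) * excess_kurt / 24.0
            - (2.0 * z**3 - 5.0 * z) * skew**2 / 36.0)

# Hypothetical moments of the delta-gamma P/L distribution
z_95 = -1.645                      # lower-tail normal quantile at 95%
z_cf = cornish_fisher_quantile(z_95, skew=-0.5, excess_kurt=1.2)
mu, sigma_pl = 0.0, 1.0e5          # hypothetical P/L mean and standard deviation
var_95 = -(mu + z_cf * sigma_pl)   # VaR as the negative of the adjusted P/L quantile
print(var_95)
```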

Box A5.2 A Duration–convexity Approximation to Bond Portfolios

The second-order approximation approach used to handle non-linearity in options positions can also be used to handle non-linearity in bonds. Suppose we take a second-order approximation of a bond’s price–yield relationship:

$$P(y + \Delta y) \approx P(y) + (dP/dy)\,\Delta y + \tfrac{1}{2}(d^{2}P/dy^{2})(\Delta y)^{2}$$

We know from standard fixed-income theory that:

$$dP/dy = -D_{m}P \quad \text{and} \quad d^{2}P/dy^{2} = CP$$

where D_m is the bond's modified duration and C its convexity (see Tuckman (1995, pp. 123, 126)). The percentage change in bond price is therefore:

$$\Delta P/P \approx -D_{m}\,\Delta y + \tfrac{1}{2}C(\Delta y)^{2}$$

which is the second-order approximation for bond prices corresponding to the delta–gamma approximation for option prices given by Equation (A5.4).
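A minimal sketch of this duration–convexity approximation, with hypothetical duration, convexity and yield-change values:

```python
# Minimal sketch (hypothetical inputs) of the duration-convexity approximation to a
# bond's percentage price change, as in Box A5.2.
def pct_price_change(mod_duration, convexity, dy):
    return -mod_duration * dy + 0.5 * convexity * dy**2

# e.g. modified duration 7, convexity 60, a 100 bp rise in yield
print(pct_price_change(mod_duration=7.0, convexity=60.0, dy=0.01))   # about -6.7%
```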