
1.3 VALUE AT RISK

1.3.1 The Origin and Development of VaR

In the late 1970s and 1980s, a number of major financial institutions started work on internal models to measure and aggregate risks across the institution as a whole. They started work on these models in the first instance for their own internal risk management purposes — as firms became more complex, it was becoming increasingly difficult, but also increasingly important, to be able to aggregate their risks, taking account of how they interact with each other, and firms lacked the methodology to do so.

12 This problem is especially acute for gamma risk. As one risk manager noted:

On most option desks, gamma is a local measure designed for very small moves up and down [in the underlying price]. You can have zero gamma but have the firm blow up if you have a 10% move in the market.

(Richard Bookstaber, quoted in Chew (1994, p. 65))

The solution, in part, is to adopt a wider perspective. To quote Bookstaber again:

The key for looking at gamma risks on a global basis is to have a wide angle lens to look for the potential risks. One, two or three standard deviation stress tests are just not enough. The crash of 1987 was a 20 standard deviation event — if you had used a three standard deviation move [to assess vulnerability] you would have completely missed it.

(Bookstaber, quoted in Chew (1994, pp. 65–66))

13 Quoted in Chew (1994, p. 66).

The best known of these systems is the RiskMetrics system developed by JP Morgan. According to industry legend, this system originated when the chairman of JP Morgan, Dennis Weatherstone, asked his staff to give him a daily one-page report indicating risk and potential losses over the next 24 hours, across the bank’s entire trading portfolio. This report — the famous ‘4:15 report’ — was to be given to him at 4:15 each day, after the close of trading. In order to meet this demand, the Morgan staff had to develop a system to measure risks across different trading positions, across the whole institution, and also aggregate these risks into a single risk measure. The measure used was value at risk (VaR), the maximum likely loss over the next trading day,14 and the VaR was estimated from a system based on standard portfolio theory, using estimates of the standard deviations of, and correlations between, the returns to different traded instruments. While the theory was straightforward, making this system operational involved a huge amount of work: measurement conventions had to be chosen, data sets constructed, statistical assumptions agreed, procedures determined to estimate volatilities and correlations, computing systems established to carry out estimations, and many other practical problems resolved. Developing this methodology took a long time, but by around 1990 the main elements — the data systems, the risk measurement methodology, and the basic mechanics — were all in place and working reasonably well. At that point it was decided to start using the ‘4:15 report’, and it was soon found that the new risk management system had a major positive effect. In particular, it ‘sensitised senior management to risk–return trade-offs and led over time to a much more efficient allocation of risks across the trading businesses’ (Guldimann (2000, p. 57)). The new risk system was highlighted in JP Morgan’s 1993 research conference and aroused a great deal of interest from potential clients who wished to buy or lease it for their own purposes.
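To make the portfolio-theoretic mechanics concrete, here is a minimal sketch of a variance–covariance VaR calculation in the spirit just described. The positions, volatilities, correlation and normality assumption are all illustrative, not details of the Morgan system:

```python
import numpy as np

# Illustrative two-position portfolio (all figures assumed for the example).
exposures = np.array([10_000_000.0, 5_000_000.0])  # dollar positions
vols = np.array([0.012, 0.018])                    # daily return volatilities
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])                      # assumed correlation matrix

cov = np.outer(vols, vols) * corr                  # covariance of daily returns
port_sigma = np.sqrt(exposures @ cov @ exposures)  # std dev of daily dollar P/L

z = 1.645                                          # one-tailed 95% normal quantile
var_95 = z * port_sigma                            # maximum likely loss over one day
print(f"One-day 95% VaR: ${var_95:,.0f}")
```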

Meanwhile, other financial institutions had been working on their own internal models, and VaR software systems were also being developed by specialist companies that concentrated on software but were not in a position to provide data. The resulting systems differed quite considerably from each other. Even where they were based on broadly similar theoretical ideas, there were still considerable differences in terms of subsidiary assumptions, use of data, procedures to estimate volatility and correlation, and many other ‘details’. Besides, not all VaR systems were based on portfolio theory: some systems were built using historical simulation approaches that estimate VaR from histograms of past profit and loss data, and other systems were developed using Monte Carlo simulation techniques.
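The historical simulation idea can be sketched in a few lines: the VaR is read straight off the empirical distribution of past profit and loss. The P/L series below is simulated noise standing in for a real history:

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for 1,000 days of portfolio P/L; a real system would use observed
# P/L or the portfolio repriced under historical market moves.
pnl_history = rng.normal(loc=0.0, scale=150_000.0, size=1_000)

confidence = 0.95
# Historical-simulation VaR: the loss at the (1 - confidence) quantile of P/L.
var_hs = -np.percentile(pnl_history, 100 * (1 - confidence))
print(f"95% historical-simulation VaR: ${var_hs:,.0f}")
```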

These firms were keen to encourage their management consultancy businesses, but at the same time they were conscious of the limitations of their own models and wary about giving too many secrets away. Whilst most firms kept their models secret, JP Morgan decided to make its data and basic methodology available so that outside parties could use them to write their own risk management software. Early in 1994, Morgan set up the RiskMetrics unit to do this and the RiskMetrics model — a simplified version of the firm’s own internal model — was completed in eight months. In October that year, Morgan then made its RiskMetrics system and the necessary data freely available on the internet: outside users could now access the RiskMetrics model and plug their own position data into it.

14 One should, however, note a possible source of confusion. The literature put out by JP Morgan (e.g., the RiskMetrics Technical Document) uses the term ‘value at risk’ somewhat idiosyncratically to refer to the maximum likely loss over the next 20 days, and uses the term ‘daily earnings at risk’ (DEaR) to refer to the maximum likely loss over the next day. However, outside Morgan, the term ‘value at risk’ is used as a generic term for the maximum likely loss over the chosen horizon period.
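Under the usual simplifying assumption of i.i.d. normal daily P/L, the two horizons in the footnote are linked by the square-root-of-time rule (a standard relation, not one stated in the text), so the 20-day figure is roughly √20 times the daily one:

```python
import math

dear = 1_000_000.0              # daily earnings at risk (assumed figure)
var_20d = dear * math.sqrt(20)  # approximate 20-day VaR under i.i.d. normal P/L
print(f"Approximate 20-day VaR: ${var_20d:,.0f}")  # about $4.5m
```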

This bold move attracted a lot of attention, and the resulting public debate about the merits of RiskMetrics was useful in raising awareness of VaR and of the issues involved in establishing and operating VaR systems.15 In addition, making the RiskMetrics data available gave a major boost to the spread of VaR systems by giving software providers and their clients access to data sets that they were often unable to construct themselves.16 It also encouraged many of the smaller software providers to adopt the RiskMetrics approach or make their own systems compatible with it.

The subsequent adoption of VaR systems was very rapid, first among securities houses and investment banks, and then among commercial banks, pension funds and other financial institutions, and non-financial corporates. Needless to say, the state of the art also improved rapidly. Developers and users became more experienced; the combination of plummeting IT costs and continuing software development meant that systems became more powerful and much faster, and able to perform tasks that were previously not feasible; VaR systems were extended to cover more types of instruments; and the VaR methodology itself was extended to deal with other types of risk besides the market risks for which VaR systems were first developed, including credit risks, liquidity risks and cash-flow risks.

Box 1.1 Portfolio Theory and VaR

In some respects VaR is a natural progression from earlier portfolio theory (PT). Yet there are also important differences between them:

• PT interprets risk in terms of the standard deviation of the return, while VaR approaches interpret it in terms of the maximum likely loss. The VaR notion of risk — the VaR itself — is more intuitive and easier for laypeople to grasp.

• PT presupposes that P/L or returns are normally (or, at best, elliptically) distributed, whereas VaR approaches can accommodate a very wide range of possible distributions. VaR approaches are therefore more general.

• VaR approaches can be applied to a much broader range of risk problems: PT is limited to market risks, while VaR approaches can be applied to credit, liquidity and other risks, as well as to market risks.

• The variance–covariance approach to VaR has the same theoretical basis as PT — in fact, its theoretical basis is portfolio theory — but the other two approaches to VaR (the historical simulation and Monte Carlo simulation approaches) do not. It would therefore be a mistake to regard all VaR approaches as applications (or developments) of portfolio theory.

15 A notable example is the exchange between Longerstaey and Zangari (1995) and Lawrence and Robinson (1995a) on the safety or otherwise of RiskMetrics. The various issues covered in this debate — the validity of underlying statistical assumptions, the estimation of volatilities and correlations, and similar issues — go right to the heart of risk measurement, and will be dealt with in more detail in later chapters.

16 Morgan continued to develop the RiskMetrics system after its public launch in October 1994. By and large, these developments consisted of expanding data coverage, improving data handling, broadening the instruments covered, and various methodological refinements (see, e.g., the fourth edition of the RiskMetrics Technical Document). In June 1996, Morgan teamed up with Reuters in a partnership to enable Morgan to focus on the risk management system while Reuters handled the data, and in April 1997, Morgan and five other leading banks launched their new CreditMetrics system, which is essentially a variance–covariance approach tailored to credit risk. The RiskMetrics Group was later spun off as a separate company, and the later RiskMetrics work has focused on applying the methodology to corporate risk management, long-run risk management, and other similar areas. For more on these, see the relevant technical documents, e.g., the CorporateMetrics Technical Document (1999).

1.3.2 Attractions of VaR

So what is VaR, and why is it important? The basic concept was nicely described by Linsmeier and Pearson (1996):

Value at risk is a single, summary, statistical measure of possible portfolio losses. Specifically, value at risk is a measure of losses due to ‘normal’ market movements. Losses greater than the value at risk are suffered only with a specified small probability. Subject to the simplifying assumptions used in its calculation, value at risk aggregates all of the risks in a portfolio into a single number suitable for use in the boardroom, reporting to regulators, or disclosure in an annual report. Once one crosses the hurdle of using a statistical measure, the concept of value at risk is straightforward to understand. It is simply a way to describe the magnitude of the likely losses on the portfolio.

(Linsmeier and Pearson (1996, p. 3))

The VaR figure has two important characteristics. The first is that it provides a common, consistent measure of risk across different positions and risk factors. It enables us to measure the risk associated with a fixed-income position, say, in a way that is comparable to and consistent with a measure of the risk associated with equity positions. VaR provides us with a common risk yardstick, and this yardstick makes it possible for institutions to manage their risks in new ways that were not possible before. The other characteristic of VaR is that it takes account of the correlations between different risk factors. If two risks offset each other, the VaR allows for this offset and tells us that the overall risk is fairly low. If the same two risks don’t offset each other, the VaR takes this into account as well and gives us a higher risk estimate. Clearly, a risk measure that accounts for correlations is essential if we are to be able to handle portfolio risks in a statistically meaningful way.
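The correlation point can be made concrete with the standard formula for aggregating two normal VaRs; the stand-alone VaRs and correlations below are illustrative:

```python
import math

# Illustrative stand-alone 95% VaRs for two positions.
var_1, var_2 = 100_000.0, 80_000.0

def combined_var(v1, v2, rho):
    """Aggregate two normal VaRs with correlation rho."""
    return math.sqrt(v1**2 + v2**2 + 2 * rho * v1 * v2)

for rho in (-0.5, 0.0, 0.5):
    print(f"rho = {rho:+.1f}: combined VaR = ${combined_var(var_1, var_2, rho):,.0f}")
```

Offsetting risks (negative correlation) produce a combined VaR well below the sum of the stand-alone figures, while positively correlated risks push it back up towards that sum.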

VaR information can be used in many ways:

1. Senior management can use it to set their overall risk target, and from that determine risk targets and position limits down the line. If they want the firm to increase its risks, they would increase the overall VaR target, and vice versa.

2. Since VaR tells us the maximum amount we are likely to lose, we can use it to determine capital allocation. We can use it to determine capital requirements at the level of the firm, but also right down the line, to the level of the individual investment decision: the riskier the activity, the higher the VaR and the greater the capital requirement.

3. VaR can be very useful for reporting and disclosure purposes, and firms increasingly make a point of reporting VaR information in their annual reports.17

4. We can use VaR information to assess the risks of different investment opportunities before decisions are made. VaR-based decision rules can guide investment, hedging and trading decisions, and do so taking account of the implications of alternative choices for the portfolio risk as a whole.18

5. VaR information can be used to implement portfolio-wide hedging strategies that are otherwise rarely possible.19

6. VaR information can be used to provide new remuneration rules for traders, managers and other employees that take account of the risks they take, and so discourage the excessive risk-taking that occurs when employees are rewarded on the basis of profits alone, without any reference to the risks they took to get those profits.

In short, VaR can help provide for a more consistent and integrated approach to the management of different risks, leading also to greater risk transparency and disclosure, and better strategic management.

17 For more on the use of VaR for reporting and disclosure purposes, see Dowd (2000b), Jorion (2001) or Moosa and Knight (2001).

18 For further information on VaR-based decision rules, see Dowd (1999).

19 Such strategies are explained in more detail in, e.g., Kuruc and Lee (1998) and Dowd (1999).

Box 1.2 What Exactly is VaR?

The term VaR can be used in one of four different ways, depending on the particular context:

1. In its most literal sense, VaR refers to a particular amount of money, the maximum amount we are likely to lose over some period, at a specific confidence level.

2. There is a VaR estimation procedure, a numerical, statistical or mathematical procedure to produce VaR figures. A VaR procedure is what produces VaR numbers.

3. We can also talk of a VaR methodology, a procedure or set of procedures that can be used to produce VaR figures, but can also be used to estimate other risks as well. VaR methodologies can be used to estimate other amounts at risk — such as credit at risk and cash flow at risk — as well as values at risk.

4. Looking beyond measurement issues, we can also talk of a distinctive VaR approach to risk management. This refers to how we use VaR figures, how we restructure the company to produce them, and how we deal with various associated risk management issues (e.g., how we adjust remuneration for risks taken, etc.).

1.3.3 Criticisms of VaR

Most risk practitioners embraced VaR with varying degrees of enthusiasm, and most of the debate over VaR dealt with the relative merits of different VaR systems — the pros and cons of RiskMetrics, of parametric approaches relative to historical simulation approaches, and so on. However, there were also those who warned that VaR had deeper problems and could be dangerous.

A key issue was the validity or otherwise of the statistical and other assumptions underlying VaR, and both Nassim Taleb20 (1997a,b) and Richard Hoppe (1998, 1999) were very critical of the naïve transfer of mathematical and statistical models from the physical sciences, where they were well suited, to social systems where they were often invalid. Such applications often ignore important features of social systems — the ways in which intelligent agents learn and react to their environment, the non-stationarity and dynamic interdependence of many market processes, and so forth — features that undermine the plausibility of many models and leave VaR estimates wide open to major errors. A good example of this problem is suggested by Hoppe (1999, p. 1): Long Term Capital Management (LTCM) had a risk model that suggested the loss it suffered in the summer and autumn of 1998 was 14 times the standard deviation of its P/L, and a 14-sigma event shouldn’t occur once in the entire history of the universe. So either LTCM was incredibly unlucky or it had a very poor risk measurement model: take your pick.
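Hoppe’s arithmetic is easy to check under the normality assumption such a model implicitly relies on; a quick calculation of how often a 14-sigma daily loss ‘should’ occur:

```python
import math

# One-tailed probability of a 14-standard-deviation move under normality.
p = 0.5 * math.erfc(14 / math.sqrt(2))           # roughly 8e-45 per day
years = 1 / (p * 250)                            # expected wait, in 250-day years
print(f"P(14-sigma loss) = {p:.1e} per day")
print(f"Expected once every {years:.1e} years")  # ~5e41 years, far beyond the
                                                 # ~1.4e10-year age of the universe
```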

A related argument was that VaR estimates are too imprecise to be of much use, and empirical evidence presented by Tanya Beder (1995a) and others in this regard is very worrying, as it suggests that different VaR models can give very different VaR estimates. To make matters worse, work by Marshall and Siegel (1997) showed that VaR models are exposed to considerable implementation risk as well — so even theoretically similar models could give quite different VaR estimates because of the differences in the ways in which the models are implemented. It is therefore difficult for VaR advocates to deny that VaR estimates can be very imprecise.

20 Taleb was also critical of the tendency of some VaR proponents to overstate the usefulness of VaR. He was particularly dismissive of Philippe Jorion’s (1997) claim that VaR might have prevented disasters such as Orange County. Taleb’s response was that these disasters had other causes — especially, excessive leverage. As he put it, a Wall Street clerk would have picked up these excesses with an abacus, and VaR defenders overlook the point that there are simpler and more reliable risk measures than VaR (Taleb (1997b)). Taleb is clearly right: any simple duration analysis should have revealed the rough magnitude of Orange County’s interest-rate exposure. So the problem was not the absence of VaR, as such, but the absence of any basic risk measurement at all. Similar criticisms of VaR were also made by Culp et al. (1997): they (correctly) point out that the key issue is not how VaR is measured, but how it is used; they also point out that VaR measures would have been of limited use in averting these disasters, and might actually have been misleading in some cases.

The danger here is obvious: if VaR estimates are too inaccurate and users take them seriously, they could take on much bigger risks and lose much more than they had bargained for. As Hoppe put it, ‘believing a spuriously precise estimate of risk is worse than admitting the irreducible unreliability of one’s estimate. False certainty is more dangerous than acknowledged ignorance’ (Hoppe (1998, p. 50)). Taleb put the same point a different way: ‘You’re worse off relying on misleading information than on not having any information at all. If you give a pilot an altimeter that is sometimes defective he will crash the plane. Give him nothing and he will look out the window’ (Taleb (1997a, p. 37)).

These are serious criticisms, and they are not easy to counter.

Another problem was pointed out by Ju and Pearson (1999): if VaR measures are used to control or remunerate risk taking, traders will have an incentive to seek out positions where risk is over- or underestimated and trade them. They will therefore take on more risk than suggested by VaR estimates — so our VaR estimates will be biased downwards — and their empirical evidence suggests that the magnitude of these underestimates can be very substantial.

Others suggested that the use of VaR might destabilise the financial system. Thus, Taleb (1997a) pointed out that VaR players are dynamic hedgers, and need to revise their positions in the face of changes in market prices. If everyone uses VaR, there is then a danger that this hedging behaviour will make uncorrelated risks become very correlated — and firms will bear much greater risk than their VaR models might suggest. Taleb’s argument is all the more convincing because he wrote this before the summer 1998 financial crisis, in which this sort of problem was widely observed. Similarly, Danielsson (2001), Danielsson and Zigrand (2001), Danielsson et al. (2001) and Basak and Shapiro (2001) suggested good reasons to believe that poorly thought through regulatory VaR constraints could destabilise the financial system by inducing banks to increase their risk taking: for example, a VaR cap can give risk managers an incentive to protect themselves against mild losses, but not against larger ones.

VaR risk measures are also open to criticism from a very different direction. Even if one grants the usefulness of risk measures based on the lower tail of a probability density function, there is still the question of whether VaR is the best tail-based risk measure, and it is now clear that it is not.

In some important theoretical work in the mid to late 1990s, Artzner, Delbaen, Eber and Heath examined this issue by setting out the axioms that a ‘good’ (or, in their terms, coherent) risk measure should satisfy. They then looked for risk measures that satisfied these coherence properties, and found that VaR did not satisfy them. It turns out that the VaR measure has various problems, but perhaps the most striking of these is its failure to satisfy the property of sub-additivity — namely, we cannot guarantee that the VaR of a combined position will not be greater than the sum of the VaRs of the constituent positions considered individually. The risk of the sum, as measured by VaR, might be greater than the sum of the risks. We will have more to say on these issues in the next chapter, but suffice it for the moment to say that this is a serious drawback. Fortunately, there are other tail-based risk measures that satisfy the coherence properties — most notably the expected tail loss (ETL), the expected value of losses exceeding VaR. The ETL is thus demonstrably superior to the VaR, but many of the other criticisms made of VaR apply to the ETL as well, so risk measurers must still proceed with great caution.
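The sub-additivity failure, and the better behaviour of the ETL, are easiest to see in a stylised example (assumed here, not taken from the text): two independent positions that each lose 100 with probability 4% and nothing otherwise. At the 95% level each position on its own has zero VaR, yet the combined position does not:

```python
# Stylised demonstration (assumed numbers): two independent positions,
# each losing 100 with probability 0.04 and nothing otherwise.
p, loss, cl = 0.04, 100.0, 0.95
tail = 1 - cl                                  # the 5% tail

# Each position alone: P(loss > 0) = 4% < 5%, so its 95% VaR is zero.
var_single = 0.0

# Combined: P(at least one loss) = 1 - (1 - p)**2 = 7.84% > 5%,
# so the combined 95% VaR is 100, exceeding the sum of individual VaRs (0).
p_both = p * p                                 # both lose: total loss 200
p_one = 2 * p * (1 - p)                        # exactly one loses: loss 100
assert p_both + p_one > tail                   # hence the combined VaR is 100
var_combined = loss

# ETL (average loss in the worst 5% of outcomes) is sub-additive here:
etl_single = p * loss / tail                                     # 80 each
etl_combined = (p_both * 2 * loss + (tail - p_both) * loss) / tail
print(f"VaR: {2 * var_single:.1f} (sum) vs {var_combined:.1f} (combined)")
print(f"ETL: {2 * etl_single:.1f} (sum) vs {etl_combined:.1f} (combined)")
```

The combined VaR (100) exceeds the sum of the individual VaRs (0), violating sub-additivity, while the combined ETL (103.2) stays below the sum of the individual ETLs (160).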