Consequently, Linsmeier and Pearson continue, "Perhaps the best answer begins: 'Value at risk is...'". Recipients can also specify the horizon period: the next day, week, month, quarter, and so on.
INTENDED READERSHIP
ETL is the loss we can expect to suffer, given that we suffer a loss in excess of VaR. Fortunately, it turns out that we can always estimate ETL if we can estimate VaR.
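As a minimal sketch of this point (my own illustration, not the book's MATLAB code): once we can locate the VaR in a sample of losses, the ETL is just the average of the losses beyond it.

```python
import random

random.seed(42)
# Simulated loss data (positive = loss); any VaR estimator would do here.
losses = sorted(random.gauss(0, 1) for _ in range(100_000))

cl = 0.95
var = losses[int(cl * len(losses))]    # the 95th-percentile loss is the VaR
tail = [x for x in losses if x > var]
etl = sum(tail) / len(tail)            # ETL: expected loss given loss > VaR
```

For standard normal losses the 95% VaR is about 1.645 and the ETL about 2.06; the ETL always exceeds the VaR.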
USING THIS BOOK
My advice to those who may use the book for teaching purposes is the same: cover the tools first, then do the risk assessment. The user must copy the An Introduction to Market Risk Measurement (IMRM) folder to his or her MATLAB work folder and activate the path to the IMRM folder thus created (so that MATLAB knows the folder is there).
OUTLINE OF THE BOOK
CONTRIBUTORY FACTORS
- A Volatile Environment
 - Growth in Trading Activity
 - Advances in Information Technology
 
One factor behind the rapid development of risk management was the high level of volatility in the economic environment within which firms operated. A second was the enormous growth in trading activity. A third was the rapid advance in the state of information technology.
RISK MEASUREMENT BEFORE VAR
- Gap Analysis
 - Duration Analysis
 - Scenario Analysis
 - Portfolio Theory
 - Derivatives Risk Measures
 
Duration-based hedges are therefore inaccurate against yield changes that involve shifts in the slope of the yield curve. The moral of the story is that the extent to which a new asset contributes to portfolio risk depends on its correlation with the existing portfolio, not just on its own volatility.
VALUE AT RISK
- The Origin and Development of VaR
 - Attractions of VaR
 - Criticisms of VaR
 
VaR is simply a way of describing the magnitude of potential losses in the portfolio. Another characteristic of VaR is that it takes into account the correlations between different risk factors.
RECOMMENDED READING
They will therefore take on more risk than the VaR estimates suggest – hence our VaR estimates will be biased downward – and their empirical evidence suggests that the magnitude of these underestimates can be very large. The risk of the sum, as measured by VaR, can be greater than the sum of the risks.
THE MEAN–VARIANCE FRAMEWORK FOR MEASURING FINANCIAL RISK
- The Normality Assumption
 - Limitations of the Normality Assumption
 - Traditional Approaches to Financial Risk Measurement
 - Portfolio Theory
 
Equation (2.2b) is the normal quantile corresponding to the confidence level cl (i.e., the lowest value we can expect at the stated confidence level) and allows us to answer quantile questions. The third advantage of the normal distribution is that it requires estimates of only two parameters – the mean and the standard deviation (or variance) – because it is completely described by these two parameters alone.
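A short sketch of how this two-parameter property translates into practice (an illustration of the standard normal-VaR formula, not code from the book):

```python
from statistics import NormalDist

def normal_var(mu, sigma, cl):
    """VaR for normally distributed P/L with mean mu and std dev sigma:
    the negative of the lowest P/L expected at confidence level cl."""
    z = NormalDist().inv_cdf(cl)   # normal quantile, about 1.645 at cl = 0.95
    return -mu + z * sigma

print(round(normal_var(0.0, 1.0, 0.95), 3))   # -> 1.645
```

A higher mean P/L lowers the VaR, and a higher standard deviation raises it.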
VALUE AT RISK
- VaR Basics
 - Choice of VaR Parameters
 - Limitations of VaR as a Risk Measure
 
The VaR of the diversified portfolio is therefore much greater than the VaR of the undiversified one. The VaR of the combined position is therefore greater than the sum of the VaRs of the individual positions; and the VaR is not sub-additive (Table 2.1).
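The failure of sub-additivity is easy to reproduce with a toy example (the numbers here are assumed, not those of Table 2.1): two independent defaultable bonds, each with a default probability just below the VaR tail probability.

```python
from itertools import product

# Two independent defaultable bonds: each loses 100 with probability 0.04,
# otherwise nothing.  (Illustrative numbers only.)
p_default, loss_given_default, cl = 0.04, 100, 0.95

def var(outcomes, cl):
    """Smallest loss x with P(loss <= x) >= cl, from (loss, prob) pairs."""
    cum = 0.0
    for loss, prob in sorted(outcomes):
        cum += prob
        if cum >= cl:
            return loss

single = [(0, 1 - p_default), (loss_given_default, p_default)]
combined = [(l1 + l2, p1 * p2)
            for (l1, p1), (l2, p2) in product(single, single)]

print(var(single, cl))    # -> 0   : each bond alone has zero 95% VaR
print(var(combined, cl))  # -> 100 : the two together have positive VaR
```

Each position on its own reports no risk at the 95% level, yet the combined position does: the VaR of the sum exceeds the sum of the VaRs.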
EXPECTED TAIL LOSS
- Coherent Risk Measures
 - The Expected Tail Loss
 
This ETL can also be viewed as a coherent measure of risk associated with a given distribution function. The ETL therefore enjoys the various advantages of sub-additivity, while the VaR does not.
CONCLUSIONS
The prices of the securities in each scenario are then generated by a suitable model (e.g., the Black model) and the risk measure is the maximum loss incurred, using the full loss for the first 14 scenarios and 35% of the loss for the last two extreme scenarios. This risk measure can be interpreted as the maximum expected loss under each of 16 different probability measures, and is therefore a coherent measure of risk.
RECOMMENDED READING
DATA
- Profit/Loss Data
 - Loss/Profit Data
 - Arithmetic Returns Data
 - Geometric Returns Data
 
When using arithmetic returns, we implicitly assume that the intermediate payment Dt does not earn its own return. One answer is that the difference between the two will be negligible if the returns are small, and the returns will be small if we are dealing with a short time horizon.
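A quick numerical check of this point, with assumed prices and an assumed intermediate payment:

```python
import math

# Arithmetic return: r_a = (P_t + D_t - P_{t-1}) / P_{t-1}
# Geometric return:  r_g = ln((P_t + D_t) / P_{t-1})
# (P = asset price, D = intermediate payment, as in the text.)
p0, p1, d = 100.0, 101.0, 0.5   # assumed values
r_arith = (p1 + d - p0) / p0
r_geom = math.log((p1 + d) / p0)

print(round(r_arith, 5))  # -> 0.015
print(round(r_geom, 5))   # -> 0.01489  (nearly equal for small returns)
```

Over a short horizon the two return measures differ only in the third decimal place here, which is why the choice between them often matters little in practice.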
ESTIMATING HISTORICAL SIMULATION VAR
With arithmetic returns, a very low realised return – or a very high loss – implies that the asset value Pt becomes negative, and a negative asset price seldom makes economic sense; a very low geometric return, by contrast, implies that the asset price falls towards zero but remains positive. So, if our data are in an array called 'Loss data', our VaR is given by the Excel command 'LARGE(Loss data,6)'.
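The same order-statistic logic carries over directly to any language. A sketch in Python, assuming (as the Excel example appears to) around 500 observations and a confidence level that puts 5 losses in the tail, so that the VaR is the 6th-largest loss:

```python
import random

random.seed(1)
# Simulated loss data standing in for the 'Loss data' array (assumed).
loss_data = [random.gauss(0, 1) for _ in range(500)]

k = 6   # with 5 tail losses excluded, the VaR is the 6th-largest loss
var_hs = sorted(loss_data, reverse=True)[k - 1]   # Python analogue of LARGE(., 6)
```

The historical-simulation VaR is thus nothing more than an order statistic of the loss sample.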
ESTIMATING PARAMETRIC VAR
- Estimating VaR with Normally Distributed Profits/Losses
 - Estimating VaR with Normally Distributed Arithmetic Returns
 - Estimating Lognormal VaR
 
Since the data are in P/L form, the VaR is indicated by the negative of the cut-off point between the lower 5% and the upper 95% of P/L observations. With data in L/P form, the VaR is given by the cut-off point between the upper 5% and the lower 95% of L/P observations.
ESTIMATING EXPECTED TAIL LOSS
To give an idea of what this might involve, Table 3.2 reports some alternative ETL estimates obtained using this procedure, with varying values of n. These results show that the estimated ETL rises with n and gradually converges to a value close to 2.062.
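One way to operationalise this 'average of tail VaRs' procedure for a standard normal distribution (the slicing scheme here is an assumption, not necessarily the one behind Table 3.2):

```python
from statistics import NormalDist

def normal_etl_approx(cl, n):
    """Approximate the standard normal ETL as the average of n 'tail
    VaRs': quantiles spaced evenly across the tail beyond cl."""
    nd = NormalDist()
    step = (1 - cl) / n
    return sum(nd.inv_cdf(cl + (i + 0.5) * step) for i in range(n)) / n

for n in (10, 100, 1000):
    print(n, round(normal_etl_approx(0.95, n), 4))
```

As n grows, the estimates climb towards the true 95% ETL of roughly 2.06, mirroring the convergence reported in the text.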
SUMMARY
To avoid these problems, we need to ensure that our risk factors are not too closely related, and this requires us to select an appropriate set of risk factors and map our instruments onto them accordingly. The final stage is to calculate VaR and/or ETL using the mapped instruments (i.e., the synthetic substitutes) instead of the actual instruments we hold.
A3.1 SELECTING CORE INSTRUMENTS OR FACTORS
Furthermore, since the principal components are independent of each other, the only non-zero elements of their variance–covariance matrix will be the principal components' volatilities, further reducing the number of parameters we would need. If we were to use principal components analysis, we could probably account for a huge proportion of bond price movements with just three principal components, and the only variance–covariance parameters needed would be the volatilities of those three principal components.
A3.2 MAPPING POSITIONS AND VAR ESTIMATION
Basically, we map equity risk to the market index and use equation (A3.5) to estimate the VaR of the mapped equity return. Once our bond has been mapped, we estimate its VaR by estimating the VaR of the mapped bond (i.e., the portfolio of zeros at 5 and 7 years, with weights ω and 1−ω).
A3.3 RECOMMENDED READING
COMPILING HISTORICAL SIMULATION DATA
The first task is to construct an appropriate P/L series for our portfolio, and this requires a set of historical P/L or return observations on the positions in our current portfolio. The historical simulation P/L is not the P/L we actually earned, but the P/L we would have earned on our current portfolio had we held it throughout the historical sample period.
ESTIMATION OF HISTORICAL SIMULATION VAR AND ETL
- Basic Historical Simulation
 - Estimating Curves and Surfaces for VaR and ETL
 
It is more difficult to construct curves that show how VaR or ETL changes with holding period. In short, there is no theoretical problem per se with estimating HS VaR or ETL over any investment period.
ESTIMATING CONFIDENCE INTERVALS FOR HISTORICAL SIMULATION VAR AND ETL
- A Quantile Standard Error Approach to the Estimation of Confidence Intervals for HS VaR and ETL
 - An Order Statistics Approach to the Estimation of Confidence Intervals for HS VaR and ETL
 - A Bootstrap Approach to the Estimation of Confidence Intervals for HS VaR and ETL
 
We then generate a large number of such datasets and estimate the VaR of each. It is also interesting to compare the VaR and ETL confidence intervals obtained by the two methods.
WEIGHTED HISTORICAL SIMULATION
- Age-weighted Historical Simulation
 - Volatility-weighted Historical Simulation
 - Filtered Historical Simulation
 
Therefore, our core information – the information fed into the HS estimation process – is the paired set of P/L values and associated weights w(i), rather than the traditional paired set of P/L values and associated equal weights 1/n. Because λ^(n−1)w(1) is likely to be less than 1/n for any reasonable values of λ and n, the shock – the ghost effect – will be smaller than it would be under equal-weighted HS.
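The age-weighting scheme is easy to sketch: with decay factor λ, the weight on an observation i days old is w(i) = λ^(i−1)(1 − λ)/(1 − λ^n), so the weights sum to one (the values of λ and n below are assumed).

```python
lam, n = 0.98, 250   # assumed decay factor and sample size

# w(i) = lam**(i-1) * (1 - lam) / (1 - lam**n): recent observations get
# more weight than 1/n, old observations less, and the weights sum to one.
w = [lam**(i - 1) * (1 - lam) / (1 - lam**n) for i in range(1, n + 1)]

print(round(w[0], 5), round(w[-1], 5), round(1 / n, 5))
```

The newest observation's weight exceeds the equal weight 1/n and the oldest observation's weight falls below it, which is exactly what dampens the ghost effect when an extreme observation finally drops out of the sample.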
ADVANTAGES AND DISADVANTAGES OF HISTORICAL SIMULATION
- Advantages
 - Disadvantages
 
For example, if there is a permanent change in exchange rate risk, it will usually take time for the HS VaR or ETL estimates to reflect the new level of risk. In the most sophisticated versions of HS, such as FHS, or those proposed by HW, Duffie and Pan, or Holton (see Box 4.1), this constraint takes a softer form, and in periods of high volatility (or, depending on the approach used, high correlation, etc.) we can get VaR or ETL estimates that exceed our largest historical loss.
Fortunately, we can deal with most of these problems (except the last) by suitable modifications of HS, especially by age weighting to deal with distortions, ghost and novelty effects. However, we can estimate these confidence intervals by supplementing PCA or FA methods with order statistics or bootstrap approaches.
CONCLUSIONS
These include the functions 'hsvar' and 'hsetl', which estimate HS VaR and ETL; 'hsvardfperc' and 'hsetldfperc', which estimate percentiles from the VaR and ETL distribution functions using the theory of order statistics; and 'hsvarfigure' and 'hsetlfigure', which produce figures showing HS VaR and ETL. The IMRM Toolbox also includes the quantile standard error functions discussed in Section 4.3.1 and the bootstrap VaR and ETL functions discussed in Section 4.3.3.
RECOMMENDED READING
They are also very easy to use, as they give rise to simple VaR and sometimes ETL formulas. When estimating VaR and ETL, we should always start with the question of whether and – if so – how we should adjust our data, and there is almost always a reason why we would want to do so.
NORMAL VAR AND ETL
- General Features
 - Disadvantages of Normality
 
If our VaR over a 1-day holding period is VaR(1, cl), then the VaR over a holding period of hp days, VaR(hp, cl), is given by the square-root rule: VaR(hp, cl) = √hp × VaR(1, cl). It therefore never turns down, and the VaR surface rises as the confidence level and holding period approach their maximum values.
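For zero-mean normal P/L this square-root rule is a one-liner (a sketch with an assumed daily volatility):

```python
import math
from statistics import NormalDist

def normal_var_hp(sigma_daily, cl, hp):
    """Zero-mean normal VaR over hp days via the square-root-of-time rule."""
    return NormalDist().inv_cdf(cl) * sigma_daily * math.sqrt(hp)

one_day = normal_var_hp(0.01, 0.95, 1)   # assumed 1% daily volatility
ten_day = normal_var_hp(0.01, 0.95, 10)
print(round(ten_day / one_day, 4))   # -> 3.1623, i.e. sqrt(10)
```

VaR grows with the square root of the holding period, which is why the surface keeps rising rather than turning down in the zero-mean case.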
THE STUDENT t-DISTRIBUTION
If our distribution is not normal, but the departures from normality are 'small', we can approximate our non-normal distribution using the Cornish–Fisher expansion, which tells us how to adjust the standard normal variate to accommodate non-normal skewness and kurtosis. When using the Cornish–Fisher approximation, we must bear in mind that it provides a 'good' approximation only if our distribution is 'close' to the normal, and we cannot expect it to be of much use if our distribution is strongly non-normal.
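A sketch of the expansion itself (the standard fourth-moment Cornish–Fisher adjustment; the skewness and excess kurtosis values fed in below are assumed):

```python
from statistics import NormalDist

def cornish_fisher_z(cl, skew, excess_kurt):
    """Adjust the standard normal quantile for skewness and excess
    kurtosis using the Cornish-Fisher expansion."""
    z = NormalDist().inv_cdf(cl)
    return (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * excess_kurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)

# With zero skew and zero excess kurtosis we recover the normal quantile;
# fat tails (positive excess kurtosis) raise the far-tail 99% quantile.
print(round(cornish_fisher_z(0.99, 0.0, 1.0), 3))
```

The adjusted quantile then replaces the standard normal quantile in the usual parametric VaR formula.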
THE LOGNORMAL DISTRIBUTION
The lognormal VaR was illustrated earlier in Figure 3.5, and the typical lognormal VaR surface – that is, the VaR surface with positive µR – is shown in Figure 5.6. This is shown in Figure 5.7, which is directly analogous to the normal zero-mean case shown in Figure 2.9.
EXTREME VALUE DISTRIBUTIONS
- The Generalised Extreme Value Distribution
 - The Peaks Over Threshold (Generalised Pareto) Approach
 
The relationship of the location and scale parameters to the mean and variance is explained in Tool No. These decisions will have a critical effect on the VaR and on the shape of the VaR surface.
THE MULTIVARIATE NORMAL VARIANCE–COVARIANCE APPROACH
The explanation is that the shocks to the two returns exactly offset each other, so the portfolio return is certain; the standard deviation of the portfolio return and the VaR are therefore both zero. Apart from the special case where ρ = 1 – in which case σp is always σ – diversification reduces the portfolio standard deviation.
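A two-asset sketch of this point (the weights, volatilities and correlations are assumed):

```python
import math

def portfolio_sigma(w1, w2, s1, s2, rho):
    """Two-asset portfolio standard deviation under the
    variance-covariance approach."""
    variance = (w1 * s1)**2 + (w2 * s2)**2 + 2 * rho * w1 * w2 * s1 * s2
    return math.sqrt(max(variance, 0.0))   # guard against float round-off

print(portfolio_sigma(0.5, 0.5, 0.2, 0.2, 1.0))    # rho = +1: no diversification
print(portfolio_sigma(0.5, 0.5, 0.2, 0.2, -1.0))   # rho = -1: risk vanishes
```

With ρ = +1 the portfolio standard deviation equals the single-asset σ of 0.2; with ρ = −1 the two positions offset exactly and both the standard deviation and the VaR are zero.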
CONCLUSIONS
The curves in the figure are the standard deviations of the portfolio taking account of diversification divided by the standard deviation of the portfolio ignoring diversification (i.e., the standard deviation we would get if we invested entirely in one asset). Furthermore, given the parameters assumed in Figure 5.10, these same curves also give us the ratio of diversified to undiversified VaR – the VaR with diversification taken into account divided by the VaR without it (i.e., the VaR we would get if we invested in only one asset).
RECOMMENDED READING
Taking normal VaR as an example, we have the functions 'normalvar', 'normalvarconfidenceinterval', 'normalvardfperc', 'normalvarfigure', 'normalvarplot2D cl', 'normalvarplot2D hp', and so on. The IMRM Toolbox also has the functions 'cornishfishervar' and 'cornishfisheretl', which estimate VaR and ETL using the Cornish–Fisher expansion.
A5.1 DELTA–NORMAL APPROACHES
As a result, the price/volatility behaviour of a callable or convertible bond can be quite different from that of a corresponding 'straight' bond. Zero-arbitrage ensures that the price of any instrument with an embedded option equals the price of the corresponding straight instrument plus or minus the price of the embedded option.
A5.2 DELTA–GAMMA APPROACHES
A good example is the option position just considered, since the delta–gamma estimate of VaR is actually worse than the delta–normal one. Equation (A5.7) implies that the delta–gamma–normal procedure gives an estimate of VaR that is even higher than the delta–normal estimate, and the delta–normal estimate is already too high.
A5.3 CONCLUSIONS
This method also gives the portfolio's marginal VaRs, and is therefore very useful for risk decomposition. The second-order approximation approach used to handle nonlinearity in option positions can also be used to handle nonlinearity in bonds.
A5.4 RECOMMENDED READING
OPTIONS VAR AND ETL
- Preliminary Considerations
 - Refining MCS Estimation of Options VaR and ETL
 
It is therefore important to use all available refinements – control variates, stratified sampling or whatever other refinements are appropriate, together with good choices of M and N – to speed up the calculations and/or reduce the number of calculations required. To illustrate, suppose we want to estimate the VaR and ETL of an American put option.
- Basic Principal Components Simulation
 - Scenario Simulation
 
These findings suggest that we may want to focus on the first three principal components. Jamshidian and Zhu suggest that we can allow seven states for the first component, five for the second, and three for the third.
FIXED-INCOME VAR AND ETL
- General Considerations
 - A General Approach to Fixed-income VaR and ETL
 
So we need a projected future term structure to price our instruments at the end of the holding period so that we can then estimate future earnings and thereby estimate VaR and ETL. For convenience, we assume that the term structure is unchanged at 5%, so all spot rates are 5%.
ESTIMATING VAR AND ETL UNDER A DYNAMIC PORTFOLIO STRATEGY
We also assume that when the value of the portfolio rises (or falls), α% of the increase (or decrease) in portfolio value is invested in (or withdrawn from) the risky asset. The effect of the filter rule strategy on VaR is shown in Figure 6.2, which plots an illustrative VaR against different values of the participation rate α.
ESTIMATING CREDIT-RELATED RISKS WITH SIMULATION METHODS
We also need to consider the rate of recovery and how this may change over time. We can examine the impact of recovery rate by plotting VaR against a range of recovery rates, as in Figure 6.3.
ESTIMATING INSURANCE RISKS WITH SIMULATION METHODS
We then adjust losses for the deductible and pricing policy, and determine the VaR and ETL from the sample of adjusted losses. A parameterised example is given in Figure 6.4, which shows a histogram of simulated L/P values and the associated VaR and ETL estimates – in this case, the L/P histogram has a long right-hand tail (which is in fact lognormal), and the VaR and ETL are 17.786 and 24.601 respectively.
ESTIMATING PENSIONS RISKS WITH SIMULATION METHODS
- Estimating Risks of Defined-benefit Pension Plans
 - Estimating Risks of Defined-contribution Pension Plans
 
In this context, pension risk is the risk that the pension provider's assets do not meet its obligations. The programming strategy is to model the pension fund's terminal value during each simulation.
CONCLUSIONS
Running these calculations, we find that the pension ratio has a sample mean of 0.983 and a sample standard deviation of 0.406 – which should indicate that DC schemes can be very risky, even before we carry out any VaR analysis. The pension VaR – the likely worst pension outcome at the relevant (in this case, 95%) confidence level – is 0.501, which suggests we can be 95% confident that the pension ratio will not fall below 0.501.
RECOMMENDED READING
For example, if we have a portfolio made up of specific positions, the portfolio VaR can be broken down into components, known as component VaRs or CVaRs, which tell us how much each position contributes to the overall portfolio VaR. Measures of IVaR can be used as a tool in risk–return decision-making (e.g., we can use IVaRs to determine the required returns on prospective investments; see, e.g., Dowd (1998a, ch. 8)) and to set position limits (see, e.g., Garman (1996b)).
INCREMENTAL VAR
- Interpreting Incremental VaR
 - Estimating IVaR by Brute Force: The ‘Before and After’ Approach
 
If the candidate trade is 'small' relative to our portfolio, we can approximate the VaR of the new portfolio (i.e., VaR(p+a)) by a first-order approximation based on the portfolio's delVaRs. Once we have the portfolio's VaR and delVaRs, we can take any candidate trade, map the trade, and use the mapped trade and the delVaRs to estimate the IVaR associated with that trade.
COMPONENT VAR
- Properties of Component VaR
 - Uses of Component VaR
 - ‘Drill-down’ Capability
 
This is restrictive because it implies that the component VaR is proportional to the position size: if we change the size of the position by k%, the component VaR also changes by k%. However, it also allows us to break our risks down over multiple levels, and at each level the component risks will correctly add up to the total risk of the entity at the next level up.
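Under normality the additivity property is easy to verify numerically (the positions and covariance matrix below are assumed): component VaRs are position sizes times marginal VaRs, and by Euler's theorem they sum exactly to the portfolio VaR.

```python
import math
from statistics import NormalDist

x = [100.0, 50.0]                       # assumed position sizes
cov = [[0.04, 0.006], [0.006, 0.01]]    # assumed return covariance matrix

z = NormalDist().inv_cdf(0.95)
cov_x = [sum(cov[i][j] * x[j] for j in range(2)) for i in range(2)]
port_sd = math.sqrt(sum(x[i] * cov_x[i] for i in range(2)))
port_var = z * port_sd                  # portfolio VaR

# Component VaR: position size times marginal VaR (dVaR/dx_i).
cvar = [z * x[i] * cov_x[i] / port_sd for i in range(2)]
print(round(sum(cvar), 6) == round(port_var, 6))   # components add up
```

Scaling a position by k% scales its component VaR by k%, and summing the components at any level of aggregation recovers the total VaR at the level above.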
CONCLUSIONS
Comparing implied views about returns with actual views is a useful tool for understanding how portfolios can be improved. A good example, suggested by Litterman (1997b, p. 41), occurs in large financial institutions whose portfolios are influenced by large numbers of traders operating in different markets: at the end of each day, the implied views of the portfolio are estimated and compared with the actual views of, say, in-house forecasters.
RECOMMENDED READING
Best replicating portfolios are very useful for identifying macro portfolio hedges – hedges against the portfolio as a whole. Any differences between actual and implied views can then be eliminated by taking positions that bring the implied views into line with the actual ones.
LIQUIDITY AND LIQUIDITY RISKS
The bid-ask spread also has an associated risk because the spread itself is a random variable. However, if our position is large relative to the market, our activities will have a noticeable effect on the market itself, and can affect both the 'market' price and the bid-ask spread.
ESTIMATING LIQUIDITY-ADJUSTED VAR AND ETL
- A Transactions Cost Approach
 - The Exogenous Spread Approach
 - The Market Price Response Approach
 - Derivatives Pricing Approaches
 - The Liquidity Discount Approach
 - A Summary and Comparison of Alternative Approaches
 
The second term gives the effect of the bid–ask spread on transaction costs, applied to the amount liquidated at the end of the holding period. In such cases, they suggest that we think of liquidity risk in terms of the bid–ask spread and its volatility.
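A sketch in the spirit of the exogenous-spread approach (all parameter values below are assumed): add half of a 'worst-case' relative spread, its mean plus a multiple of its volatility, to the ordinary VaR.

```python
from statistics import NormalDist

position = 1_000_000                  # assumed position value
sigma_ret = 0.02                      # assumed daily return volatility
spread_mu, spread_sd = 0.01, 0.005    # assumed relative spread: mean, vol
k = 3                                 # assumed spread-volatility multiplier
cl = 0.99

var = NormalDist().inv_cdf(cl) * sigma_ret * position   # ordinary VaR
liq_cost = 0.5 * (spread_mu + k * spread_sd) * position # worst-case half-spread
lvar = var + liq_cost                                   # liquidity-adjusted VaR
```

Because the spread term is a cost, not an offset, the liquidity adjustment can only raise the VaR, and it matters most for positions in instruments with wide or volatile spreads.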
ESTIMATING LIQUIDITY AT RISK (LAR)
The second point is that positions that may be similar from a market risk perspective (e.g., a futures hedge and an options hedge) can have very different cash flow risks. Which approach we choose will depend on the types of cash flow risks we have to face.
ESTIMATING LIQUIDITY IN CRISES
In such circumstances, it is often better to build a model for measuring liquidity risk from scratch, and we can begin by setting out the basic types of cash flow to consider. They suggest that we use a method of estimating worst-case losses – they actually suggest a Greek-based approach – and then derive the cash flows by applying margin requirements to the losses involved.
RECOMMENDED READING
However, as with all scenario analysis, the results of these exercises are highly subjective, and the value of the results is critically dependent on the quality of the assumptions made. Backtesting is a critical part of the risk measurement process, as we rely on it to give us an indication of any problems with our risk measurement models (eg, such as misspecification, underestimation of risks, etc.).
PRELIMINARY DATA ISSUES
- Obtaining Data
 
Given the number of observations (200) and the VaR confidence levels, we would expect 10 positive and 10 negative exceptions, and in reality we get 10 positive and 17 negative exceptions. The number of negative exceptions (or tail losses) is well above what we would expect, and the risk expert would do well to investigate this further.
STATISTICAL BACKTESTS BASED ON THE FREQUENCY OF TAIL LOSSES
- The Basic Frequency-of-tail-losses (or Kupiec) Test
 - The Time-to-first-tail-loss Test
 - A Tail-Loss Confidence-interval Test
 - The Conditional Backtesting (Christoffersen) Approach
 
The Kupiec test, however, tells us little about the pattern of tail losses over time. We can construct a confidence interval for the number of tail losses using the inverse of the binomial distribution (e.g., using the 'binofit' function in MATLAB).
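A stdlib stand-in for that calculation (mirroring the logic of 'binofit', though the function below is my own): given n observations and tail probability p, find the central range for the number of tail losses.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def tail_loss_interval(n, p, conf=0.95):
    """Central confidence interval for the number of tail losses among
    n observations when each is a tail loss with probability p."""
    alpha = (1 - conf) / 2
    lo = min(k for k in range(n + 1) if binom_cdf(k, n, p) >= alpha)
    hi = min(k for k in range(n + 1) if binom_cdf(k, n, p) >= 1 - alpha)
    return lo, hi

lo, hi = tail_loss_interval(200, 0.05)   # 200 observations, 5% tail probability
print(lo, hi)
```

An observed tail-loss count outside this range is evidence against the model: for instance, the 17 negative exceptions in the earlier example would lie above the upper bound here.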
STATISTICAL BACKTESTS BASED ON THE SIZES OF TAIL LOSSES
- The Basic Sizes-of-tail-losses Test
 - The Crnkovic–Drachman Backtest Procedure
 - The Berkowitz Approach
 
The main difference between these tests is that the tail loss size test considers loss sizes that exceed VaR, while the Kupiec test does not. This means that the tail loss size test can be considered as a generalization of the (first) CD test.
FORECAST EVALUATION APPROACHES TO BACKTESTING
- Basic Ideas
 - The Frequency-of-tail-losses (Lopez I) Approach
 - The Size-adjusted Frequency (Lopez II) Approach
 - The Blanco-Ihle Approach
 - An Alternative Sizes-of-tail-losses Approach
 
This 'Lopez I' loss function is intended for the user concerned (solely) with the frequency of tail losses. This loss function gives each tail loss observation a weight equal to the tail loss divided by the VaR.
OTHER METHODS OF COMPARING MODELS
ASSESSING THE ACCURACY OF BACKTEST RESULTS
In this case, we can be at least 95% sure that the true prob value is above the critical level, and we can pass the model with confidence. In this case, we can be at least 95% sure that the true prob value is below the critical level, and we can confidently reject the model.
BACKTESTING WITH ALTERNATIVE CONFIDENCE LEVELS, POSITIONS AND DATA
- Backtesting with Alternative Confidence Levels
 - Backtesting with Alternative Positions
 - Backtesting with Alternative Data
 
If we take a large number of such samples, we can get an idea of the distribution of these prob-value estimates. We can also backtest at any level of the firm we want, down to the level of the individual trader or asset manager, or the individual instruments held.
SUMMARY
Therefore, we need to bootstrap our backtests to get a better idea of the distribution of the prob-value estimates. The IMRM Toolbox also includes functions for a modified version of the Crnkovic–Drachman test assuming normal P/L, 'modifiednormalCDbacktest'; the Kupiec backtest, 'kupiecbacktest'; the Lopez forecast evaluation backtest, 'lopezbacktest'; the sizes-of-tail-losses statistical test for normal P/L, 'normaltaillossesbacktest', discussed in Section 9.3.1; and the sizes-of-tail-losses forecast evaluation backtest of Section 9.4.5, 'taillossFEbacktest'.
RECOMMENDED READING
Stress tests are especially good for quantifying what we could lose in crisis situations. Stress testing is a natural complement to probability-based risk measures such as VaR and ETL.
BENEFITS AND DIFFICULTIES OF STRESS TESTING
- Benefits of Stress Testing
 - Difficulties with Stress Tests
 
However, stress testing is completely dependent on the chosen scenarios, and thus on the judgement and experience of the people carrying out the stress tests. The result of this process can be thought of as a set of P/L figures and their associated probabilities.
SCENARIO ANALYSIS
- Choosing Scenarios
 - Evaluating the Effects of Scenarios
 
We may also want to consider the impact of other factors, such as changes in the slope or shape of the yield curve, a change in correlations, or a change in credit spreads (e.g., a jump or dip in the TED spread). Alternatively, we can focus on the mean of the conditional loss distribution, in which case our stress loss is X1t R1t + X2t Σ21 Σ11^(−1) R1t.
MECHANICAL STRESS TESTING
- Factor Push Analysis
 - Maximum Loss Optimisation
 
The solution to this last problem is to search for losses that occur at intermediate as well as extreme values of the risk variables. If the portfolio consists of straightforward positions, each of which takes its maximum loss at extreme values of the underlying risk factors, then FP and MLO will give exactly the same results, and we may as well use the computationally simpler FP approach.
CONCLUSIONS
We can obtain comparable expressions for multi-option portfolios provided we can model the relationship—and more specifically, the variance-covariance matrix—between the underlying variables.
RECOMMENDED READING
11 Model Risk
MODELS AND MODEL RISK
- Models