

5 Some choices in forecast construction

5.5 Optimization with non-normal return expectations

Forecasting Expected Returns in the Financial Markets

• if they are of equal confidence, the combined effect will be the average of the two independent inputs

• if one is high confidence, it will dominate the low-confidence input

3. If a forecast is missing from the update set of ‘fuzzy’ simultaneous equations, then this value will be filled in from the prior.
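The update rules above follow from the conjugate-normal structure: each input is weighted by its precision (inverse variance). A minimal sketch, with an illustrative function name and numbers not taken from the text:

```python
def combine_views(mean1, sd1, mean2, sd2):
    """Merge two independent normal estimates by precision weighting.

    This is the standard conjugate-normal update: each input is weighted
    by its precision (inverse variance), so confidence controls dominance.
    """
    p1, p2 = 1.0 / sd1 ** 2, 1.0 / sd2 ** 2
    mean = (p1 * mean1 + p2 * mean2) / (p1 + p2)
    sd = (p1 + p2) ** -0.5
    return mean, sd

# Equal confidence: the merged mean is the simple average of the two views.
m_eq, _ = combine_views(0.02, 0.05, 0.04, 0.05)   # -> 0.03

# Unequal confidence: the high-confidence view dominates.
m_hi, _ = combine_views(0.02, 0.001, 0.10, 0.5)   # -> close to 0.02
```

Because the update has the same form at every step, the same function can be applied repeatedly to fold in relative value and factor views, with the prior filling any missing forecasts.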

Hence this is a very convenient way of entering multiple relative value views; however, the same basic analysis can also be used to enter factor views. The key to this is to observe that in equation (5.12) above, x can be allowed to represent the asset return (rather than the relative return) and r to represent the factor return (rather than the asset return). Because of the conjugate prior structure of these equations, we can therefore merge factor views with our asset and relative value views by repeatedly using the same basic equation.

The final type of insight that we wish to incorporate into our mixture of normals approach is views on scenarios and scenario probability. In these models, a series of discrete states are postulated to represent the expected distribution of return. As with the other forecasting techniques, the independence or otherwise of these scenarios needs to be carefully considered and a sufficient number generated to allow any optimization process to identify a realistic risk-return tradeoff. Typically, this requires many more scenarios than there are assets.

As the number of such individual scenarios increases, the task of generating them becomes more tedious. Hence our formulation of stochastic scenarios, where we assume states of the world each with its own probability distribution in multivariate normal form from which a set of individual scenarios can be drawn – i.e. in our formulation, scenarios are the sum of a number of normal distributions with different probability.
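The stochastic-scenario construction can be sketched as sampling from a mixture of multivariate normals: draw a state of the world according to its probability, then draw a scenario from that state's distribution. The states, probabilities and parameters below are illustrative, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical states of the world, each a multivariate normal over
# two asset returns, with its own probability (numbers are illustrative).
probs = np.array([0.7, 0.3])
means = [np.array([0.05, 0.03]), np.array([-0.10, -0.04])]
covs = [np.diag([0.02, 0.01]) ** 2, np.diag([0.06, 0.03]) ** 2]

# Draw a state for each scenario, then a return vector from that state:
n_scen = 1000
states = rng.choice(len(probs), size=n_scen, p=probs)
scenarios = np.array([rng.multivariate_normal(means[k], covs[k]) for k in states])
```

The resulting set of draws is non-normal overall even though each component is normal, which is exactly the mixture-of-normals property the text relies on.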

In this way we have arrived at the general form of our mixture of normals approach, where the resulting non-normal distribution can be represented by a Monte Carlo distribution. When we do this, it is then trivial to see that we can apply arbitrary payoff functions to this distribution to capture the effects of optionality, convertible bonds or structured products. The problem then is to optimize, given this Monte Carlo set, as discussed in the next section.
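Applying a payoff function to the Monte Carlo set is a draw-by-draw transformation. As a hedged sketch, here a call-option payoff is applied to simulated terminal prices; the lognormal price model and all parameters are illustrative stand-ins for draws from the mixture of normals:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo draws of an underlying's end-of-period price (illustrative
# lognormal model; in the text the draws come from the mixture of normals).
s0, strike = 100.0, 105.0
prices = s0 * np.exp(rng.normal(0.02, 0.15, size=10_000))

# An arbitrary payoff function applied draw by draw -- here a call option:
payoffs = np.maximum(prices - strike, 0.0)

# The resulting (highly non-normal) payoff distribution feeds straight
# into the threshold-risk optimization of the next section.
```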


To optimize over non-normal return distributions, you need to consider a threshold measure of risk. The three best known are:

1. Value at risk (VaR), which relates the probability of falling below a return threshold to the level of that return threshold

2. Mean excess loss (also known as conditional value at risk, CVaR), which multiplies that downside probability by the average amount by which the threshold is exceeded

3. Loss gain ratio, which takes that weighted downside value and divides it by the weighted upside.
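All three measures are simple statistics of a set of simulated returns. A minimal sketch, with an illustrative function name and data; here `cvar` is computed as the mean shortfall conditional on breaching the threshold, and the loss gain ratio as the probability-weighted downside divided by the weighted upside:

```python
import numpy as np

def threshold_measures(returns, threshold):
    """Compute the three threshold risk measures from simulated returns.

    'breach_prob' is the VaR-style probability of falling below the
    threshold; 'cvar' is the mean shortfall conditional on a breach;
    'loss_gain' is the weighted downside divided by the weighted upside.
    """
    shortfall = np.maximum(threshold - returns, 0.0)
    upside = np.maximum(returns - threshold, 0.0)
    breach_prob = np.mean(shortfall > 0)
    cvar = shortfall[shortfall > 0].mean() if breach_prob > 0 else 0.0
    loss_gain = shortfall.mean() / upside.mean()
    return breach_prob, cvar, loss_gain

r = np.array([-0.10, -0.05, 0.0, 0.05, 0.10])
p, c, lg = threshold_measures(r, 0.0)   # -> 0.4, 0.075, 1.0
```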

Each of these approaches has its own strengths and weaknesses. The best known is value at risk (VaR). This identifies the confidence that the return on a portfolio over a selected period will be above a given threshold.

There are three major problems with VaR as a risk measure:

1. It does not take into account the distribution conditional on the threshold having been exceeded, leaving the investor uninformed about the severity of any possible long-tail risk

2. The VaR for subsets of the portfolio does not sum to the VaR for the whole portfolio, which makes diagnosing the most risky positions within the portfolio less intuitive

3. Optimization using VaR is a mixed-integer optimization problem, which is very computationally expensive to solve.

The next well-known threshold risk measure is CVaR (or mean excess loss). This measure describes the average expected loss conditional on a return threshold being exceeded. It is sub-additive and convex (see Artzner et al., 1999). In particular, it is always greater than VaR, so reducing the maximum CVaR within a set of portfolios will also reduce the maximum VaR. In practical examples, optimizing on CVaR has been found to provide very close to optimum VaR solutions while being far more easily calculated, because it can be optimized using a linear programming algorithm, as we shall see below (see also Uryasev, 2000). Hence it addresses all the major weaknesses of VaR listed above.

However, from an investment point of view CVaR misses an important component of the problem in that given two portfolios with the same CVaR, a rational investor would prefer whichever portfolio had the greater upside potential – hence the final one of our threshold measures, the loss gain ratio. This divides CVaR by the upside potential measured as the weighted probability of exceeding the threshold return. The disadvantage of this final measure is that it reflects the optimality of your portfolio in risk per unit return, rather than providing a pure (downside) risk measure. In practice, most investors would want to see both the CVaR measure and the loss gain ratio. Because VaR is such a widely quoted risk measure, this may well be the statistic to report even though the other two are the ones used for portfolio selection.

Luckily, all three threshold risk measures are easily computed as part of the CVaR optimization process. Here we are assuming that we are optimizing based on a set of Monte Carlo simulated time series.

At each point in time, each asset has a simulated return – hence, if we consider the weight space encompassing all our possible portfolios, we can draw a hyperplane in this asset weight space where all portfolios exceeding the threshold return lie above this hyperplane while all others lie on the other side.
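For two assets the scenario hyperplane collapses to a single crossover weight, which makes the geometry easy to see. A sketch with illustrative numbers:

```python
# One simulated scenario of two asset returns and a threshold return
# (numbers are illustrative).
r1, r2, threshold = 0.10, -0.02, 0.0

# Portfolios hold weight w in asset 1 and (1 - w) in asset 2. The
# "hyperplane" w*r1 + (1 - w)*r2 = threshold is here a single crossover
# weight separating portfolios that exceed the threshold from those
# that fall below it:
w_star = (threshold - r2) / (r1 - r2)   # -> 1/6
```

With many assets the same boundary is a genuine hyperplane in weight space, one per scenario.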


The simplex linear programming method solves a problem of the form:

Maximize the function Z = a1x1 + a2x2 + · · · + anxn subject to

xi ≥ 0, i = 1, …, n

and constraints of the form

ai1x1 + ai2x2 + · · · + ainxn ≤ bi or

ai1x1 + ai2x2 + · · · + ainxn ≥ bi or

ai1x1 + ai2x2 + · · · + ainxn = bi

These constraints also form hyperplanes in weight space. These effectively bound a feasible region within which solutions are allowed. This region can be shown to be convex, so the maximum feasible value of the objective function occurs at a vertex on its boundary. The algorithm proceeds by finding an initial feasible solution and then finding a better adjacent vertex, moving to this and then repeating until no further improvement is possible.

Unfortunately our initial problem formulation is similar but not identical to this LP standard form. However, some simple tricks can convert one to the other. The main difficulty is that we can have viable solutions on either side of our hyperplanes in weight space. If we consider one such hyperplane, we can add two dummy variables (say d+ and d−) to our problem definition and constrain the values of these dummy variables such that they represent positive and negative distances above and below this hyperplane respectively. These are now firm constraints on the dummy variables, so bounding the augmented feasible region. Replacing all our original constraints with new ones in this form allows the objective function to be the sum of distances below each hyperplane.

Hence minimizing this minimizes CVaR.
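This dummy-variable construction matches the standard Rockafellar–Uryasev linear program for CVaR. A hedged sketch using `scipy.optimize.linprog` with illustrative random scenario data and a long-only, fully invested portfolio (all assumptions mine, not the text's):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_assets, n_scen, beta = 3, 200, 0.95
R = rng.normal(0.01, 0.05, size=(n_scen, n_assets))   # scenario returns

# Variables: [w (n_assets), alpha (VaR level), d (one dummy per scenario)].
# Objective: alpha + sum(d) / ((1 - beta) * n_scen), i.e. CVaR.
c = np.concatenate([np.zeros(n_assets), [1.0],
                    np.full(n_scen, 1.0 / ((1 - beta) * n_scen))])

# One hyperplane per scenario: d_j >= -R[j] @ w - alpha, rewritten as
# -R[j] @ w - alpha - d_j <= 0.
A_ub = np.hstack([-R, -np.ones((n_scen, 1)), -np.eye(n_scen)])
b_ub = np.zeros(n_scen)

# Fully invested, long-only; alpha is free, dummies are non-negative.
A_eq = np.concatenate([np.ones(n_assets), np.zeros(1 + n_scen)]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0, 1)] * n_assets + [(None, None)] + [(0, None)] * n_scen

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
weights, cvar = res.x[:n_assets], res.fun
```

The dummy variables d play exactly the role described above: each is the distance by which a scenario's loss exceeds alpha, and minimizing their weighted sum minimizes CVaR.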

In this formulation it is also easy to count the number of hyperplanes above and below any point in weight space and to evaluate the upside risk as well as the downside risk.

Hence calculation of the VaR and loss gain ratio is a simple extension of this approach, even if neither is easy to optimize directly: VaR because it is an integer function (counting the number of hyperplanes above any given point in weight space), and the loss gain ratio because it is a ratio of the upside and downside risk. CVaR, as well as being the easiest risk measure to interpret, is therefore also the only one of the three that we can optimize linearly using this approach.
