
(4.65)

Therefore,

(4.66)

We can now express the difference of the last two terms in Equation 4.60 as follows:

(4.67)

where we exploited the fact that the sum in the first line of Equation 4.67 is a first-order approximation of the integral in the same line, and where we expanded the square root in the second line in a Taylor series. To first order, then, the difference in Equation 4.60 is of first order in Δt:

(4.68)

The absolute value of this difference is also of first order in Δt, and so is its expectation. Therefore, the Euler method applied to the simple SDE in Equation 4.56 has order of strong convergence equal to 1. In this case, there is nothing to be gained by using the Milshtein scheme in place of the Euler scheme to construct scenario trajectories.

BROWNIAN BRIDGE

Suppose that we know the values of a Wiener process at a discrete set of times, {W(t1), W(t2), …}. How do you insert a random variable W(tk), where ti ≤ tk ≤ ti+1, into the set in such a manner that the resulting set still constitutes a Wiener process? You can view the Brownian bridge as a sort of interpolation that allows you to introduce intermediate points in the trajectory of a Wiener process when that trajectory is known at discrete time points. There are practical reasons why it is a good idea to be able to do this. For example, assume that you have a scenario set that gets reused by a number of pricing applications. Assume that some of the applications may require knowledge of the scenario set at a specific point in time not included in the original set. The Brownian bridge gives you a way of generating that missing part of the trajectories in the scenario sets without having to reconstruct the trajectories. A more profound reason, which we will explore in detail, has to do with the way the Brownian bridge is constructed. By adding one more element into the {W(t1), W(t2), …} set, we increase the dimensionality of this set. In order to add a new element into the set, the Brownian bridge uses a new random variable whose variance is lower than the variance of the new element that is added. It turns out that increasing the dimensionality of a set of random variables by using an additional random variable of smaller variance is very desirable when quasi-random sequences are used. This is of significant importance in scenario generation.

Brownian Bridge Construction

Given W(t) and W(t + t1 + t2), we want to find W(t + t1). We assume that we can get the middle point by a weighted average of the two end points plus an independent normal random variable:

W(t + t1) = αW(t) + βW(t + t1 + t2) + γZ    (4.69)

where α, β, and γ are constants to be determined, and Z is a standard normal random variable.

W(t + t1) must satisfy the following conditions:

cov[W(t + t1), W(t)] = t    (4.70)
cov[W(t + t1), W(t + t1 + t2)] = t + t1    (4.71)
var[W(t + t1)] = t + t1    (4.72)

These conditions give us the following equations for α, β, and γ:

α + β = 1
αt + β(t + t1 + t2) = t + t1
α²t + 2αβt + β²(t + t1 + t2) + γ² = t + t1    (4.73)

Solving these equations (this is very straightforward, despite the nonlinearities), we get

α = t2/(t1 + t2)    (4.74)

β = 1 − α    (4.75)

γ² = αt1    (4.76)

The variance of the normal random variable that was added in order to construct W(t + t1) is γ² = αt1. If, instead of using the Brownian bridge, we had created W(t + t1) in the standard way,

W(t + t1) = W(t) + √t1 Z    (4.77)

we would have used a normal random variable with variance t1. In the case of t1 = t2, which corresponds to adding a component halfway between the endpoints, the variance of the normal deviate used in the Brownian bridge is half the variance of the normal deviate used in Equation 4.77 (since in this case α = 1/2).
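To make the construction concrete, here is a minimal sketch in Python (NumPy assumed; the function name brownian_bridge_insert is ours) of Equations 4.69 and 4.74 through 4.76, inserting a value W(t + t1) between two known values of the path:

import numpy as np

def brownian_bridge_insert(w_left, w_right, t1, t2, rng=None):
    """Insert W(t + t1) between W(t) = w_left and W(t + t1 + t2) = w_right,
    following Equations 4.69 and 4.74-4.76."""
    if rng is None:
        rng = np.random.default_rng()
    alpha = t2 / (t1 + t2)        # Equation 4.74
    beta = 1.0 - alpha            # Equation 4.75
    gamma = np.sqrt(alpha * t1)   # Equation 4.76: the added variance is alpha * t1
    return alpha * w_left + beta * w_right + gamma * rng.standard_normal()

With t1 = t2 this reproduces the halfway case just discussed.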

Generating Scenarios with Brownian Bridges

We can use the Brownian bridge to generate a Wiener path and then use the Wiener path to produce a trajectory of the process we are interested in. We can use points from the Wiener path to get W(ti) and W(ti–1) and replace these values in the expression for analytical advancement, Equation 4.6, or in our numerical integration scheme, such as Equations 4.38 or 4.39.

The simplest strategy for generating a Wiener path using the Brownian bridge is to divide the time span of the trajectory into two equal parts and apply the Brownian bridge construction to the middle point. We then repeat the procedure for the left and right sides of the time interval. To illustrate how this works, consider a case where you want to compute a Wiener trajectory from t = 0 to t = T, at four equidistant points.

We want to compute W(ti), i = 0, 1, …, 4, where ti = iΔt, with Δt = T/4, and initial condition W(t0) = W(0) = 0. In this case, α = β = 1/2 (see Equation 4.74).

These are the steps:

W(t0) = 0

W(t4) = W(t0) + √t4 Z4 = √t4 Z4

W(t2) = (1/2)(W(t0) + W(t4)) + √((t2 − t0)/2) Z2 = (1/2)W(t4) + √((t2 − t0)/2) Z2

W(t1) = (1/2)(W(t0) + W(t2)) + √((t1 − t0)/2) Z1 = (1/2)W(t2) + √((t1 − t0)/2) Z1

W(t3) = (1/2)(W(t2) + W(t4)) + √((t3 − t2)/2) Z3    (4.78)

This is very easy to generalize to any number of time points if the number of time points is a power of two. Each Zi is the ith component of a multidimensional standard normal random variable where the dimensions are uncorrelated. Notice that as you “fill in” the Wiener path, the additional variance of the normal components you add to the average of the two immediate W’s has decreasing value. Of course, the total variance of all the Wiener increments (W(ti+1) − W(ti)) does not depend on how you construct the path.
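A sketch of this bisection procedure in Python (our own naming; NumPy assumed, and the number of intervals is taken to be a power of two):

import numpy as np

def wiener_path_bridge(T, n_intervals, rng=None):
    """Build W(t_0), ..., W(t_n) on n_intervals equal steps (n a power of two)
    by Brownian bridge bisection, generalizing Equation 4.78."""
    if rng is None:
        rng = np.random.default_rng()
    n = n_intervals
    t = np.linspace(0.0, T, n + 1)
    w = np.zeros(n + 1)
    w[n] = np.sqrt(t[n]) * rng.standard_normal()   # endpoint: W(t_n) = sqrt(t_n) Z
    step = n
    while step > 1:
        half = step // 2
        for left in range(0, n, step):
            mid, right = left + half, left + step
            # midpoint: average of the endpoints plus a deviate of variance (t_mid - t_left)/2
            scale = np.sqrt(0.5 * (t[mid] - t[left]))
            w[mid] = 0.5 * (w[left] + w[right]) + scale * rng.standard_normal()
        step = half
    return t, w

Calling wiener_path_bridge(T, 4) reproduces the steps of Equation 4.78, drawing the deviates in the order Z4, Z2, Z1, Z3.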

However, the fact that in the Brownian bridge approach you are using random variables that are multiplied by a factor of decreasing magnitude means that the importance of those variables also decreases as you fill in the path.

The dimensions of the random variables with larger variance need to be covered, or sampled, more efficiently than the dimensions with smaller variance.

In standard Monte Carlo this is not an issue because standard Monte Carlo, where we sample from a joint normal distribution, is equally efficient at sampling from any dimension. But, as we will see in the next chapter, standard Monte Carlo is very slow. An alternative to standard Monte Carlo, called quasi-random sequence Monte Carlo, is much better at covering the lower dimensions than the higher dimensions. This method for path construction, where the variance is concentrated in the first few dimensions (these are the ones you build first following the procedure above), reduces the burden on the simulation from having to sample efficiently from the higher dimensions. The effect of this way of constructing Wiener paths has been called dimensionality reduction in the literature. But it is important to understand that this approach does not reduce the number of dimensions. What it does is change the problem around such that some dimensions (these are called higher dimensions simply because of the order in which they occur in the algorithm described by Equation 4.78) face less activity by the random variables used to get the Zi’s than others. This reduces the need to carefully sample from such dimensions. This is precisely what we need in order to use quasi-random sequences effectively. These issues are discussed in detail by Caflisch, Morokoff, and Owen, 1997.
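As a small illustration of how quickly the importance of the later dimensions decays, the following lines (assumptions: NumPy, a unit time span, 16 intervals) print the standard deviation attached to the deviate introduced at each stage of the construction:

import numpy as np

T, n = 1.0, 16                  # unit time span, 16 intervals (a power of two)
print("endpoint:", np.sqrt(T))  # scale of the deviate used for W(T)
gap, level = T / 2, 1           # distance from the left endpoint to the first midpoint
while gap >= T / n:
    print("level", level, ":", np.sqrt(0.5 * gap))
    gap, level = gap / 2, level + 1

The first few deviates carry most of the variance, which is exactly where quasi-random sequences do their best sampling.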


As we noted after Equation 4.7, the traditional forward construction of the Wiener path,

W(ti) = W(ti−1) + √(ti − ti−1) Zi    (4.79)

weighs each Zi by the same factor (assuming the time intervals are the same), causing all the dimensions of Z to be equally important.
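For comparison, a sketch of this forward construction (again our naming, NumPy assumed):

import numpy as np

def wiener_path_forward(T, n_intervals, rng=None):
    """Forward construction of Equation 4.79: on a uniform grid every Z_i
    enters with the same weight sqrt(dt)."""
    if rng is None:
        rng = np.random.default_rng()
    dt = T / n_intervals
    increments = np.sqrt(dt) * rng.standard_normal(n_intervals)
    return np.concatenate(([0.0], np.cumsum(increments)))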

At this point we can ask two questions about the Brownian bridge approach to trajectory building.

■ Why not use an approach like this directly on the process of interest, rather than on the driving Wiener process? For example, if we are interested in constructing a path for process S(t), why not use something like the Brownian bridge directly on S(t)?

■ If we are dealing with a multidimensional Wiener process, where each Ŵ(ti) is a vector of correlated variables, and we apply the Brownian bridge approach to each of the components of Ŵ(t), will this spoil the correlation that must exist between the components of Ŵ(t)?

The answer to these two questions touches on related issues. As for the first question, the reason why the Brownian bridge is so straightforward is that we only need the variance to characterize the Wiener process (the expectation is satisfied automatically since we use standard normals to generate the Wiener process), and that the covariance between W(tk) and W(tl) only depends on the minimum of (tk, tl). This is not the case in general. A process may need multiple moments to be characterized, and we will not be able to construct it with the simple argument that worked so well for the Wiener process.

As for the second question, the answer is no. If you have a multidimensional Wiener process, you can safely use the Brownian bridge construction in each component and your correlations will be what they are supposed to be. This point is deeper than it seems, so we will elaborate. When we have a multidimensional process (don’t confuse this notion of multidimensional, which means a process driven by several, perhaps correlated, Wiener processes, with the fact that each trajectory is characterized by time-indexed multidimensional random variables), the correlation between the values of the various components is determined by the correlation between the changes in values. To clarify this, consider a process with K dimensions:

dSk(t) = ak(S1(t), …, SK(t), t) dt + σk(S1(t), …, SK(t), t) dWk(t),   k = 1, …, K    (4.80)

In this representation we assume that σk(.) is a scalar and that Wk(t), k = 1, …, K, are the components of a multidimensional Wiener process.


Chapter 2 discusses other representations of multidimensional processes. The correlation of the process levels, that is, the correlation of Sk(t) and Sl(t), is determined by the correlation of the process changes, dSk(t) and dSl(t), but these two correlations are not equal in general. In the case of Wiener processes, however, they are equal. This allows us to preserve the correlation across dimensions by properly correlating the standard normals used in the bridge construction.
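To fix ideas, a single Euler step of Equation 4.80 with correlated Wiener increments might look like the sketch below. The callables a and sigma and the correlation matrix corr are placeholders for whatever model is being simulated (NumPy assumed); the correlation between components enters only through the correlated increments dWk:

import numpy as np

def euler_step(s, t, dt, a, sigma, corr, rng=None):
    """One Euler step of dS_k = a_k(S, t) dt + sigma_k(S, t) dW_k(t), k = 1..K,
    where the increments dW_k carry the correlation matrix corr."""
    if rng is None:
        rng = np.random.default_rng()
    chol = np.linalg.cholesky(corr)   # correlated normals (see the Choleski section below)
    dw = np.sqrt(dt) * chol @ rng.standard_normal(len(s))
    return s + a(s, t) * dt + sigma(s, t) * dw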

To see that the correlation of two Wiener processes is the same as the correlation of their increments, we consider two Wiener processes, W1(t) and W2(t), such that dW1(t)dW2(t) = ρ dt, and we find the correlation between W1(t) and W2(t), which we denote by ρ_{W1,W2}. We do this in a more rigorous way than needed in this simple case because it illustrates a general approach that will work with more complicated processes:

ρ_{W1(t),W2(t)} = cov[W1(t), W2(t)] / √(var[W1(t)] var[W2(t)])
              = cov[W1(t), W2(t)] / t
              = (1/t)(E[W1(t)W2(t)] − E[W1(t)]E[W2(t)])
              = (1/t) E[W1(t)W2(t)]    (4.81)

since var[W1(t)] = var[W2(t)] = t and E[W1(t)] = E[W2(t)] = 0.

To get E[W1(t)W2(t)], apply Ito’s lemma to g(W1, W2) = W1(t)W2(t):

dg(t) = (∂g/∂W1) dW1 + (∂g/∂W2) dW2 + (1/2)(∂²g/∂W1²) dW1² + (∂²g/∂W1∂W2) dW1dW2 + (1/2)(∂²g/∂W2²) dW2²
      = W2(t) dW1(t) + W1(t) dW2(t) + ρ dt    (4.82)

where ∂g/∂W1 = W2, ∂g/∂W2 = W1, ∂²g/∂W1² = ∂²g/∂W2² = 0, ∂²g/∂W1∂W2 = 1, dW1² = dW2² = dt, and dW1dW2 = ρ dt.

We now take expectation of both sides and integrate

E[dg(t)] = E[W2(t) dW1(t)] + E[W1(t) dW2(t)] + ρ dt = ρ dt

dE[g(t)] = ρ dt,    E[g(t)] = ρt    (4.83)

where we used E[W2(t) dW1(t)] = E[W1(t) dW2(t)] = 0.


Replacing for g(W1,W2), we get

E[W1(t)W2(t)] = ρt    (4.84)

Substituting this in Equation 4.81, we confirm that

ρ_{W1,W2} = (1/t) E[W1(t)W2(t)] = ρ    (4.85)

When we do the Brownian bridge for a multidimensional Wiener process, we apply the procedure in Equation 4.78 to each dimension. Equation 4.85 suggests that we can get the correct correlation between dimensions if we impose that correlation on the normal random variables that enter the construction procedure in Equation 4.78. To verify that this is the case, consider a Brownian bridge applied to two Wiener processes, W1 and W2, such that dW1(t)dW2(t) = ρ dt:

W1(t1) = αW1(t0) + (1 − α)W1(t2) + γZ(1)    (4.86)

W2(t1) = αW2(t0) + (1 − α)W2(t2) + γZ(2)    (4.87)

where Z(1) and Z(2) are the standard normal random variables used with dimensions 1 and 2, respectively. It is straightforward to verify that ρ_{W1(t1),W2(t1)} = ρ if cov[Z(1), Z(2)] = ρ.
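A quick numerical check of this statement (a sketch with our own choices of ρ and time points; NumPy assumed): bridge the midpoints of two correlated Wiener components with correlated normals and confirm that the sample correlation of the bridged values is close to ρ.

import numpy as np

rng = np.random.default_rng(0)
rho, t0, t1, t2 = 0.7, 0.0, 0.5, 1.0
alpha = (t2 - t1) / (t2 - t0)        # Equation 4.74 applied to the gaps t1 - t0 and t2 - t1
gamma = np.sqrt(alpha * (t1 - t0))   # Equation 4.76
cov = [[1.0, rho], [rho, 1.0]]
n = 100_000

w_end = np.sqrt(t2) * rng.multivariate_normal([0.0, 0.0], cov, size=n)  # W1(t2), W2(t2)
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)                    # Z(1), Z(2) with covariance rho
w_mid = (1.0 - alpha) * w_end + gamma * z                               # Equations 4.86-4.87 with W(t0) = 0
print(np.corrcoef(w_mid[:, 0], w_mid[:, 1])[0, 1])                      # close to rho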

The fact that the correlation of the levels and the correlation of the changes are the same for Wiener processes can also be expressed as follows: the correlation between finite changes across dimensions is the same as the correlation between infinitesimal changes across dimensions.

This is not the case in general. To illustrate this, consider the case of two standard log-normal processes:

dS1/S1 = μ1 dt + σ1 dW1(t)    (4.88)

dS2/S2 = μ2 dt + σ2 dW2(t)    (4.89)

where dW1(t)dW2(t) = ρ dt. We want to get the correlation of S1(t) and S2(t) as a function of ρ:

ρ_{S1,S2} = (E[S1S2] − E[S1]E[S2]) / √(var[S1] var[S2])    (4.90)


Using the same approach as before, define g(S1, S2) = S1(t)S2(t) and apply Ito’s lemma:

(4.91)

Taking expectation on both sides and considering that μ1, μ2, σ1, and σ2 are constant, we get

(4.92) This can be integrated to give

(4.93) since

(4.94)

Replacing in Equation 4.91, we get

(4.95)

As t → 0, ρ_{S1,S2} → ρ, but as time increases, ρ_{S1,S2} decreases.
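For reference, the computation behind Equations 4.91 through 4.95 can be sketched as follows, writing S1(0) and S2(0) for the initial values and using the standard moments of the log-normal distribution (the notation and intermediate steps here are our own):

\begin{aligned}
d(S_1 S_2) &= S_2\,dS_1 + S_1\,dS_2 + dS_1\,dS_2
            = S_1 S_2\bigl[(\mu_1+\mu_2+\rho\sigma_1\sigma_2)\,dt + \sigma_1\,dW_1 + \sigma_2\,dW_2\bigr] \\
dE[S_1 S_2] &= (\mu_1+\mu_2+\rho\sigma_1\sigma_2)\,E[S_1 S_2]\,dt \\
E[S_1(t)S_2(t)] &= S_1(0)\,S_2(0)\,e^{(\mu_1+\mu_2+\rho\sigma_1\sigma_2)t} \\
E[S_i(t)] &= S_i(0)\,e^{\mu_i t}, \qquad
\operatorname{var}[S_i(t)] = S_i(0)^2 e^{2\mu_i t}\bigl(e^{\sigma_i^2 t}-1\bigr) \\
\rho_{S_1,S_2} &= \frac{e^{\rho\sigma_1\sigma_2 t}-1}
{\sqrt{\bigl(e^{\sigma_1^2 t}-1\bigr)\bigl(e^{\sigma_2^2 t}-1\bigr)}}
\end{aligned}

The last expression tends to ρ as t → 0 and decays as t grows, consistent with the statement above.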

In conclusion, if we have a general multidimensional stochastic process, approximations of finite increments of the process will influence the correlation of process values across dimensions.

JOINT NORMALS BY THE CHOLESKI