
6.4 CONTROL VARIATES

104 VARIANCE REDUCTION TECHNIQUES

U <- 0
z <- 0
ST <- rep(0,NB)
Ci <- rep(0,NB)
Ci.bar <- 0
varr <- 0
L <- (log(K/S0)-(mu-sigma^2/2)*t)/(sigma*sqrt(t))
for (i in 0:(B-1))
{
U <- runif(NB)
v <- (U+i)/B
z <- qnorm(v*(1-pnorm(L))+pnorm(L))
for (j in 1:NB)
{
ST[j] <- S0*exp(nu*t+sigma*sqrt(t)*z[j])
Ci[j] <- exp(-mu*t)*max(ST[j]-K,0)
}
Ci.bar <- Ci.bar + mean(Ci)
varr <- varr + var(Ci)
}
C <- (1-pnorm(L))*Ci.bar/B
SE <- sqrt(varr/NB)/B
C
SE

## The theoretical Black-Scholes value is 1.0139
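The same scheme can be sketched in Python for readers without S-PLUS. This is a minimal illustration, not the book's code: it assumes the parameter values of Example 6.9 (S0 = 10, K = 12, interest rate 0.03, sigma = 0.4, t = 1), uses the standard library's `NormalDist` for the normal CDF and quantile, and mirrors the B-bin restricted-normal stratification above.

```python
import math
import random
from statistics import NormalDist

def stratified_call_price(S0=10.0, K=12.0, mu=0.03, sigma=0.4, t=1.0,
                          B=50, NB=20, seed=1):
    """Stratified estimator of a European call price.  Z is drawn from the
    standard normal restricted to {Z > L} (the in-the-money region), with
    B equal-probability bins and NB draws per bin."""
    rng = random.Random(seed)
    nd = NormalDist()
    nu = mu - sigma ** 2 / 2
    L = (math.log(K / S0) - nu * t) / (sigma * math.sqrt(t))
    pL = nd.cdf(L)                              # P(S_T <= K) = Phi(L)
    Ci_bar = 0.0
    varr = 0.0
    for i in range(B):
        Ci = []
        for _ in range(NB):
            v = (rng.random() + i) / B          # stratified uniform on bin i
            z = nd.inv_cdf(v * (1 - pL) + pL)   # restricted-normal draw
            ST = S0 * math.exp(nu * t + sigma * math.sqrt(t) * z)
            Ci.append(math.exp(-mu * t) * max(ST - K, 0.0))
        m = sum(Ci) / NB
        Ci_bar += m
        varr += sum((c - m) ** 2 for c in Ci) / (NB - 1)
    price = (1 - pL) * Ci_bar / B               # uncondition on {Z > L}
    se = math.sqrt(varr / NB) / B
    return price, se
```

With B = 50 and NB = 20 the estimate lands near the Black-Scholes value of 1.0139, consistent with Table 6.2.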


Bins (B)    NB   Mean (C)   Std. Err.   Adj. Mean      SE
       1  1000     0.9744      0.0758      0.9842  0.1102
       2   500     1.0503      0.0736      1.0303  0.0823
       5   200     1.0375      0.0505      1.0235  0.0524
      10   100     0.9960      0.0389      1.0101  0.0404
      20    50     0.9874      0.0229      1.0058  0.0238
      50    20     1.0168      0.0146      1.0147  0.0153
     100    10     0.9957      0.0092      1.0089  0.0095
     200     5     1.0208      0.0094      1.0160  0.0099
     500     2     1.0151      0.0062      1.0143  0.0066
    1000     1     1.0091          NA      1.0125      NA

Table 6.2 Effects of stratification for simulated option prices with restricted normal.

We would like to find $c$ such that $\sigma_c^2$ is minimized. Differentiating the preceding expression with respect to $c$ and setting it equal to zero, we have
$$2c\,\mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y) = 0.$$
Solving for such a $c$, we get $c^* = -\mathrm{Cov}(X, Y)/\mathrm{Var}(Y)$ as the value of $c$ that minimizes $\sigma_c^2$. For such a $c^*$,
$$\sigma_{c^*}^2 = \mathrm{Var}(X) - \frac{\mathrm{Cov}^2(X, Y)}{\mathrm{Var}(Y)}.$$
The variable $Y$ used in this way is known as a control variate for the simulation estimator $X$. Recall that $\mathrm{Corr}(X, Y) = \mathrm{Cov}(X, Y)/(\mathrm{Var}(X)\,\mathrm{Var}(Y))^{1/2}$. Therefore,
$$\sigma_{c^*}^2 = \mathrm{Var}(X)\left(1 - \mathrm{Corr}^2(X, Y)\right).$$
Hence, as long as $\mathrm{Corr}(X, Y) \neq 0$, some form of variance reduction is achieved.

In practice, quantities like $\sigma_Y^2 = \mathrm{Var}(Y)$ and $\mathrm{Cov}(X, Y)$ are usually not available; they have to be estimated from the simulations based on sample values.
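A quick numerical illustration of these identities (the bivariate pair below is a toy construction, not from the text): when $c^*$ is computed from the sample moments, the in-sample variance of $X + c^*(Y - E(Y))$ equals $\mathrm{Var}(X)(1 - \mathrm{Corr}^2(X, Y))$ exactly.

```python
import random

# Toy check of the optimal-coefficient identity: with c* = -Cov(X,Y)/Var(Y),
# the variance of X + c*(Y - E(Y)) equals Var(X)*(1 - Corr(X,Y)^2).
rng = random.Random(0)
n = 100_000
ys = [rng.gauss(0, 1) for _ in range(n)]
xs = [2.0 * y + rng.gauss(0, 1) for y in ys]    # X correlated with Y

mx = sum(xs) / n
my = sum(ys) / n
var_x = sum((x - mx) ** 2 for x in xs) / (n - 1)
var_y = sum((y - my) ** 2 for y in ys) / (n - 1)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

c_star = -cov / var_y                           # optimal coefficient c*
controlled = [x + c_star * y for x, y in zip(xs, ys)]   # E(Y) = 0 is known
mc = sum(controlled) / n
var_c = sum((v - mc) ** 2 for v in controlled) / (n - 1)
rho2 = cov ** 2 / (var_x * var_y)               # Corr(X,Y)^2
# var_c agrees with var_x * (1 - rho2), and is far below var_x
```

Here $\mathrm{Var}(X) = 5$ and $\rho^2 = 4/5$, so the controlled variance drops to about 1.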


$$\widehat{\mathrm{Cov}}(X, Y) = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y}), \qquad \hat{c}^* = -\frac{\widehat{\mathrm{Cov}}(X, Y)}{\hat{\sigma}_Y^2}.$$

Suppose we use $\bar{X}$ from simulation to estimate $\theta$. Then the control variate would be $\bar{Y}$, and the control variate estimator is
$$\bar{X} + \hat{c}^*(\bar{Y} - \mu_Y),$$
with variance equal to
$$\frac{1}{n}\left(\mathrm{Var}(X) - \frac{\mathrm{Cov}^2(X, Y)}{\mathrm{Var}(Y)}\right) = \frac{\sigma_X^2}{n}\left(1 - \rho^2\right),$$
where $\rho = \mathrm{Corr}(X, Y)$.

Equivalently, one can use the simple linear regression equation
$$X = a + bY + e, \qquad e \sim \text{i.i.d. } (0, \sigma^2), \tag{6.2}$$
to estimate $c^*$. In fact, it can be easily shown that the least squares estimate of $b$ satisfies $\hat{b} = -\hat{c}^*$; see Weisberg (1985). In such a case, the control variate estimator is given by

$$\bar{X} + \hat{c}^*(\bar{Y} - \mu_Y) = \bar{X} - \hat{b}(\bar{Y} - \mu_Y) = \hat{a} + \hat{b}\mu_Y, \tag{6.3}$$
where $\hat{a} = \bar{X} - \hat{b}\bar{Y}$ is the least squares estimate of $a$ in (6.2). That is, the control variate estimate is equal to the estimated regression equation evaluated at the point $\mu_Y$.

Notice that there is a very simple geometric interpretation using (6.2). First observe that the estimated regression line is
$$\hat{X} = \hat{a} + \hat{b}Y = \bar{X} + \hat{b}(Y - \bar{Y}).$$
Thus, this line passes through the point $(\bar{Y}, \bar{X})$. Second, from (6.3), the control variate estimate is
$$\bar{X}_c = \hat{a} + \hat{b}\mu_Y = \bar{X} - \hat{b}(\bar{Y} - \mu_Y).$$

Suppose that $\bar{Y} < \mu_Y$, that is, the simulation run underestimates $\mu_Y$, and suppose that $X$ and $Y$ are positively correlated. Then it is likely that $\bar{X}$ would underestimate $E(X) = \theta$. We therefore need to adjust the estimator upward, and this is indicated by the fact that $\hat{b} = -\hat{c}^* > 0$. The extra amount that needs to be adjusted upward equals $-\hat{b}(\bar{Y} - \mu_Y)$, which is governed by the linear equation (6.3).

Finally, the regression estimate of $\sigma^2$ is the estimate of $\mathrm{Var}(X - \hat{b}Y) = \mathrm{Var}(X + \hat{c}^*Y)$. To see this, recall from regression that
$$\begin{aligned}
\hat{\sigma}^2 &= \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \hat{a} - \hat{b}Y_i\right)^2 \\
&= \frac{1}{n}\sum_{i=1}^{n}\left(X_i - (\bar{X} - \hat{b}\bar{Y}) - \hat{b}Y_i\right)^2 \\
&= \frac{1}{n}\sum_{i=1}^{n}\left((X_i - \bar{X}) - \hat{b}(Y_i - \bar{Y})\right)^2 \\
&= \widehat{\mathrm{Var}}(X) - \hat{b}^2\,\widehat{\mathrm{Var}}(Y) \\
&= \widehat{\mathrm{Var}}(X - \hat{b}Y).
\end{aligned}$$
The last equality follows from a standard expansion of the variance estimate (see Exercise 6.2). It follows that the estimated variance of the control variate estimator $\bar{X} + \hat{c}^*(\bar{Y} - \mu_Y)$ is $\hat{\sigma}^2/n$.
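The regression route can be implemented directly. As a sketch (not the book's code): fit $X = a + bY + e$ by least squares and evaluate the fitted line at $\mu_Y$; the toy target $E(e^U)$ with control $U$ anticipates Example 6.8.

```python
import math
import random

def control_variate_regression(xs, ys, mu_y):
    """Fit X = a + b*Y + e by least squares; the control variate estimate is
    the fitted line evaluated at mu_y = E(Y), and its estimated variance is
    sigma2_hat/n (here c*_hat = -b_hat)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / syy                       # b_hat
    a = mx - b * my                     # a_hat
    est = a + b * mu_y                  # = X_bar - b_hat*(Y_bar - mu_y)
    sigma2 = sum((x - a - b * y) ** 2 for x, y in zip(xs, ys)) / n
    return est, sigma2 / n

# Toy run: estimate E(e^U) with control Y = U, where E(Y) = 1/2.
rng = random.Random(1)
ys = [rng.random() for _ in range(10_000)]
xs = [math.exp(y) for y in ys]
est, var_est = control_variate_regression(xs, ys, 0.5)
```

The estimate should sit very close to $e - 1 \approx 1.7183$, with an estimated variance of roughly $0.0039/n$.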

Example 6.8 Consider the problem $\theta = E(e^U)$ again. Clearly, the control variate is $U$ itself. Now
$$\mathrm{Cov}(e^U, U) = E(Ue^U) - E(U)E(e^U) = \int_0^1 xe^x\,dx - (e-1)/2 = 1 - (e-1)/2 = 0.14086.$$
The second-to-last equality makes use of the facts from the previous examples that $E(U) = 1/2$, $\mathrm{Var}(U) = 1/12$, and $\mathrm{Var}(e^U) = 0.242$. It follows that the control variate estimate has variance
$$\mathrm{Var}\left(e^U + c^*(U - 1/2)\right) = \mathrm{Var}(e^U)\left(1 - 12(0.14086)^2/0.242\right) = 0.0039,$$
resulting in a variance reduction of $(0.242 - 0.0039)/0.242 \times 100\% = 98.4\%$.
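The constants quoted in the example can be checked in a few lines; everything below is exact moment arithmetic for $U \sim \text{Uniform}(0,1)$, with no simulation involved.

```python
import math

# Exact constants behind Example 6.8, for U ~ Uniform(0,1):
e = math.e
cov = 1 - (e - 1) / 2                        # Cov(e^U, U) = 1 - (e-1)/2
var_u = 1 / 12                               # Var(U)
var_eu = (e ** 2 - 1) / 2 - (e - 1) ** 2     # Var(e^U) = E(e^{2U}) - (E e^U)^2
var_cv = var_eu - cov ** 2 / var_u           # variance with optimal c*
reduction = (var_eu - var_cv) / var_eu       # fraction of variance removed
```

These reproduce 0.14086, 0.242, 0.0039, and the 98.4% reduction to the quoted precision.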

In general, if we want to have more than one control variate, we can make use of outputs from the multiple linear regression model given by
$$X = a + \sum_{i=1}^{k} b_i Y_i + e, \qquad e \sim \text{i.i.d. } (0, \sigma^2).$$


In this case, the least squares estimates of $a$ and the $b_i$'s, $\hat{a}$ and the $\hat{b}_i$'s, can be easily shown to satisfy $\hat{c}_i^* = -\hat{b}_i$, $i = 1, \ldots, k$. Furthermore, the control variate estimate is given by
$$\bar{X} + \sum_{i=1}^{k} \hat{c}_i^*(\bar{Y}_i - \mu_i) = \hat{a} + \sum_{i=1}^{k} \hat{b}_i \mu_i,$$
where $E(Y_i) = \mu_i$, $i = 1, \ldots, k$. In other words, the control variate estimate is equal to the estimated multiple regression line evaluated at the point $(\mu_1, \ldots, \mu_k)$. By the same token, the variance of the control variate estimate is given by $\hat{\sigma}^2/n$, where $\hat{\sigma}^2$ is the regression estimate of $\sigma^2$.
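A sketch with $k = 2$ controls (a toy example, not from the text): solving the $2 \times 2$ normal equations by hand avoids any matrix library, and the controls $Y_1 = U$ and $Y_2 = U^2$ for $X = e^U$ are illustrative choices only.

```python
import math
import random

def two_control_estimate(xs, y1s, y2s, mu1, mu2):
    """Control variates via multiple regression with k = 2 controls: fit
    X = a + b1*Y1 + b2*Y2 + e by least squares (2x2 normal equations) and
    evaluate the fitted plane at (mu1, mu2); here c_i*_hat = -b_i_hat."""
    n = len(xs)
    mx = sum(xs) / n
    m1 = sum(y1s) / n
    m2 = sum(y2s) / n
    s11 = sum((u - m1) ** 2 for u in y1s)
    s22 = sum((v - m2) ** 2 for v in y2s)
    s12 = sum((u - m1) * (v - m2) for u, v in zip(y1s, y2s))
    sx1 = sum((x - mx) * (u - m1) for x, u in zip(xs, y1s))
    sx2 = sum((x - mx) * (v - m2) for x, v in zip(xs, y2s))
    det = s11 * s22 - s12 ** 2          # solve the 2x2 normal equations
    b1 = (s22 * sx1 - s12 * sx2) / det
    b2 = (s11 * sx2 - s12 * sx1) / det
    a = mx - b1 * m1 - b2 * m2
    return a + b1 * mu1 + b2 * mu2      # fitted plane at (mu1, mu2)

# Toy run: X = e^U with controls Y1 = U and Y2 = U^2
# (E(Y1) = 1/2, E(Y2) = 1/3 for U ~ Uniform(0,1)).
rng = random.Random(2)
us = [rng.random() for _ in range(20_000)]
est = two_control_estimate([math.exp(u) for u in us],
                           us, [u * u for u in us], 0.5, 1 / 3)
```

Adding the quadratic control tightens the estimate of $E(e^U) = e - 1$ well beyond the single-control case.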

Example 6.9 Continuing along the same line, consider simulating the vanilla European call option as in Example 6.5, using the terminal value $S_T$ as the control variate. The control variate estimator is given by
$$C_{cv} = C + c^*(S_T - E(S_T)).$$
Recall that $S_T = S_0 e^{\nu T + \sigma\sqrt{T}Z}$; it can be easily deduced that
$$E(S_T) = S_0 e^{rT}, \tag{6.4}$$
$$\mathrm{Var}(S_T) = S_0^2 e^{2rT}\left(e^{\sigma^2 T} - 1\right). \tag{6.5}$$
The algorithm goes as follows:

1. For $i = 1, \ldots, N_1$, simulate a pilot of $N_1$ independent paths to get
$$S_T(i) = S_0 e^{\nu T + \sigma\sqrt{T}Z_i}, \qquad C(i) = e^{-rT}\max(0, S_T(i) - K).$$

2. Compute $E(S_T)$ as $S_0 e^{rT}$ or estimate it by $\sum_i S_T(i)/N_1$. Compute $\mathrm{Var}(S_T)$ as $S_0^2 e^{2rT}(e^{\sigma^2 T} - 1)$ or estimate it by $\sum_{i=1}^{N_1}(S_T(i) - \bar{S}_T)^2/(N_1 - 1)$. Now estimate the covariance by
$$\widehat{\mathrm{Cov}}(S_T, C) = \frac{1}{N_1 - 1}\sum_{i=1}^{N_1}(S_T(i) - \bar{S}_T)(C(i) - \bar{C}),$$
where $\bar{C} = \sum_i C(i)/N_1$ and $\bar{S}_T = \sum_i S_T(i)/N_1$.

3. Repeat the simulations of $S_T$ and $C$ by means of the control variate. For $i = 1, \ldots, N_2$, independently simulate
$$S_T(i) = S_0 e^{\nu T + \sigma\sqrt{T}Z_i},$$
$$C(i) = e^{-rT}\max(0, S_T(i) - K),$$
$$C_{cv}(i) = C(i) + \hat{c}^*(S_T(i) - E(S_T)),$$
where $\hat{c}^* = -\widehat{\mathrm{Cov}}(S_T, C)/\widehat{\mathrm{Var}}(S_T)$ is computed from the preceding step.

4. Calculate the control variate estimator by
$$\bar{C}_{cv} = \frac{1}{N_2}\sum_{i=1}^{N_2} C_{cv}(i).$$
Complete the simulation by evaluating the standard error of $\bar{C}_{cv}$ and constructing confidence intervals.
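The closed forms (6.4) and (6.5) used in the pilot step can be sanity-checked against a direct simulation. This Python sketch assumes the parameter values of the S-PLUS run ($S_0 = 10$, $r = 0.03$, $\sigma = 0.4$, $T = 1$) and compares the exact moments with Monte Carlo estimates.

```python
import math
import random

# Sanity check of (6.4)-(6.5) by direct simulation.
S0, r, sigma, T = 10.0, 0.03, 0.4, 1.0
nu = r - sigma ** 2 / 2
mean_exact = S0 * math.exp(r * T)                                           # (6.4)
var_exact = S0 ** 2 * math.exp(2 * r * T) * (math.exp(sigma ** 2 * T) - 1)  # (6.5)

rng = random.Random(3)
n = 200_000
st = [S0 * math.exp(nu * T + sigma * math.sqrt(T) * rng.gauss(0, 1))
      for _ in range(n)]
m = sum(st) / n                              # Monte Carlo mean of S_T
v = sum((s - m) ** 2 for s in st) / (n - 1)  # Monte Carlo variance of S_T
# m and v should be close to mean_exact and var_exact, respectively
```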

Here is the S-PLUS code and output.

N1 <- 500
N2 <- 50000
S0 <- 10
K <- 12
r <- 0.03
sigma <- 0.4
nu <- r-sigma^2/2
t <- 1
ST <- rep(0,N1)
C <- rep(0,N1)
ST2 <- rep(0,N2)
C2 <- rep(0,N2)
CCV <- rep(0,N2)
for (i in 1:N1)
{
z <- rnorm(1)
ST[i] <- S0*exp(nu*t+sigma*sqrt(t)*z)
C[i] <- exp(-r*t)*max(ST[i]-K,0)
}
ST.bar <- S0*exp(r*t)
VarST.hat <- S0^2*exp(2*r*t)*(exp(sigma^2*t)-1)
#ST.bar <- mean(ST)
C.bar <- mean(C)
Cov.hat <- sum((ST-ST.bar)*(C-C.bar))/(N1-1)
#VarST.hat <- sum((ST-ST.bar)^2)/(N1-1)
c <- -Cov.hat/VarST.hat
for (i in 1:N2)
{
z <- rnorm(1)
ST2[i] <- S0*exp(nu*t+sigma*sqrt(t)*z)
C2[i] <- exp(-r*t)*max(ST2[i]-K,0)
CCV[i] <- C2[i]+c*(ST2[i]-ST.bar)
}
CCV.bar <- mean(CCV)
Var.CCV <- sum((CCV-CCV.bar)^2)/(N2-1)
SE <- sqrt(Var.CCV)
CI <- CCV.bar-1.96*SE/sqrt(N2)
CI[2] <- CCV.bar+1.96*SE/sqrt(N2)
CCV.bar
CI

For $N_1 = 500$ and $N_2 = 50{,}000$, we have a 95% confidence interval for $C_{cv}$ of $[1.0023, 1.0247]$. In this case, the estimated call price is 1.0135 with standard error 0.0057.

In using control variates, there are a number of features that should be kept in mind.

• What should constitute an appropriate control? We have seen that in simple cases, the underlying asset price may be appropriate. In more complicated situations, we may use easily computed quantities that are highly correlated with the object of interest as control variates. For example, standard calls and puts frequently provide a convenient source of control variates for pricing exotic options, as does the underlying asset itself.

• The control variate estimator is usually unbiased by construction. Also, we can separate the estimation of the coefficients ($\hat{c}_i^*$) from the estimation of prices.

• The flexibility of choosing the $c_i$'s suggests that we can sometimes make optimal use of information. In any event, we should exploit the specific features of the problem under consideration, rather than rely on generic applications of routine methods.

• Because of their close relationship with linear regression, control variates are easily computed and explained.

• We have only covered linear controls. In practice, one can consider using nonlinear control variates, for example, $\bar{X}\bar{Y}/\mu_Y$. Statistical inference for nonlinear controls may be tricky, though.