$$G_T = \exp\left(\frac{1}{T}\int_0^T \log S_t\, dt\right).$$

By Theorem 3.2 and Lemma 3.1, determine the analytical solution for the fixed-strike geometric Asian call option. (Hints: 1. Apply the result of question 7(b) of Chapter 2. 2. You can find the answer in Chapter 7.)
5. Consider the partial differential equation:

$$\frac{\partial f}{\partial t} + \mu(t,x)\frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2(t,x)\frac{\partial^2 f}{\partial x^2} = 0, \qquad f(T,x) = g(x).$$

By modifying the proof of Theorem 3.2, show that

$$f(t,x) = \mathrm{E}\left[g(X_T)\right],$$

where X_T is the solution to the SDE:

$$dX_\tau = \mu(\tau, X_\tau)\, d\tau + \sigma(\tau, X_\tau)\, dW_\tau, \qquad X_t = x.$$

This result is called the Feynman-Kac formula.
6. Suppose the risk-free interest rate and the volatility of an asset are deterministic functions of time. That means,
r = r(t) and σ = σ(t).
(a) Show that the Black-Scholes equation governing European option prices, f(t, S), is given by

$$\frac{\partial f}{\partial t} + \frac{1}{2}\sigma^2(t) S^2 \frac{\partial^2 f}{\partial S^2} + r(t) S \frac{\partial f}{\partial S} - r(t) f = 0.$$

(b) Show that the European call option price satisfies:
$$f(t, S) = e^{-\int_t^T r(\tau)\, d\tau}\, \mathrm{E}\left[\max(S_T - K, 0)\right],$$

where $dS_\tau = r(\tau) S_\tau\, d\tau + \sigma(\tau) S_\tau\, dW_\tau$, τ > t, and S_t = S. Hint: Use the result of question 5.
(c) Hence, show that

$$f(t, S) = C_{BS}(t, S;\ r = \bar{r},\ \sigma = \bar{\sigma}),$$

where C_BS is the Black-Scholes formula for a call option with constant parameters,

$$\bar{r} = \frac{1}{T-t}\int_t^T r(\tau)\, d\tau \quad \text{and} \quad \bar{\sigma}^2 = \frac{1}{T-t}\int_t^T \sigma^2(\tau)\, d\tau.$$
7. A stochastic process X(t) is said to be a martingale under a probability measure P if, for τ ≤ t,

$$\mathrm{E}^P\left[X(t) \mid X(s),\ s \le \tau\right] = X(\tau),$$

with probability one.

(a) Consider the asset price dynamics under the risk-neutral measure:

$$dS = rS\, dt + \sigma S\, dW.$$

Show that X(t) = S(t) e^{-rt} is a martingale.

(b) Denote C(t, S; T) as the Black-Scholes formula for a European call option with maturity T. Show that C e^{r(T-t)} is a martingale.
4
Generating Random Variables
4.1 INTRODUCTION
The first stage of simulation is the generation of random numbers. Random numbers serve as the building blocks of simulation. The second stage of simulation is the generation of random variables based on random numbers. This includes generating both discrete and continuous random variables of known distributions. In this chapter, we shall study techniques for generating random variables.
4.2 RANDOM NUMBERS
Random numbers can be generated in a number of ways. For example, in the old days they were generated manually or mechanically, by spinning wheels or rolling dice. Of course, the notion of randomness may be a subjective judgment. Things that look apparently random may not be random according to the strict definition. The modern approach is to use a computer to generate pseudo-random numbers successively. These pseudo-random numbers, although deterministically generated, constitute a sequence of values having the appearance of uniformly (0, 1) distributed random variables.
One of the most popular devices to generate uniform random numbers is the congruential generator. Starting with an initial value x_0, called the seed, the computer successively calculates the values x_n, n ≥ 1, via

$$x_n = a x_{n-1} + c \quad \text{modulo } m, \qquad (4.1)$$

Simulation Techniques in Financial Risk Management, by Ngai Hang Chan and Hoi Ying Wong. Copyright © 2006 John Wiley & Sons, Inc.
where a, c, and m are given positive integers, and the equality means that the value a x_{n-1} + c is divided by m and the remainder is taken as the value of x_n. Each x_n is one of 0, 1, ..., m − 1, and the quantity x_n / m is taken as an approximation to the value of a uniform (0, 1) random variable. Since each of the numbers x_n assumes one of the values 0, 1, ..., m − 1, it follows that after some finite number of generated values a value must repeat itself. For example, if we take a = c = 1 and m = 16, then

$$x_n = x_{n-1} + 1 \quad \text{modulo } 16.$$

With x_0 = 1, the values of x_n cycle through the set

{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, ...}.

When a = 5, c = 1, and m = 16, the values of x_n cycle through

{0, 1, 6, 15, 12, 13, 2, 11, 8, 9, 14, 7, 4, 5, 10, 3, 0, ...}.

We usually want to choose a and m such that, for any given seed x_0, the number of variables that can be generated before repetition occurs is large.
In practice, one may choose m = 2^31 − 1 and a = 7^5, where the number 31 corresponds to the bit size of the machine.
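The congruential recursion (4.1) is simple enough to sketch directly. The following is an illustrative Python version (the book's own examples use S-PLUS); the function name `lcg` is ours, not the book's:

```python
# A minimal sketch of the congruential generator in equation (4.1), written
# in Python for illustration (the book's own code examples use S-PLUS).
def lcg(seed, a, c, m, count):
    """Return `count` successive values x_n = (a * x_{n-1} + c) mod m."""
    values = []
    x = seed
    for _ in range(count):
        x = (a * x + c) % m   # divide a*x + c by m and keep the remainder
        values.append(x)
    return values

# The a = 5, c = 1, m = 16 example from the text, starting from x_0 = 0:
print(lcg(0, 5, 1, 16, 16))
# -> [1, 6, 15, 12, 13, 2, 11, 8, 9, 14, 7, 4, 5, 10, 3, 0]

# Dividing each x_n by m gives approximate uniform (0, 1) variates:
uniforms = [x / 16 for x in lcg(0, 5, 1, 16, 16)]
```

Note how the full cycle of length m = 16 appears before any value repeats, as described above.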
Any set of pseudo-random numbers will, by definition, fail on some problems. It is therefore desirable to have a second, fundamentally different generator available, so that results from the two can be compared.
From now on, we will assume that we can generate a sequence of random numbers that can be taken as an approximation to the values of a sequence of independent uniform (0, 1) random variables. We will not explore the technical details about the construction of good generators; interested readers may consult L'Ecuyer (1994) for a survey of random number generators.
4.3 DISCRETE RANDOM VARIABLES
A discrete random variable X is specified by its probability mass function given by

$$P(X = x_j) = p_j, \quad j = 0, 1, \ldots, \qquad \sum_j p_j = 1. \qquad (4.2)$$

To generate X, generate a random number U, which is uniformly distributed in (0, 1), and set

$$X = \begin{cases} x_0 & \text{if } U < p_0, \\ x_1 & \text{if } p_0 \le U < p_0 + p_1, \\ \vdots & \\ x_j & \text{if } \sum_{i=0}^{j-1} p_i \le U < \sum_{i=0}^{j} p_i, \\ \vdots & \end{cases}$$
Recall that for 0 ≤ a < b ≤ 1, P(a ≤ U < b) = b − a. Thus,

$$P(X = x_j) = P\left(\sum_{i=0}^{j-1} p_i \le U < \sum_{i=0}^{j} p_i\right) = p_j, \qquad (4.3)$$

so that X has the desired distribution. Note that if the x_i are ordered so that x_0 < x_1 < ..., and if F denotes the distribution function of X, then $F(x_k) = \sum_{i=0}^{k} p_i$, and so X equals x_j if F(x_{j-1}) ≤ U < F(x_j). That is, after generating U, we determine the value of X by finding the interval [F(x_{j-1}), F(x_j)) in which U lies. This also means that we want to find the inverse of F at U, and thus the name inverse transform.
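The interval search described above can be sketched in Python (an illustrative sketch, not the book's code; the book works in S-PLUS):

```python
import random

def inverse_transform(xs, ps, u=None):
    """Generate one value from the discrete distribution P(X = xs[j]) = ps[j]
    by finding the interval [F(x_{j-1}), F(x_j)) that contains U."""
    if u is None:
        u = random.random()      # uniform (0, 1) random number
    cumulative = 0.0
    for x, p in zip(xs, ps):
        cumulative += p          # cumulative equals F(x) after this step
        if u < cumulative:
            return x
    return xs[-1]                # guard against floating-point round-off

# Hypothetical example: P(X=1) = 0.2, P(X=2) = 0.5, P(X=3) = 0.3.
# U = 0.65 lies in [F(1), F(2)) = [0.2, 0.7), so X = 2:
print(inverse_transform([1, 2, 3], [0.2, 0.5, 0.3], u=0.65))  # -> 2
```

This assumes the xs are listed in increasing order, matching the ordering convention in the text.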
Example 4.1 Suppose we want to generate a binomial random variable X with parameters n and p. The probability mass function of X is given by

$$p_i = P(X = i) = \binom{n}{i} p^i (1-p)^{n-i}, \quad i = 0, 1, \ldots, n.$$

From this probability mass function, we see that

$$p_{i+1} = \frac{n-i}{i+1} \cdot \frac{p}{1-p}\, p_i.$$

The algorithm goes as follows:
1. Generate U.
2. If U < p_0, set X = 0 and stop.
3. If p_0 ≤ U < p_0 + p_1, set X = 1 and stop.
   ...
4. If p_0 + ... + p_{n-1} ≤ U < p_0 + ... + p_n, set X = n and stop.

Recursively, by letting i be the current value of X, pr = p_i = P(X = i), and F = F(i) = P(X ≤ i), the probability that X is less than or equal to i, the above algorithm can be succinctly written as:
STEP 1: Generate U.
STEP 2: Set c = p/(1-p), i = 0, pr = (1-p)^n, F = pr.
STEP 3: If U < F, set X = i and stop.
STEP 4: Set pr = c(n-i)/(i+1) pr, F = F + pr, i = i + 1.
STEP 5: Go to Step 3. □
To generate a binomial random variable X with parameters n = 10 and p = 0.7 in S-PLUS, type:

n <- 10
p <- 0.7
U <- runif(1, 0, 1)
c <- p/(1-p)
i <- 0
pr <- (1-p)^n
f <- pr
for (i in 0:n) {
  if (U < f) {
    X <- i
    break
  }
  else {
    pr <- c*(n-i)/(i+1)*pr
    f <- f + pr
  }
}
X
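For readers without S-PLUS, the same recursion can be written in Python. This is an illustrative sketch, not from the book; the function name is ours:

```python
import random

def binomial_inverse_transform(n, p, u=None):
    """Inverse-transform generation of a binomial(n, p) random variable,
    using the recursion p_{i+1} = ((n-i)/(i+1)) * (p/(1-p)) * p_i."""
    if u is None:
        u = random.random()          # STEP 1: generate U
    c = p / (1 - p)                  # STEP 2: initialize the recursion
    pr = (1 - p) ** n                # pr = P(X = 0)
    f = pr                           # f = F(i) = P(X <= i)
    i = 0
    while u >= f and i < n:          # STEP 3: stop once U < F(i)
        pr = c * (n - i) / (i + 1) * pr   # STEP 4: advance to p_{i+1}
        f += pr
        i += 1
    return i

# With n = 10, p = 0.7, a uniform draw of u = 0.5 falls between
# F(6) ≈ 0.350 and F(7) ≈ 0.617, so the generated value is 7:
print(binomial_inverse_transform(10, 0.7, u=0.5))  # -> 7
```

The `i < n` guard protects against floating-point round-off when u is very close to 1.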
4.4 ACCEPTANCE-REJECTION METHOD
In the preceding example, we see how the inverse transform can be used to generate a known discrete distribution. For most of the standard distributions, we can simulate their values easily by means of built-in routines available in standard packages. But when we move away from standard distributions, simulating their values becomes more involved. One of the most useful methods is the acceptance-rejection algorithm.
Suppose we have an efficient method, e.g., a computer package, to simulate a random variable Y having probability mass function {q_j, j ≥ 0}. We can use this as a basis for simulating a distribution X having probability mass function {p_j, j ≥ 0} by first simulating Y and then accepting this simulated value with a probability proportional to p_Y / q_Y. Specifically, let c be a constant such that

$$\frac{p_j}{q_j} \le c \quad \text{for all } j \text{ such that } p_j > 0.$$
Then we can simulate the values of X having probability mass function p_j = P(X = j) as follows:

STEP 1: Simulate the value of Y from q_j.
STEP 2: Generate a uniform random number U.
STEP 3: If U < p_Y / (c q_Y), set X = Y and stop. Otherwise, go to Step 1.

Theorem 4.1 The acceptance-rejection algorithm generates a random vari-
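The three steps of the algorithm can be sketched in Python (an illustrative sketch; the example distribution below is hypothetical, not from the book):

```python
import random

def acceptance_rejection(p, q, sample_q, c):
    """Simulate X with pmf p[j] by proposing Y from q (drawn via sample_q)
    and accepting Y with probability p[Y] / (c * q[Y])."""
    while True:
        y = sample_q()                 # STEP 1: simulate Y from q
        u = random.random()            # STEP 2: generate uniform U
        if u < p[y] / (c * q[y]):      # STEP 3: accept, or start over
            return y

# Hypothetical example: target pmf p on {0, 1, 2}, uniform proposal q.
p = [0.5, 0.3, 0.2]
q = [1/3, 1/3, 1/3]
c = max(pj / qj for pj, qj in zip(p, q))   # smallest valid c: p_j/q_j <= c
x = acceptance_rejection(p, q, lambda: random.randrange(3), c)
print(x)  # a value in {0, 1, 2} distributed according to p
```

Choosing c as small as possible matters in practice, since the expected number of proposals per accepted value is c.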