
Stochastic processes and probability laws


Let $(E, \mathcal{E})$ be a measurable space. Let $T$ be an arbitrary set, countable or uncountable. For each $t$ in $T$, let $X_t$ be a random variable taking values in $(E, \mathcal{E})$. Then, the collection $\{X_t : t \in T\}$ is called a stochastic process with state space $(E, \mathcal{E})$ and parameter set $T$.

For each $\omega$ in $\Omega$, let $X(\omega)$ denote the function $t \mapsto X_t(\omega)$ from $T$ into $E$; then, $X(\omega)$ is an element of $E^T$. By Proposition I.6.27, the mapping $X: \omega \mapsto X(\omega)$ from $\Omega$ into $E^T$ is measurable relative to $\mathcal{H}$ and $\mathcal{E}^T$. In other words, we may regard the stochastic process $\{X_t : t \in T\}$ as a random variable $X$ that takes values in the product space $(F, \mathcal{F}) = (E^T, \mathcal{E}^T)$.

The distribution of the random variable $X$, that is, the probability measure $P \circ X^{-1}$ on $(F, \mathcal{F})$, is called the probability law of the stochastic process $\{X_t : t \in T\}$.

Recall that the product $\sigma$-algebra $\mathcal{F}$ is generated by the finite-dimensional rectangles and, therefore, a probability measure on $(F, \mathcal{F})$ is determined by the values it assigns to those rectangles. It follows that the probability law of $X$ is determined by the values

$$P\{X_{t_1} \in A_1, \ldots, X_{t_n} \in A_n\} \tag{1.10}$$

with $n$ ranging over $\mathbb{N}$, $t_1, \ldots, t_n$ over $T$, and $A_1, \ldots, A_n$ over $\mathcal{E}$. Much of the theory of stochastic processes has to do with computing integrals concerning $X$ from the given data regarding 1.10.
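For a small illustration (ours, not from the text): if the coordinates are independent, with each $X_t$ having distribution $\mu_t$, then the values 1.10 factor as

$$P\{X_{t_1} \in A_1, \ldots, X_{t_n} \in A_n\} = \prod_{i=1}^n \mu_{t_i}(A_i),$$

and the probability law of $X$ is the product measure $\bigotimes_{t \in T} \mu_t$ on $(E^T, \mathcal{E}^T)$.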

Examples of distributions

The aim here is to introduce a few distributions that are encountered often in probabilistic work. Other examples will appear in the exercises below and in the next section.

1.11 Poisson distribution. Let $X$ be a random variable taking values in $\mathbb{N} = \{0, 1, \ldots\}$; it is to be understood that the relevant $\sigma$-algebra on $\mathbb{N}$ is the discrete $\sigma$-algebra of all subsets. Then, $X$ is said to have the Poisson distribution with mean $c$ if

$$P\{X = n\} = e^{-c}\,\frac{c^n}{n!}, \qquad n \in \mathbb{N}.$$

Here, $c$ is a strictly positive real number. The corresponding distribution is the probability measure $\mu$ on $\mathbb{N}$ defined by

$$\mu(A) = \sum_{n \in A} e^{-c}\,\frac{c^n}{n!}, \qquad A \subset \mathbb{N}.$$
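As a quick numerical sanity check (ours, not part of the text), the following sketch tabulates this pmf and confirms that $\mu$ has total mass 1 and that $X$ indeed has mean $c$; the value of $c$ and the truncation point 200 are arbitrary choices, the latter large enough for the tail to be negligible.

```python
import math

c = 3.7  # any strictly positive constant

def poisson_pmf(n: int, c: float) -> float:
    """P{X = n} = e^{-c} c^n / n!, computed in log space to avoid overflow."""
    return math.exp(-c + n * math.log(c) - math.lgamma(n + 1))

def mu(A, c: float) -> float:
    """mu(A) = sum over n in A of the pmf, per the displayed formula."""
    return sum(poisson_pmf(n, c) for n in A)

total = mu(range(200), c)                               # should be ~1
mean = sum(n * poisson_pmf(n, c) for n in range(200))   # should be ~c
print(total, mean)
```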

1.12 Exponential distributions. Let $X$ be a random variable with values in $\mathbb{R}_+$; the relevant $\sigma$-algebra on $\mathbb{R}_+$ is $\mathcal{B}(\mathbb{R}_+)$. Then, $X$ is said to have the exponential distribution with scale parameter $c$ if its distribution $\mu$ has the form

$$\mu(dx) = dx\, c e^{-cx}, \qquad x \in \mathbb{R}_+,$$

where $dx$ is short for $\mathrm{Leb}(dx)$. Here, $c > 0$ is a constant, and we used the form I.5.8 to display $\mu$. In other words, $\mu$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}_+$ and its density function is $p(x) = c e^{-cx}$, $x \in \mathbb{R}_+$. When $c = 1$, this distribution is called the standard exponential.
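As a worked check (ours, not the text's) that $\mu$ is a probability measure and that $c$ deserves the name scale parameter:

$$\mu(\mathbb{R}_+) = \int_0^\infty c e^{-cx}\, dx = 1, \qquad \int_0^\infty x\, c e^{-cx}\, dx = \frac{1}{c},$$

so that if $X$ has distribution $\mu$, then $cX$ has the standard exponential distribution.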

1.13 Gamma distributions. Let $X$ be a random variable with values in $\mathbb{R}_+$. It is said to have the gamma distribution with shape index $a$ and scale parameter $c$ if its distribution $\mu$ has the form

$$\mu(dx) = dx\, \frac{c^a x^{a-1} e^{-cx}}{\Gamma(a)}, \qquad x \in \mathbb{R}_+.$$

Here, $a > 0$ and $c > 0$ are constants, and $\Gamma(a)$ is the so-called gamma function.

The last is defined so that $\mu$ is a probability measure, that is,

$$\Gamma(a) = \int_0^\infty dx\, x^{a-1} e^{-x}.$$

Incidentally, the density function for $\mu$ takes the value $+\infty$ at $x = 0$ if $a < 1$, but this is immaterial since $\mathrm{Leb}\{0\} = 0$; or, in probabilistic terms, $X \in (0, \infty)$ almost surely, and it is sufficient to define the density on $(0, \infty)$. In general, $\Gamma(a) = (a-1)\,\Gamma(a-1)$ for $a > 1$. This, together with $\Gamma(\tfrac{1}{2}) = \sqrt{\pi}$ and $\Gamma(1) = 1$, allows one to give an explicit expression for $\Gamma(a)$ when $a > 0$ is an integer or half-integer. In particular, when $a = 1$, the gamma distribution becomes the exponential; and when $c = \tfrac{1}{2}$ and $a = \tfrac{n}{2}$ for some integer $n \geq 1$, it is also called the chi-square distribution with $n$ degrees of freedom. Finally, when $c = 1$, we call the distribution the standard gamma distribution with shape index $a$.
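A minimal numerical sketch (ours) of these gamma-function facts, using the standard-library function math.gamma; the test values are arbitrary:

```python
import math

# Gamma(1/2) = sqrt(pi) and Gamma(1) = 1
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))
assert math.isclose(math.gamma(1.0), 1.0)

# the recursion Gamma(a) = (a - 1) Gamma(a - 1) for a > 1
for a in (1.5, 2.0, 3.5, 7.0):
    assert math.isclose(math.gamma(a), (a - 1) * math.gamma(a - 1))

def gamma_density(x: float, a: float, c: float) -> float:
    """Density c^a x^{a-1} e^{-cx} / Gamma(a) on (0, infinity)."""
    return c**a * x**(a - 1) * math.exp(-c * x) / math.gamma(a)

# with a = 1 the gamma density reduces to the exponential density c e^{-cx}
x, c = 0.8, 2.3
assert math.isclose(gamma_density(x, 1.0, c), c * math.exp(-c * x))
```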

1.14 Gaussian distributions. Let $X$ be a real-valued random variable. It is said to have the Gaussian (or normal) distribution with mean $a$ and variance $b$ if its distribution $\mu$ has the form

$$\mu(dx) = dx\, \frac{1}{\sqrt{2\pi b}}\, e^{-(x-a)^2/2b}, \qquad x \in \mathbb{R}.$$

Here, $a \in \mathbb{R}$ and $b > 0$, both constant. If $a = 0$ and $b = 1$, then $\mu$ is called the standard Gaussian distribution.
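A standard observation (ours, for orientation): if $Z$ has the standard Gaussian distribution, then $X = a + \sqrt{b}\, Z$ has the Gaussian distribution with mean $a$ and variance $b$, as the change of variable $x = a + \sqrt{b}\, z$ in the density above shows.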

1.15 Independent gamma variables. Let $\gamma_a$ denote the standard gamma distribution with shape index $a$; this is the probability measure $\mu$ of Example 1.13 above but with $c = 1$. Let $X$ have the distribution $\gamma_a$, and $Y$ the distribution $\gamma_b$; here $a > 0$ and $b > 0$. Suppose that $X$ and $Y$ are independent.

Then, the joint distribution of $X$ and $Y$ is the product measure $\gamma_a \times \gamma_b$, that is, the distribution of the pair $(X, Y)$ is the probability measure $\pi$ on $\mathbb{R}_+ \times \mathbb{R}_+$ given by

$$\pi(dx, dy) = \gamma_a(dx)\, \gamma_b(dy) = dx\, dy\, \frac{e^{-x} x^{a-1}}{\Gamma(a)} \cdot \frac{e^{-y} y^{b-1}}{\Gamma(b)}.$$

1.16 Gaussian with exponential variance. Let $X$ and $Y$ be random variables taking values in $\mathbb{R}_+$ and $\mathbb{R}$ respectively. Suppose that their joint distribution $\pi$ is given by

$$\pi(dx, dy) = dx\, dy\, c e^{-cx}\, \frac{1}{\sqrt{2\pi x}}\, e^{-y^2/2x}, \qquad x \in \mathbb{R}_+,\ y \in \mathbb{R}.$$

Note that $\pi$ has the form $\pi(dx, dy) = \mu(dx)\, K(x, dy)$, where $\mu$ is the exponential distribution with scale parameter $c$, and, for each $x$, the distribution $B \mapsto K(x, B)$ is Gaussian with mean 0 and variance $x$. Indeed, $K$ is a transition kernel from $\mathbb{R}_+$ into $\mathbb{R}$, and $\pi$ is an instance of the measure appearing in Theorem I.6.11. It is clear that the marginal distribution of $X$ is the exponential distribution $\mu$. The marginal distribution $\nu$ of $Y$ has the form $\nu = \mu K$ introduced in Theorem I.6.3:

$$\nu(B) = \pi(\mathbb{R}_+ \times B) = \int_{\mathbb{R}_+} \mu(dx)\, K(x, B), \qquad B \in \mathcal{B}_{\mathbb{R}}.$$

It is easily seen that $\nu$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$, that is, $\nu$ has the form $\nu(dy) = dy\, n(y)$, and the density function is

$$n(y) = \int_0^\infty dx\, c e^{-cx}\, \frac{e^{-y^2/2x}}{\sqrt{2\pi x}} = \tfrac{1}{2}\, b\, e^{-b|y|}, \qquad y \in \mathbb{R},$$

with $b = \sqrt{2c}$. Incidentally, this distribution $\nu$ is called the two-sided exponential distribution with parameter $b$. Finally, we note that $\pi$ is not the product $\mu \times \nu$, that is, $X$ and $Y$ are dependent variables.
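The closed form for $n(y)$ can be checked numerically. Below is a minimal sketch (ours), assuming SciPy is available; the value of $c$ and the test points $y$ are arbitrary choices.

```python
import math
from scipy.integrate import quad

c = 1.5
b = math.sqrt(2 * c)

def integrand(x: float, y: float) -> float:
    # c e^{-cx} times the Gaussian(0, x) density evaluated at y
    return c * math.exp(-c * x) * math.exp(-y**2 / (2 * x)) / math.sqrt(2 * math.pi * x)

for y in (0.25, 1.0, 2.5):
    numeric, _err = quad(integrand, 0, math.inf, args=(y,))
    closed_form = 0.5 * b * math.exp(-b * abs(y))
    print(y, numeric, closed_form)  # the two columns should agree
```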

Exercises and complements

1.17 Distribution functions. Let $X$ be a random variable taking values in $\bar{\mathbb{R}} = [-\infty, +\infty]$. Let $\mu$ be its distribution, and $c$ its distribution function, defined by 1.4. Then, $c$ is a function from $\mathbb{R}$ into $[0,1]$. It is increasing and right-continuous, as indicated in Exercise I.5.14.

a) Since $c$ is increasing, the left-hand limit $c(x-) = \lim_{y \uparrow x} c(y)$ exists for every $x$ in $\mathbb{R}$. Similarly, the limits

$$c(-\infty) = \lim_{x \downarrow -\infty} c(x), \qquad c(+\infty) = \lim_{x \uparrow \infty} c(x)$$

exist. Show that

$$c(x-) = P\{X < x\}, \qquad c(x) - c(x-) = P\{X = x\},$$

$$c(-\infty) = P\{X = -\infty\}, \qquad c(+\infty) = P\{X < \infty\} = 1 - P\{X = \infty\}.$$

b) Let $D$ be the set of all atoms of the distribution $\mu$. Then, $D$ consists of all $x$ in $\mathbb{R}$ for which $c(x) - c(x-) > 0$, plus the point $-\infty$ if $c(-\infty) > 0$, plus the point $+\infty$ if $c(+\infty) < 1$. Of course, $D$ is countable. Define $D_x = D \cap (-\infty, x]$ and

$$a(x) = c(-\infty) + \sum_{y \in D_x} [c(y) - c(y-)], \qquad b(x) = c(x) - a(x)$$

for $x$ in $\mathbb{R}$. Then, $a$ is an increasing right-continuous function that increases by jumps only, and $b$ is increasing and continuous. Show that $a$ is the distribution function of the measure

$$\mu_a(B) = \mu(B \cap D), \qquad B \in \mathcal{B}(\bar{\mathbb{R}}),$$

and $b$ is the distribution function of the measure $\mu_b = \mu - \mu_a$. Note that $\mu_a$ is purely atomic and $\mu_b$ is diffuse. The random variable $X$ is almost surely discrete if and only if $\mu = \mu_a$, that is, $a = c$.
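To make the decomposition concrete, here is a small sketch (ours, with an arbitrarily chosen example): the mixture $\mu = 0.3\,\delta_0 + 0.7\,\mathrm{Exp}(1)$ has a single atom at $0$, so $a$ jumps there and $b$ carries the continuous part.

```python
import math

def c(x: float) -> float:
    """Distribution function of mu = 0.3 delta_0 + 0.7 Exp(1)."""
    return 0.0 if x < 0 else 0.3 + 0.7 * (1 - math.exp(-x))

def a(x: float) -> float:
    # atomic part: c(-inf) = 0 plus the single jump c(0) - c(0-) = 0.3
    return 0.0 if x < 0 else 0.3

def b(x: float) -> float:
    # continuous part b = c - a
    return c(x) - a(x)

for x in (-1.0, 0.0, 0.5, 2.0):
    print(x, c(x), a(x), b(x))  # a is a step function, b is continuous
```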

1.18 Quantile functions. Let $X$ be real-valued, and let $c$ be its distribution function. Note that, then, $c(-\infty) = 0$ and $c(+\infty) = 1$. Suppose that $c$ is continuous and strictly increasing, and let $q$ be the functional inverse of $c$, that is, $q(u) = x$ if and only if $c(x) = u$ for $u$ in $(0,1)$. The function $q: (0,1) \to \mathbb{R}$ is called the quantile function of $X$ since

$$P\{X \leq q(u)\} = u, \qquad u \in (0,1).$$

Let $U$ be a random variable having the uniform distribution on $(0,1)$, that is, the distribution of $U$ is the Lebesgue measure on $(0,1)$. Show that, then, the random variable $Y = q \circ U$ has the same distribution as $X$. In general, $Y \neq X$.
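This is the inverse transform method. A minimal sketch (ours), taking $c(x) = 1 - e^{-\lambda x}$, so that $q(u) = -\ln(1-u)/\lambda$, as an arbitrary example:

```python
import math
import random

lam = 2.0  # rate of the target exponential distribution (arbitrary choice)

def q(u: float) -> float:
    """Functional inverse of c(x) = 1 - exp(-lam x) on (0, 1)."""
    return -math.log(1.0 - u) / lam

random.seed(0)
samples = [q(random.random()) for _ in range(100_000)]  # Y = q(U)

# the empirical mean should approach the exponential mean 1/lam
print(sum(samples) / len(samples), 1 / lam)
```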

1.19 Continuation. This is to re-do the preceding exercise assuming that $c: \mathbb{R} \to [0,1]$ is only increasing and right-continuous. Let $q: (0,1) \to \bar{\mathbb{R}}$ be the right-continuous functional inverse of $c$, that is,

$$q(u) = \inf\{x \in \mathbb{R} : c(x) > u\}$$

with the usual conventions that $\inf \mathbb{R} = -\infty$ and $\inf \emptyset = +\infty$. We call $q$ the quantile function corresponding to $c$ by analogy with the preceding exercise.

Recall from Exercise I.5.13 that $q$ is increasing and right-continuous, and that $c$ is related to $q$ by the same formula with which $q$ is related to $c$. Note that $q$ is real-valued if and only if $c(-\infty) = 0$ and $c(+\infty) = 1$. See also Figure 1.

Show that $c(x-) \leq u$ if and only if $q(u) \geq x$ and, by symmetry, $q(u-) \leq x$ if and only if $c(x) \geq u$.

1.20 Construction of probability measures on $\bar{\mathbb{R}}$. Let $c$ be a cumulative distribution function, that is, $c: \mathbb{R} \to [0,1]$ is increasing and right-continuous.

Let $q: (0,1) \to \bar{\mathbb{R}}$ be the corresponding quantile function. Let $\lambda$ denote the Lebesgue measure on $(0,1)$ and put $\mu = \lambda \circ q^{-1}$. Show that $\mu$ is a probability measure on $\bar{\mathbb{R}}$. Show that $\mu$ is the distribution on $\bar{\mathbb{R}}$ corresponding to the distribution function $c$. Thus, to every distribution function $c$ on $\mathbb{R}$ there corresponds a unique probability measure $\mu$ on $\bar{\mathbb{R}}$, and vice versa.

1.21 Construction of random variables. Let $\mu$ be a probability measure on $\bar{\mathbb{R}}$. Then, there exists a probability space $(\Omega, \mathcal{H}, P)$ and a random variable $X: \Omega \to \bar{\mathbb{R}}$ such that $\mu$ is the distribution of $X$: take $\Omega = (0,1)$, $\mathcal{H} = \mathcal{B}(0,1)$, $P = \mathrm{Leb}$, and define $X(\omega) = q(\omega)$ for $\omega$ in $\Omega$, where $q$ is the quantile function corresponding to the measure $\mu$ (via the cumulative distribution function).

See Exercise I.5.15 for the extension of this construction to abstract spaces.

This setup is the theoretical basis of Monte-Carlo studies.
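For a discrete target, the generalized inverse of 1.19 can be evaluated with a cumulative-sum search. A minimal sketch (ours), assuming NumPy and an arbitrarily chosen three-point distribution:

```python
import numpy as np

# an arbitrary discrete mu on three points
xs = np.array([-1.0, 0.0, 2.5])   # support
ps = np.array([0.2, 0.5, 0.3])    # probabilities, summing to 1
cum = np.cumsum(ps)               # c evaluated at the support points

def q(u):
    """q(u) = inf{x : c(x) > u}: first support point where c exceeds u."""
    idx = np.searchsorted(cum, u, side="right")
    idx = np.minimum(idx, len(xs) - 1)  # guard against round-off in cum[-1]
    return xs[idx]

rng = np.random.default_rng(0)
samples = q(rng.uniform(size=100_000))  # X(omega) = q(omega), P = Leb on (0,1)

# empirical frequencies should approach ps
print([np.mean(samples == x) for x in xs])
```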

1.22 Supplement on quantiles. The literature contains definitions similar to that of $q$ in 1.19, but with slight differences; one of the popular ones is

$$p(u) = \inf\{x \in \mathbb{R} : c(x) \geq u\}, \qquad u \in (0,1).$$

Some people prefer supremums, but there is nothing different, since $q(u) = \sup\{x : c(x) \leq u\}$ and $p(u) = \sup\{x : c(x) < u\}$. In fact, there is a close relationship between $p$ and $q$: we have $p(u) = q(u-) = \lim_{v \uparrow u} q(v)$. The function $q$ is right-continuous, whereas $p$ is left-continuous. We prefer $q$ over $p$, because $q$ and $c$ are functional inverses of each other. Incidentally, in the constructions of 1.20 and 1.21 above, the minor difference between $p$ and $q$ proves unimportant: since $q$ is increasing and right-continuous, $p(u) = q(u-)$ differs from $q(u)$ for at most countably many $u$; therefore, $\mathrm{Leb}\{u : p(u) \neq q(u)\} = 0$ and, hence, $\lambda \circ q^{-1} = \lambda \circ p^{-1}$ with $\lambda = \mathrm{Leb}$ on $(0,1)$.

2 Expectations

Throughout this section, $(\Omega, \mathcal{H}, P)$ is a probability space, and all random variables are defined on $\Omega$ and take values in $\bar{\mathbb{R}}$, unless stated otherwise.

Let $X$ be a random variable. Since it is $\mathcal{H}$-measurable, its integral with respect to the measure $P$ makes sense. That integral is called the expected value of $X$ and is denoted by any of the following:

$$EX = \int_\Omega P(d\omega)\, X(\omega) = \int_\Omega X\, dP = PX. \tag{2.1}$$

[Figure 2: The integral $PX$ is the area under $X$; the expected value $EX$ is the constant "closest" to $X$.]

The expected value $EX$ exists if and only if the integral does, that is, if and only if we do not have $EX^+ = EX^- = +\infty$. Of course, $EX$ exists whenever $X \geq 0$, and $EX$ exists and is finite if $X$ is bounded.

We shall treat $E$ as an operator, the expectation operator corresponding to $P$, and call $EX$ the expectation of $X$ from time to time. The change in notation serves to highlight the important change in our interpretation of $EX$: the integral $PX$ is the "area under the function" $X$ in a generalized sense, whereas the expectation $EX$ is the "weighted average of the values" of $X$, the weight distribution being specified by $P$, the total weight being $P(\Omega) = 1$.

See Figure 2 above for the distinction.

Except for this slight change in notation, all the conventions and notations of integration carry over to expectations. In particular, $X$ is said to be integrable if $EX$ exists and is finite. The integral of $X$ over an event $H$ is $EX 1_H$. As before with integrals, we shall state most results for positive random variables, because expectations always exist for such variables, and because the extensions to arbitrary random variables are generally obvious.
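Combining this with the construction in 1.21, the expectation can be approximated directly as an integral over $\Omega = (0,1)$. A minimal sketch (ours), using the standard exponential as an arbitrary example, for which $EX = 1$:

```python
import math

# Omega = (0, 1), P = Leb, X(omega) = q(omega) = -ln(1 - omega),
# the quantile function of the standard exponential, as in 1.21
def X(omega: float) -> float:
    return -math.log(1.0 - omega)

# EX = integral of X over Omega, approximated by a midpoint Riemann sum
n = 100_000
EX = sum(X((k + 0.5) / n) for k in range(n)) / n
print(EX)  # should be close to 1, the mean of the standard exponential
```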
