Lecture-09: Limit Theorems

Academic year: 2025


1 Growth of renewal counting processes

Lemma 1.1. Consider the counting process $N:\Omega\to\mathbb{Z}_+^{\mathbb{R}_+}$ associated with an i.i.d. inter-renewal time sequence $X:\Omega\to\mathbb{R}_+^{\mathbb{N}}$ with finite mean $\mathbb{E}X_n<\infty$. Let $N_\infty\triangleq\lim_{t\to\infty}N_t$; then $P\{N_\infty=\infty\}=1$.

Proof. It suffices to show $P\{N_\infty<\infty\}=0$. Since $\mathbb{E}[X_n]<\infty$, we have $P\{X_n=\infty\}=0$, and hence
\[
P\{N_\infty<\infty\}=P\Big(\bigcup_{n\in\mathbb{N}}\{N_\infty<n\}\Big)=P\Big(\bigcup_{n\in\mathbb{N}}\{S_n=\infty\}\Big)=P\Big(\bigcup_{n\in\mathbb{N}}\{X_n=\infty\}\Big)\leqslant\sum_{n\in\mathbb{N}}P\{X_n=\infty\}=0.
\]

Corollary 1.2. For a delayed renewal process with finite mean of the first renewal instant and of the subsequent inter-renewal times, $P\{\lim_{t\to\infty}N_t^D=\infty\}=1$.

1.1 Strong law for renewal processes

We observed that the number of renewals $N_t$ increases to infinity with the duration $t$. We will show that the growth of $N_t$ is asymptotically linear in $t$, and we will identify the coefficient of this linear growth.

Theorem 1.3 (Strong law). For a renewal counting process $N:\Omega\to\mathbb{Z}_+^{\mathbb{R}_+}$ with i.i.d. inter-renewal times $X:\Omega\to\mathbb{R}_+^{\mathbb{N}}$ having finite mean $\mu$, we have
\[
\lim_{t\to\infty}\frac{N_t}{t}=\frac{1}{\mu}\quad\text{almost surely.}
\]

Proof. Note that $S_{N_t}$ is the time of the last renewal before $t$, and $S_{N_t+1}$ is the time of the first renewal after $t$. Clearly, $S_{N_t}\leqslant t<S_{N_t+1}$. Dividing by $N_t$, we get
\[
\frac{S_{N_t}}{N_t}\leqslant\frac{t}{N_t}<\frac{S_{N_t+1}}{N_t}.\tag{1}
\]
Since $N_t$ increases monotonically to infinity as $t$ grows large, we can apply the strong law of large numbers to the sum $S_{N_t}=\sum_{i=1}^{N_t}X_i$ to get $\lim_{t\to\infty}\frac{S_{N_t}}{N_t}=\mu$ almost surely. For the upper bound, $\frac{S_{N_t+1}}{N_t}=\frac{S_{N_t+1}}{N_t+1}\cdot\frac{N_t+1}{N_t}\to\mu$ almost surely as well. Both sides of (1) thus converge to $\mu$, so $\frac{t}{N_t}\to\mu$ almost surely, and the result follows.
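As a numerical sanity check of Theorem 1.3, the sketch below simulates one long sample path and compares $N_t/t$ with $1/\mu$. The exponential inter-renewal distribution and all parameter values here are illustrative choices, not part of the lecture.

```python
import random

def count_renewals(horizon, mean, rng):
    """Count renewals in [0, horizon] for i.i.d. exponential inter-renewal times."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(1.0 / mean)  # X_i with mean mu
        if t > horizon:
            return n
        n += 1

rng = random.Random(1)
mu, horizon = 2.0, 50_000.0
nt = count_renewals(horizon, mu, rng)
rate = nt / horizon   # strong law predicts rate ≈ 1/mu = 0.5
print(rate)
```

A single path suffices here because the convergence in Theorem 1.3 is almost sure, not merely in distribution.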

Corollary 1.4. For a delayed renewal process with finite mean inter-arrival durations, $\lim_{t\to\infty}\frac{N_D(t)}{t}=\frac{1}{\mu_F}$ almost surely.

Example 1.5. Suppose you are in a casino with infinitely many games. We assume that $X:\Omega\to[0,1]^{\mathbb{N}}$ is an i.i.d. uniform sequence, where $X_i$ is the random probability of a win in game $i\in\mathbb{N}$.

One can continue to play a game or switch to another one. We are interested in a strategy that maximizes the long-run proportion of wins. Let $N(n)$ denote the number of losses in $n$ plays. Then the fraction of wins $P_W(n)$ is given by $P_W(n)=\frac{n-N(n)}{n}$.

We pick the strategy where an arbitrary game is selected and played until the first loss, upon which we switch to a new game. We show that $\lim_{n\to\infty}P_W(n)=1$ for this strategy. Let $T_i$ be the number of times game $i$ is played. We observe that, conditioned on $X_i$, the number of plays of game $i$ is geometrically distributed:
\[
\mathbb{E}[\mathbb{1}\{T_i=k\}\mid\sigma(X_i)]=X_i^{k-1}(1-X_i),\quad k\in\mathbb{N}.\tag{2}
\]

Figure 1: Time of last renewal — a sample path of $N_t$ with renewal instants $S_1,S_2,\ldots,S_{N_s}\leqslant s<S_{N_s+1}$.

Hence, it follows that the $T_i$ are i.i.d. random variables with mean
\[
\mathbb{E}T_i=\mathbb{E}[\mathbb{E}[T_i\mid X_i]]=\mathbb{E}\Big[\frac{1}{1-X_i}\Big]=\infty.
\]
It follows that each loss is a renewal event, and from the strong law for renewal processes we obtain
\[
\lim_{n\to\infty}\frac{N(n)}{n}=\frac{1}{\mathbb{E}[\text{time till first loss}]}=\frac{1}{\mathbb{E}T_i}=0.
\]
Hence $P_W(n)=1-\frac{N(n)}{n}\to 1$ almost surely.
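The play-till-first-loss strategy of Example 1.5 is easy to simulate. The sketch below is illustrative only (the number of plays and the seed are arbitrary); the win fraction approaches 1 slowly, since the loss count grows roughly like $n/\log n$, so for a finite horizon we only expect a fraction well above, say, $0.8$ rather than exactly 1.

```python
import random

def win_fraction(n_plays, rng):
    """Play-till-first-loss strategy: keep the current game while it wins,
    switch to a fresh game (fresh uniform win probability) on each loss."""
    losses = 0
    p = rng.random()              # win probability of the current game, U[0,1]
    for _ in range(n_plays):
        if rng.random() < p:      # win: stay with the same game
            continue
        losses += 1               # loss: renewal event, switch game
        p = rng.random()
    return 1 - losses / n_plays

rng = random.Random(0)
pw = win_fraction(200_000, rng)
print(pw)
```

The slow convergence is exactly the infinite-mean phenomenon $\mathbb{E}T_i=\infty$ at work: losses become rarer but never negligible at any fixed horizon.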

1.2 Wald’s lemma for renewal processes

The basic renewal theorem implies that $\frac{N_t}{t}$ converges to $\frac{1}{\mu}$ almost surely. We are next interested in the convergence of the ratio $\frac{m_t}{t}$, where $m_t\triangleq\mathbb{E}N_t$ is the renewal function. Note that this is not obvious, since almost sure convergence does not imply convergence in mean. The following example illustrates this.

Example 1.6. Consider a Bernoulli random sequence $X:\Omega\to\{0,1\}^{\mathbb{N}}$ with probability $P\{X_n=1\}=\frac{1}{n}$, and another random sequence $Y:\Omega\to\mathbb{Z}_+^{\mathbb{N}}$ defined as $Y_n\triangleq nX_n$ for $n\in\mathbb{N}$. Then $P\{Y_n=0\}=1-\frac{1}{n}$, that is, $Y_n\to 0$ a.s. However, $\mathbb{E}[Y_n]=n\cdot\frac{1}{n}=1$ for all $n\in\mathbb{N}$, so $\mathbb{E}[Y_n]\to 1\neq 0$.
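The gap between the two modes of convergence in Example 1.6 can be seen empirically: for a fixed large $n$, almost every sample of $Y_n$ is zero, yet the sample mean stays near 1 because the rare nonzero value is huge. This is a hypothetical illustration with arbitrary parameters.

```python
import random

rng = random.Random(42)

def sample_Y(n):
    """One sample of Y_n = n * X_n, where P{X_n = 1} = 1/n."""
    return n if rng.random() < 1.0 / n else 0

n = 1000
samples = [sample_Y(n) for _ in range(100_000)]
zero_frac = samples.count(0) / len(samples)   # ≈ 1 - 1/n = 0.999
mean_Y = sum(samples) / len(samples)          # ≈ E[Y_n] = 1
print(zero_frac, mean_Y)
```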

Even though the basic renewal theorem does NOT imply it, the ratio $\frac{m_t}{t}$ does converge to $\frac{1}{\mu}$. We first need the following technical lemma.

Proposition 1.7 (Wald's lemma for renewal processes). Let $m:\mathbb{R}_+\to\mathbb{R}_+$ be the renewal function for a renewal counting process $N:\Omega\to\mathbb{Z}_+^{\mathbb{R}_+}$ with i.i.d. inter-arrival times $X:\Omega\to\mathbb{R}_+^{\mathbb{N}}$ having finite mean $\mu=\mathbb{E}[X_1]<\infty$. Then $N_t+1$ is a stopping time for the sequence $X$, and
\[
\mathbb{E}\Big[\sum_{i=1}^{N_t+1}X_i\Big]=\mu(1+m_t).
\]

Proof. We observe that for any $n\in\mathbb{N}$, the event $\{N_t+1=n\}$ belongs to $\sigma(X_1,\ldots,X_n)$, since
\[
\{N_t+1=n\}=\{S_{n-1}\leqslant t<S_n\}=\Big\{\sum_{i=1}^{n-1}X_i\leqslant t<\sum_{i=1}^{n-1}X_i+X_n\Big\}\in\sigma(X_1,\ldots,X_n).
\]
Thus $N_t+1$ is a stopping time with respect to the random sequence $X$, and the result follows from Wald's lemma.
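Proposition 1.7 equates two quantities that can both be estimated from the same simulated paths: $\mathbb{E}[S_{N_t+1}]$ on the left, and $\mu(1+m_t)$ on the right. The sketch below (illustrative; exponential inter-arrivals and all parameters are arbitrary choices) checks the identity numerically.

```python
import random

def one_path(t, mean, rng):
    """Return (N_t, S_{N_t + 1}) for one renewal path with exponential gaps."""
    s, n = 0.0, 0
    while s <= t:
        s += rng.expovariate(1.0 / mean)
        n += 1
    return n - 1, s          # n - 1 renewals occurred by time t; s = S_{N_t + 1}

rng = random.Random(7)
mu, t, trials = 1.5, 10.0, 200_000
paths = [one_path(t, mu, rng) for _ in range(trials)]
m_t = sum(n for n, _ in paths) / trials     # estimate of m_t = E[N_t]
lhs = sum(s for _, s in paths) / trials     # estimate of E[S_{N_t + 1}]
rhs = mu * (1 + m_t)                        # Wald's identity prediction
print(lhs, rhs)
```

Note that the identity holds for $N_t+1$, not $N_t$: stopping at the last renewal before $t$ would bias the summands and break Wald's lemma.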

Theorem 1.8 (Elementary renewal theorem). For a renewal process with finite mean inter-arrival times, the renewal function satisfies
\[
\lim_{t\to\infty}\frac{m_t}{t}=\frac{1}{\mu}.\tag{3}
\]

Proof. By assumption, the mean $\mu<\infty$. Further, we know that $S_{N_t+1}>t$. Taking expectations on both sides and using Proposition 1.7, we have $\mu(m_t+1)>t$. Dividing both sides by $\mu t$ and taking the limit inferior, we get
\[
\liminf_{t\to\infty}\frac{m_t}{t}\geqslant\frac{1}{\mu}.
\]

We employ a truncated random variable argument to show the reverse inequality. We define truncated inter-renewal times $\bar X:\Omega\to[0,M]^{\mathbb{N}}$ as $\bar X_n\triangleq X_n\wedge M$ for each $n\in\mathbb{N}$, with common mean denoted by $\mu_M$. Since $X$ is i.i.d., so is the truncated sequence $\bar X$, and hence we can define the corresponding renewal sequence $\bar S:\Omega\to\mathbb{R}_+^{\mathbb{N}}$ and counting process $\bar N:\Omega\to\mathbb{Z}_+^{\mathbb{R}_+}$ as
\[
\bar S_n\triangleq\sum_{i=1}^{n}\bar X_i,\quad n\in\mathbb{N},\qquad\text{and}\qquad\bar N_t\triangleq\sum_{n\in\mathbb{N}}\mathbb{1}\{\bar S_n\leqslant t\},\quad t\in\mathbb{R}_+.
\]
Note that since $S_n\geqslant\bar S_n$, the number of arrivals is higher for the truncated renewal process $\bar N_t$. That is, $N_t\leqslant\bar N_t$, and hence $m_t\leqslant\bar m_t$ by the monotonicity of expectation. Further, due to the truncation of inter-arrival times, the next renewal happens within $M$ units of time, that is, $\bar S_{\bar N_t+1}\leqslant t+M$. From the monotonicity of expectation and Wald's lemma for renewal processes, we get
\[
(1+\bar m_t)\mu_M\leqslant t+M.
\]

Dividing both sides by $t\mu_M$ and using the fact that $m_t\leqslant\bar m_t$ for all $t\in\mathbb{R}_+$, we obtain
\[
\limsup_{t\to\infty}\frac{m_t}{t}\leqslant\limsup_{t\to\infty}\frac{\bar m_t}{t}\leqslant\frac{1}{\mu_M}.
\]
The result follows from recognizing that $\lim_{M\to\infty}\mu_M=\mu$.
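Unlike the strong law, Theorem 1.8 is a statement about the expectation $m_t$, so a numerical check needs an average over many independent paths rather than one long path. The following sketch (uniform inter-arrivals and all parameters are arbitrary, illustrative choices) estimates $m_t/t$.

```python
import random

def renewal_count(t, rng):
    """N_t for i.i.d. Uniform(0, 2) inter-arrival times (mean mu = 1)."""
    s, n = 0.0, 0
    while True:
        s += rng.uniform(0.0, 2.0)
        if s > t:
            return n
        n += 1

rng = random.Random(3)
t, trials = 100.0, 5_000
m_t = sum(renewal_count(t, rng) for _ in range(trials)) / trials
ratio = m_t / t            # elementary renewal theorem predicts ratio ≈ 1/mu = 1
print(ratio)
```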

Corollary 1.9. For a delayed renewal process with finite mean inter-arrival durations, we have $\lim_{t\to\infty}\frac{m_D(t)}{t}=\frac{1}{\mu_F}$.

Example 1.10 (Markov chain). Consider a positive recurrent discrete-time Markov chain $X:\Omega\to\mathcal{X}^{\mathbb{N}}$ taking values in a discrete set $\mathcal{X}\subset\mathbb{R}$. Let the initial state be $X_0=x\in\mathcal{X}$ and $\tau_y^+(0)=0$ for $y\neq x\in\mathcal{X}$; then we can inductively define the $n$th recurrence time to state $y$ as the stopping time
\[
\tau_y^+(n)=\inf\{k>\tau_y^+(n-1):X_k=y\}.
\]
Since any discrete-time Markov chain satisfies the strong Markov property, it follows that $\tau_y^+:\Omega\to\mathbb{N}^{\mathbb{N}}$ forms a delayed renewal process with first-arrival distribution $P_x\{\tau_y^+(1)=k\}=f_{xy}(k)$, and the common distribution of the inter-arrival durations $X_n$, $n\geqslant 2$, given in terms of the first-return probability as
\[
P_y\{\tau_y^+(1)=k\}=f_{yy}(k),\quad k\in\mathbb{N}.
\]

We denote the associated counting process by $N_y:\Omega\to\mathbb{Z}_+^{\mathbb{N}}$, where $N_y(n)=\sum_{i\in\mathbb{N}}\mathbb{1}\{\tau_y^+(i)\leqslant n\}=\sum_{k=1}^{n}\mathbb{1}\{X_k=y\}$ denotes the number of visits to state $y$ up to time $n$. Let $\mu_{yy}=\mathbb{E}_y\tau_y^+(1)$ be the finite mean inter-arrival time for the renewal process, which is also the mean recurrence time to state $y$. From the strong law for delayed renewal processes it follows that
\[
P_y\Big\{\lim_{n\to\infty}\frac{N_y(n)}{n}=\frac{1}{\mu_{yy}}\Big\}=1.
\]

Since $N_y(n)$ is the number of visits to state $y$ in the first $n$ time steps, we have $\mathbb{E}_xN_y(n)=\sum_{k=1}^{n}P_x\{X_k=y\}=\sum_{k=1}^{n}p^{(k)}_{xy}$. From the elementary renewal theorem for delayed renewal processes it follows that
\[
\lim_{n\to\infty}\frac{\sum_{k=1}^{n}p^{(k)}_{xy}}{n}=\lim_{n\to\infty}\frac{\mathbb{E}_x[N_y(n)]}{n}=\frac{1}{\mu_{yy}}.\tag{4}
\]
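The renewal-theoretic reading of Example 1.10 can be illustrated with a simple chain. The sketch below (a hypothetical two-state chain on $\{0,1\}$ with invented transition probabilities $a$ and $b$) checks that the empirical visit frequency of state 1 matches $1/\mu_{11}=\pi_1=a/(a+b)$.

```python
import random

# Hypothetical two-state chain on {0, 1}: P(0 -> 1) = a, P(1 -> 0) = b.
# Stationary probability of state 1 is pi_1 = a / (a + b),
# so the mean recurrence time is mu_11 = 1 / pi_1.
a, b = 0.3, 0.2
rng = random.Random(11)

state, visits, steps = 0, 0, 200_000
for _ in range(steps):
    if state == 0:
        state = 1 if rng.random() < a else 0
    else:
        state = 0 if rng.random() < b else 1
    visits += (state == 1)

freq = visits / steps     # strong law for delayed renewal: freq ≈ pi_1 = 0.6
print(freq)
```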

1.3 Central limit theorem for renewal processes

Theorem 1.11. For a renewal process with inter-arrival times having finite mean $\mu$ and finite variance $\sigma^2$, the associated counting process converges in distribution to a normal random variable. Specifically,
\[
\lim_{t\to\infty}P\Bigg\{\frac{N_t-\frac{t}{\mu}}{\sigma\sqrt{t/\mu^3}}<y\Bigg\}=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{y}e^{-x^2/2}\,dx.
\]

Proof. Take $u=\frac{t}{\mu}+y\sigma\sqrt{t/\mu^3}$. We shall treat $u$ as an integer and proceed; the proof for general $u$ is an exercise. Recall that $\{N_t<u\}=\{S_u>t\}$. Equating the probabilities of these two events, we get
\[
P\{N_t<u\}=P\Big\{\frac{S_u-u\mu}{\sigma\sqrt{u}}>\frac{t-u\mu}{\sigma\sqrt{u}}\Big\}=P\Bigg\{\frac{S_u-u\mu}{\sigma\sqrt{u}}>-y\Big(1+\frac{y\sigma}{\sqrt{t\mu}}\Big)^{-1/2}\Bigg\}.
\]
By the central limit theorem, $\frac{S_u-u\mu}{\sigma\sqrt{u}}$ converges to a normal random variable with zero mean and unit variance as $t$ grows. We also observe that
\[
\lim_{t\to\infty}-y\Big(1+\frac{y\sigma}{\sqrt{t\mu}}\Big)^{-1/2}=-y.
\]
These results combine with the symmetry of the normal random variable to give the result.
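The normal approximation of Theorem 1.11 can be probed by standardizing many independent samples of $N_t$. The sketch below (uniform inter-arrivals; all parameters are illustrative) checks two coarse normal features of the standardized counts: roughly half the mass below zero, and unit variance.

```python
import math
import random

def renewal_count(t, rng):
    """N_t for i.i.d. Uniform(0, 2) inter-arrivals: mu = 1, sigma^2 = 1/3."""
    s, n = 0.0, 0
    while True:
        s += rng.uniform(0.0, 2.0)
        if s > t:
            return n
        n += 1

mu, sigma2, t, trials = 1.0, 1.0 / 3.0, 400.0, 10_000
rng = random.Random(5)
scale = math.sqrt(sigma2 * t / mu**3)
z = [(renewal_count(t, rng) - t / mu) / scale for _ in range(trials)]
frac_below_zero = sum(v < 0 for v in z) / trials   # ≈ Phi(0) = 0.5
var = sum(v * v for v in z) / trials               # ≈ 1
print(frac_below_zero, var)
```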

2 Patterns

Let $X:\Omega\to\mathcal{X}^{\mathbb{N}}$ be an i.i.d. sequence with common probability mass function $p\in\mathcal{M}(\mathcal{X})$. We denote the natural filtration of the process $X$ by $\mathcal{F}\triangleq(\mathcal{F}_n:n\in\mathbb{N})$, where $\mathcal{F}_n\triangleq\sigma(X_1,\ldots,X_n)$ for all $n\in\mathbb{N}$. Let $x=(x_1,\ldots,x_m)\in\mathcal{X}^m$ be a pattern, and inductively define the $n$th hitting time of the pattern $x$ by $S_0^x\triangleq 0$ and
\[
S_n^x\triangleq\inf\{k>S_{n-1}^x:X_k=x_m,X_{k-1}=x_{m-1},\ldots,X_{k-m+1}=x_1\}.
\]
It is easy to check that $S_n^x$ is adapted to $\mathcal{F}$, and one can verify that $S_n^x$ is almost surely finite for all $n\in\mathbb{N}$. It follows that $S^x$ is a sequence of stopping times adapted to $\mathcal{F}$. Since $X$ is i.i.d., it follows that $S^x:\Omega\to\mathbb{R}_+^{\mathbb{N}}$ is a delayed renewal sequence with inter-renewal durations $T_n^x\triangleq S_n^x-S_{n-1}^x$ being i.i.d. for $n\geqslant 2$ and independent of $T_1^x$.

2.1 Hitting time to pattern (1)

First we consider the simplest example, where the alphabet is $\mathcal{X}=\{0,1\}$ with common mean $\mathbb{E}X_1=p$, and the pattern is $x=(1)$. One way to solve this problem is to consider $S_1^1$ as a random variable and find its distribution. We can write
\[
P\{S_1^1=k\}=\bar p^{\,k-1}p,\quad k\in\mathbb{N}.
\]
We observe that $S_1^1$ is a geometric random variable, the time to first success, with mean equal to the reciprocal of the success probability $p$. An alternative way to solve this is via the renewal function approach. Note that $\{S_1^1=1\}=\{X_1=1\}$, and $S_1^1\mathbb{1}\{X_1=0\}=(1+S_1^1)\mathbb{1}\{X_0=0\}$ in distribution, where $S_1^1$ on the right is an independent copy, independent of $X_0$. The result follows from writing
\[
\mathbb{E}S_1^1=\mathbb{E}S_1^1\mathbb{1}\{S_1^1>1\}+\mathbb{E}S_1^1\mathbb{1}\{S_1^1=1\}=\bar p\,\mathbb{E}(1+S_1^1)+p=1+\bar p\,\mathbb{E}S_1^1,
\]
so that $\mathbb{E}S_1^1=\frac{1}{p}$.
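The geometric mean hitting time $\mathbb{E}S_1^1=1/p$ is easy to confirm by simulation; the sketch below is illustrative, with an arbitrary choice of $p$ and seed.

```python
import random

def hit_one(p, rng):
    """Number of i.i.d. Bernoulli(p) draws until the first 1 appears."""
    n = 1
    while rng.random() >= p:   # draw < p means X_n = 1 (a success)
        n += 1
    return n

p, trials = 0.3, 200_000
rng = random.Random(2)
mean_hit = sum(hit_one(p, rng) for _ in range(trials)) / trials
print(mean_hit)                # renewal argument predicts 1/p ≈ 3.333
```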

2.2 Hitting time to pattern (0, 1)

For the alphabet $\mathcal{X}=\{0,1\}$ with common mean $\mathbb{E}X_1=p$, we consider the two-length pattern $x=(0,1)$, so that $S_1^x=\inf\{n\in\mathbb{N}:X_n=1,X_{n-1}=0\}$. We can again model this hitting time as a random variable; however, directly finding the distribution of $S^x$ is slightly more complicated. We next attempt the renewal function approach. Recall that $S_1^x$ is independent of $X_0$, and the following equality holds in distribution: $S_1^x\mathbb{1}\{X_1=1\}=(1+S_1^x)\mathbb{1}\{X_0=1\}$. In addition, the following equality holds in distribution: $S_1^x\mathbb{1}\{X_1=0,X_2=0\}=(1+S_1^x)\mathbb{1}\{X_0=0,X_1=0\}$. Hence, we can write
\[
\mathbb{E}S_1^x=\mathbb{E}S_1^x\mathbb{1}\{X_1=1\}+\mathbb{E}S_1^x\mathbb{1}\{X_1=0,X_2=1\}+\mathbb{E}S_1^x\mathbb{1}\{X_1=0,X_2=0\}=p\,\mathbb{E}(1+S_1^x)+2p\bar p+\mathbb{E}S_1^x\mathbb{1}\{X_2=0,X_1=0\}.\tag{5}
\]
We recognize that the last term on the right-hand side can be written as
\[
\mathbb{E}S_1^x\mathbb{1}\{X_2=0,X_1=0\}=\bar p\,\mathbb{E}(1+S_1^x)\mathbb{1}\{X_1=0\}=\bar p^2+\bar p\,\mathbb{E}S_1^x\mathbb{1}\{X_1=0\}=\bar p^2+\bar p\,\mathbb{E}S_1^x-p\bar p\,\mathbb{E}(1+S_1^x).
\]
Combining the above two results, we can write
\[
\mathbb{E}S_1^x=2p\bar p+\bar p^2+\bar p\,\mathbb{E}S_1^x+p^2\,\mathbb{E}(1+S_1^x)=1+(\bar p+p^2)\mathbb{E}S_1^x.
\]
It follows that $\mathbb{E}S_1^x=\frac{1}{p\bar p}$.
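The answer $\mathbb{E}S_1^{(0,1)}=\frac{1}{p\bar p}$ can likewise be checked by simulation; the parameter values below are arbitrary.

```python
import random

def hit_01(p, rng):
    """First index n with (X_{n-1}, X_n) = (0, 1) in an i.i.d. Bernoulli(p) stream."""
    prev, n = None, 0
    while True:
        n += 1
        cur = 1 if rng.random() < p else 0
        if prev == 0 and cur == 1:
            return n
        prev = cur

p, trials = 0.4, 200_000
rng = random.Random(9)
mean_hit = sum(hit_01(p, rng) for _ in range(trials)) / trials
print(mean_hit)          # renewal argument predicts 1/(p * (1-p)) ≈ 4.167
```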

2.3 Hitting time to pattern x

For an i.i.d. sequence $X:\Omega\to\mathcal{X}^{\mathbb{N}}$, a general approach is to model $X_n^m=(X_n,X_{n-1},\ldots,X_{n-m+1})\in\mathcal{X}^m$ as an $m$-dimensional time-homogeneous irreducible positive recurrent Markov chain. Defining $\mathcal{Y}\triangleq\mathcal{X}^m$, we are interested in the mean hitting time to state $x$ of the joint process $X^m:\Omega\to\mathcal{Y}^{\mathbb{N}}$. It follows that the successive times at which the process $X^m$ hits a pattern $x\in\mathcal{Y}$ form a delayed renewal process in general. Considering the times when $X^m$ hits $x$, it follows from the strong law for renewal processes that the long-run fraction of visits to state $x$ is the reciprocal of the mean inter-renewal duration. That is,
\[
\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\mathbb{1}\{X_n^m=x\}=\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\mathbb{1}\{X_n=x_m,\ldots,X_{n-m+1}=x_1\}=\prod_{i=1}^{m}p_{x_i}=\frac{1}{\mathbb{E}T_k^x}.
\]
For each pattern $x\in\mathcal{Y}$, we define initial sub-patterns $x^k\triangleq(x_1,\ldots,x_k)$ for $k\in[m]$. If no initial sub-pattern is a final sub-pattern, i.e. $(x_1,\ldots,x_k)\neq(x_{m-k+1},\ldots,x_m)$ for every $k\in[m-1]$, then we observe that $S^x$ is a renewal sequence and
\[
\mathbb{E}S_1^x=\frac{1}{\prod_{i=1}^{m}p_{x_i}}.
\]

Example 2.1. Consider the patterns $(1)$ and $(0,1)$ for an i.i.d. Bernoulli sequence $X:\Omega\to\{0,1\}^{\mathbb{N}}$ with common mean $\mathbb{E}X_1=p$. Clearly, $(1)$ has no proper sub-pattern, and hence $\mathbb{E}S_1^1=\frac{1}{p}$. Similarly, $(0,1)$ has the initial sub-pattern $(0)$, which is not a final sub-pattern of $(0,1)$, and hence $\mathbb{E}S_1^{01}=\frac{1}{p\bar p}$.

If there exists a nonempty $I\subseteq[m-1]$ such that for each $k\in I$ the initial sub-pattern $x^k=(x_1,\ldots,x_k)$ equals the final sub-pattern $(x_{m-k+1},\ldots,x_m)$ of $x$, then the mean hitting time to the pattern $x$ is equal to a telescoping sum of mean hitting times to sub-patterns. That is, denoting $I\triangleq\{i_1,\ldots,i_k\}$ with the conventions $i_0\triangleq 0$ (the empty sub-pattern) and $i_{k+1}\triangleq m$, we can write
\[
\mathbb{E}S_1^x=\sum_{j=0}^{k}\mathbb{E}_{x^{i_j}}S_1^{x^{i_{j+1}}}.
\]
The mean duration between two successive hits of $x^{i_{j+1}}$ is $\mathbb{E}T_2^{x^{i_{j+1}}}=\frac{1}{\prod_{\ell=1}^{i_{j+1}}p_{x_\ell}}$. This is the same as the mean time from $x^{i_j}$ to $x^{i_{j+1}}$. Therefore,
\[
\mathbb{E}S_1^x=\sum_{j=0}^{k}\frac{1}{\prod_{\ell=1}^{i_{j+1}}p_{x_\ell}}.
\]

Example 2.2. Consider the patterns $(1,0,1)$ and $(1,0,1,1)$ for an i.i.d. Bernoulli sequence $X:\Omega\to\{0,1\}^{\mathbb{N}}$ with common mean $\mathbb{E}X_1=p$. The pattern $(1,0,1)$ has initial sub-patterns $(1)$ and $(1,0)$, of which $(1)$ also appears at the end. Therefore,
\[
\mathbb{E}S_1^{(101)}=\mathbb{E}S_1^{(1)}+\mathbb{E}_1S_1^{(101)}=\frac{1}{p}+\frac{1}{p^2\bar p}.
\]
The pattern $(1,0,1,1)$ has initial sub-patterns $(1)$, $(1,0)$, and $(1,0,1)$, of which $(1)$ also appears at the end. Therefore,
\[
\mathbb{E}S_1^{(1011)}=\mathbb{E}S_1^{(1)}+\mathbb{E}_1S_1^{(1011)}=\frac{1}{p}+\frac{1}{p^3\bar p}.
\]
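The telescoping formulas of Example 2.2 can be checked by simulation. For a fair coin ($p=\frac{1}{2}$) they predict $\mathbb{E}S_1^{(101)}=2+8=10$ and $\mathbb{E}S_1^{(1011)}=2+16=18$. The sketch below is illustrative; the choice $p=\frac{1}{2}$ and the trial counts are arbitrary.

```python
import random

def hit_pattern(pattern, p, rng):
    """First index at which `pattern` completes in an i.i.d. Bernoulli(p) stream."""
    window, n = [], 0
    while True:
        n += 1
        window.append(1 if rng.random() < p else 0)
        if len(window) > len(pattern):
            window.pop(0)              # keep a sliding window of pattern length
        if window == pattern:
            return n

p, trials = 0.5, 100_000
rng = random.Random(4)
mean_101 = sum(hit_pattern([1, 0, 1], p, rng) for _ in range(trials)) / trials
mean_1011 = sum(hit_pattern([1, 0, 1, 1], p, rng) for _ in range(trials)) / trials
print(mean_101, mean_1011)   # predicted: 10 and 18 for p = 1/2
```

Note the overlap effect: $(1,0,1,1)$ takes longer on average than the naive $1/(p^3\bar p)=16$ precisely because its initial sub-pattern $(1)$ recurs at its end.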
