7/25/2003
Lecture #5
Markov Processes
Anan Phonphoem, Ph.D.
anan@cpe.ku.ac.th
http://www.cpe.ku.ac.th/~anan
Computer Engineering Department, Kasetsart University, Bangkok, Thailand
Outline
• Markov Processes
• Discrete Time Markov Chain
• Homogeneous, Irreducible, Transient/Recurrent, Periodic/Aperiodic
• Ergodic
• Stationary Probability
• Transient Behavior
• Birth-Death Process
Markov Processes
• X(t) is a Markov Process if it satisfies the Markov (Memoryless) Property
• X(t) depends only upon the current state
• The past history is summarized in the current state:
P{X(tn+1) = xn+1 | X(tn) = xn, X(tn-1) = xn-1, …, X(t1) = x1}
  = P{X(tn+1) = xn+1 | X(tn) = xn}
where t1 < t2 < … < tn-1 < tn < tn+1
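The memoryless property can be sketched in code: a simulator of a discrete-time chain only ever consults the current state when drawing the next one. The 3-state transition matrix below is a made-up example, not from the lecture.

```python
import numpy as np

# Minimal sketch of the memoryless property: the next state is sampled
# using only the current state, never the earlier history.
# The 3-state transition matrix is a made-up example.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])

rng = np.random.default_rng(0)

def simulate(P, x0, n_steps):
    path = [x0]
    for _ in range(n_steps):
        current = path[-1]
        # the transition law looks only at `current`, not at path[:-1]
        path.append(int(rng.choice(len(P), p=P[current])))
    return path

path = simulate(P, x0=0, n_steps=10)
print(path)
```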
From Markov Processes …
• Discrete Time Markov Process: state changes occur at integer time points
• Continuous Time Markov Process: state changes occur at arbitrary times
From Markov Processes …
• Markov Chain: a Markov Process with a discrete state space
• Discrete Time Markov Chain: (discrete) state changes occur at integer time points
• Continuous Time Markov Chain: (discrete) state changes occur at arbitrary times
Discrete Time Markov Chains
• The process stays in a discrete state (position) and is permitted to change state only at discrete time points
Discrete Time Markov Chains
P{Xn = j | X1 = i1, X2 = i2, …, Xn-1 = in-1} = P{Xn = j | Xn-1 = in-1}, where n = 1, 2, 3, …
• Xn = j: the system is in state j at time n
• The system begins at time 0 with initial probability P[X0 = x]
• P{Xn = j | Xn-1 = in-1} is the one-step transition probability
Discrete Time Markov Chains
• From the initial probability and the one-step transition probability, we can find the probability of being in the various states at time n
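A sketch of that claim: starting from the initial probability vector π(0), repeated application of the one-step transition matrix P gives the state probabilities at any time n. The 2-state matrix is a made-up example.

```python
import numpy as np

# The initial probability vector pi(0) and the one-step transition matrix
# P determine the state probabilities at any time n.
# The 2-state matrix is a made-up example.
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])
pi_n = np.array([1.0, 0.0])    # pi(0): start in state 0 with probability 1

for _ in range(3):             # pi(n) = pi(n-1) P
    pi_n = pi_n @ P

print(pi_n)                    # pi(3), equal to pi(0) P^3
```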
Homogeneous Markov Chain
• If the transition probabilities are independent of n, the chain is called a Homogeneous Markov Chain
• Let pij ≡ P[Xn = j | Xn-1 = i]
• We are in state i and will be in state j at the next step
• The state transition probabilities depend only on the initial probability and the transition probability, regardless of the transition time
Homogeneous Markov Chain
• m-step transition probabilities:
pij(m) ≡ P[Xn+m = j | Xn = i] = ∑∀k pik(m-1) pkj,  m = 2, 3, …
(path: i → k in m-1 steps, then k → j in one step)
• Equivalently:
pij(m) = ∑∀k pik pkj(m-1)
(path: i → k in one step, then k → j in m-1 steps)
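The m-step recursion pij(m) = ∑k pik(m-1) pkj is exactly matrix multiplication, so the m-step transition matrix is the m-th power of P. A sketch with a made-up 2-state matrix:

```python
import numpy as np

# The m-step recursion is matrix multiplication: P(m) = P(m-1) P = P^m.
# The 2-state matrix is a made-up example.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

m = 4
Pm = P.copy()
for _ in range(m - 1):         # apply the recursion m-1 times
    Pm = Pm @ P                # sums over all intermediate states k

print(Pm)                      # equals numpy's matrix_power(P, 4)
```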
Irreducible Markov Chain
• A Markov Chain is irreducible if every state can be reached from every other state in a finite number of steps
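One way to test irreducibility numerically (a sketch, not from the lecture): state j is reachable from state i in at most N-1 steps iff entry (i, j) of I + P + … + P^(N-1) is positive. The example matrices are made-up illustrations.

```python
import numpy as np

# j is reachable from i in at most N-1 steps iff entry (i, j) of
# I + P + ... + P^(N-1) is positive. Example matrices are made up.
def is_irreducible(P):
    n = len(P)
    reach, Pk = np.eye(n), np.eye(n)
    for _ in range(n - 1):
        Pk = Pk @ P
        reach += Pk
    return bool((reach > 0).all())

ring = np.array([[0.0, 1.0, 0.0],   # 0 -> 1 -> 2 -> 0: irreducible
                 [0.0, 0.0, 1.0],
                 [1.0, 0.0, 0.0]])
trap = np.array([[1.0, 0.0],        # state 1 unreachable from state 0
                 [0.5, 0.5]])
print(is_irreducible(ring), is_irreducible(trap))   # True False
```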
Not Irreducible Markov Chain
• Case 1
Not Irreducible Markov Chain
• Case 2
– Let A = the set of all states in the Markov chain
– A1 ⊂ A
– If A1 consists of one or more states Ei such that once the process enters state Ei, it cannot move to any other state,
– then Ei is called an "Absorbing State"
– pii = 1
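A small sketch of Case 2: with an absorbing state (pii = 1) the chain cannot be irreducible, and over many steps all probability mass gets trapped there. The matrix is a made-up illustration.

```python
import numpy as np

# State 2 is absorbing (p22 = 1): once entered it is never left,
# so the chain is not irreducible. Made-up example matrix.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])   # pii = 1 for the absorbing state

# After many steps, all probability mass ends up in the absorbing state.
Pn = np.linalg.matrix_power(P, 200)
print(Pn[0])                      # approaches [0, 0, 1]
```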
Transient or Recurrent States
• fj(n) = P[the process first returns to state j exactly n steps after leaving it]
• fj ≡ ∑n fj(n) = P[the process ever returns to state j]
• Mj ≡ ∑n n·fj(n) = the mean recurrence time of state j
Transient or Recurrent States
• If fj < 1
– State Ej is called a "Transient State"
• If fj = 1
– State Ej is called a "Recurrent State"
– If Mj = ∞, Ej is called a "Recurrent Null State"
– If Mj < ∞, Ej is called a "Recurrent Nonnull State"
Periodic or Aperiodic
• Let β be an integer
• If the only possible steps at which the process can return to state Ei are β, 2β, 3β, …, then Ei is periodic with period β
• If β = 1, Ei is aperiodic
Ergodic
• A state Ej is ergodic if it is aperiodic and recurrent nonnull:
fj = 1, Mj < ∞, and β = 1
• A Markov Chain is ergodic
– if all states of the Markov Chain are ergodic
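For an ergodic state the mean recurrence time and the stationary probability are linked by Mj = 1/πj (a standard result, not stated explicitly on the slide). A sketch that estimates M0 from a long simulated path of a made-up 2-state chain:

```python
import numpy as np

# Sketch: for an ergodic chain, Mj = 1/pi_j. Estimate M0 from a long
# simulated path and compare with the stationary probability.
# The 2-state matrix is a made-up example.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi0 = 5 / 6                       # stationary: 0.1*pi0 = 0.5*pi1

rng = np.random.default_rng(1)
state, visits, n_steps = 0, 0, 100_000
for _ in range(n_steps):
    state = int(rng.choice(2, p=P[state]))
    if state == 0:
        visits += 1

M0_est = n_steps / visits         # steps per return to state 0
print(M0_est, 1 / pi0)            # both close to 1.2
```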
Theorem 1
• The states of an irreducible Markov Chain are either
– all transient, or
– all recurrent nonnull, or
– all recurrent null
• If periodic, then all states have the same period β
Limiting State Probabilities
• πj ≡ lim n→∞ P[Xn = j] = P[being in state j at an arbitrary time]
• The πj are called the limiting state probabilities
Theorem 2
• In an irreducible, aperiodic, homogeneous Markov Chain, the limiting state probabilities {πj} always exist and are independent of the initial state probability distribution {πj(0)}:
πj = lim n→∞ πj(n)
• Either Case (a):
– All states are transient, or
– All states are recurrent null
⇒ πj = 0 ∀j
⇒ No stationary distribution exists
• Or Case (b):
– All states are recurrent nonnull
⇒ πj > 0 ∀j
Markov Chain Example
• Driving from town to town
Markov Chain Example
• Let P = the transition probability matrix = [pij]
• Let π = [π0, π1, π2, …]
• From the balance equations:
π = πP
Markov Chain Example
• From the transition diagram: p01 = 3/4
Markov Chain Example
π0 = 0·π0 + (1/4)π1 + (1/4)π2
Markov Chain Example
Solution:
π0 = 0.20
π1 = 0.28
π2 = 0.52
• This is the stationary (equilibrium) state probability distribution
• This is an ergodic Markov Chain:
– finite number of states
– irreducible
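The balance equations above can be solved numerically. The full transition matrix is not reproduced in the extracted slides (only p01 = 3/4 and one balance equation survive), so the matrix below is an assumed reconstruction, filled in to be consistent with those fragments and with the solution π = (0.20, 0.28, 0.52):

```python
import numpy as np

# Assumed reconstruction of the town-driving transition matrix: only
# p01 = 3/4 and the balance equation pi0 = 0*pi0 + 1/4*pi1 + 1/4*pi2
# survive in the slides; the remaining entries are filled in to be
# consistent with those fragments.
P = np.array([[0.0,  0.75, 0.25],
              [0.25, 0.0,  0.75],
              [0.25, 0.25, 0.50]])

# Solve pi = pi P together with sum(pi) = 1: write (P^T - I) pi = 0 and
# replace the last (redundant) equation with the normalization condition.
A = P.T - np.eye(3)
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)                         # ~ [0.20, 0.28, 0.52]
```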
Transient Behavior
• We want to know the probability of finding the process in state Ej at time n
• π(n) = [π0(n), π1(n), π2(n), …]
• From the transition probability matrix P we can calculate:
π(1) = π(0)P, and in general π(n) = π(0)P^n
• The stationary probability is the limit: π = lim n→∞ π(n)
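A sketch of this transient behavior: iterating π(n) = π(n-1)P, an ergodic chain converges to the same stationary vector from any initial distribution π(0). The 2-state matrix is a made-up example.

```python
import numpy as np

# Transient behavior: pi(n) = pi(0) P^n, and for an ergodic chain pi(n)
# approaches the same stationary vector from any pi(0).
# The 2-state matrix is a made-up example.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

for pi0 in ([1.0, 0.0], [0.0, 1.0]):
    pi_n = np.array(pi0)
    for _ in range(100):
        pi_n = pi_n @ P           # pi(n) = pi(n-1) P
    print(pi_n)                   # ~ (5/6, 1/6) from either start
```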
Birth-Death Process
• A Markov Process
• Homogeneous, aperiodic, and irreducible
• Discrete time / Continuous time
• State changes can only happen between neighboring states
Birth-Death Process
• Size of a population
– The system is in state Ek when the population consists of k members
– The population size changes by at most one at a time
– Size increased by one → "Birth"
– Size decreased by one → "Death"
• Transition probabilities pij do not change with time
• αi = probability of a death (population size decreases by one)
• α0 = 0 (no population → no death)
• λi = probability of a birth (population size increases by one)
• λi > 0 (birth is allowed)
• Pure Birth = no decrements, only increments
• Pure Death = no increments, only decrements
Queueing Theory Model
• Population = customers in the queueing system
• Birth = a customer arrival to the system
• Death = a customer departure from the system
Transition matrix
P =
[ 1-λ0      λ0        0        0     0   … ]
[  α1    1-λ1-α1     λ1        0     0   … ]
[  0        α2     1-λ2-α2    λ2     0   … ]
[  ⋮                                       ]
[  0    …   αi    1-λi-αi     λi     0   … ]
[  ⋮                                       ]

Each row i (i ≥ 1) has αi below the diagonal, 1-λi-αi on the diagonal, and λi above it; all other entries are 0.
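The infinite tridiagonal matrix above can be built for a finite (truncated) population as a sketch; the function name and the truncation to n states are assumptions, not from the lecture.

```python
import numpy as np

# Sketch: tridiagonal one-step transition matrix of a finite (truncated)
# birth-death chain. lam[i] = birth probability in state i, alpha[i] =
# death probability (alpha[0] = 0). The function name and the truncation
# are assumptions, not from the lecture.
def birth_death_P(lam, alpha):
    n = len(lam)
    P = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            P[i, i - 1] = alpha[i]        # death: k -> k-1
        if i < n - 1:
            P[i, i + 1] = lam[i]          # birth: k -> k+1
        P[i, i] = 1.0 - P[i].sum()        # stay: 1 - lam_i - alpha_i
    return P

P = birth_death_P(lam=[0.3, 0.3, 0.0], alpha=[0.0, 0.2, 0.2])
print(P)
```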