
3.3 CONTINUOUS-TIME MARKOV CHAINS

Having discussed the discrete-time Markov chain, we are now ready to look at its continuous-time counterpart. Conceptually, there is no difference between these two classes of Markov chain; the past history of the chain is still being summarized in its present state and its future development can be inferred from there. In fact, we can think of the continuous-time Markov chain as being the limiting case of the discrete type.

However, there is a difference in mathematical formulation. In a continuous-time Markov chain, since transitions can take place at any instant on a time continuum, it is now necessary to specify how long a process has stayed in a particular state before a transition takes place.

In some literature, the term ‘Markov process’ is also used to refer to a continuous-time Markov chain. We use that term occasionally in subsequent chapters.

3.3.1 Definition of Continuous-time Markov Chains

The definition of a continuous-time Markov chain conceptually parallels that of a discrete-time Markov chain. It is a stochastic process {X(t)} in which the future probabilistic development of the process depends only on its present state and not on its past history. Mathematically, the process should satisfy the following conditional probability relationship for t_1 < t_2 < \ldots < t_k < t_{k+1}:

P[X(t_{k+1}) = j \mid X(t_1) = i_1, X(t_2) = i_2, \ldots, X(t_k) = i_k] = P[X(t_{k+1}) = j \mid X(t_k) = i_k]    (3.23)

From the earlier discussion, we know that in the treatment of continuous-time Markov chains we need to specify a transition scheme by which the system goes from one state to another. The transition probability alone is not sufficient to characterize the process completely. Instead of dwelling on its formal theoretical basics, which can be quite involved, we will take the following definition as the point of departure for our subsequent discussions and develop the necessary probability distributions. Again, we will focus our attention on the homogeneous case.

Definition 3.1

For a continuous-time Markov chain which is currently in state i, the probability that the chain will leave state i and go to state j (j \neq i) in the next infinitesimal amount of time \Delta t, no matter how long the chain has been in state i, is

p_{ij}(t, t + \Delta t) = q_{ij}\, \Delta t

where q_{ij} is the instantaneous transition rate of leaving state i for state j. In general, q_{ij} is a function of t. To simplify the discussion, we assume that q_{ij} is independent of time; in other words, we are dealing with a homogeneous Markov chain. The total instantaneous transition rate at which the chain leaves state i is, therefore, \sum_{j \neq i} q_{ij}.
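As a simple illustration (our example, not the book's), consider a two-state chain with q_{01} = \lambda and q_{10} = \mu. Definition 3.1 says that a chain currently in state 0 moves to state 1 during the next \Delta t with probability \lambda \Delta t; since state 1 is the only possible destination, the total instantaneous rate of leaving state 0 is simply \lambda.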

3.3.2 Sojourn Times of a Continuous-time Markov Chain

Analogous to the discrete-time Markov chain, the sojourn time of a continuous-time Markov chain is the time the chain spends in a particular state. From the earlier discussion of Markov chains, we know that the future probabilistic development of a chain is related to its past history only through its current position. Thus, the sojourn times of a Markov chain must be ‘memoryless’ and are exponentially distributed, as the exponential distribution is the only continuous probability distribution that exhibits such a memoryless property.
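For reference (a standard identity, not spelled out in the text), memorylessness means P[\tau > s + t \mid \tau > s] = P[\tau > t], and the exponential law P[\tau > t] = e^{-qt} satisfies it because

P[\tau > s + t \mid \tau > s] = \frac{e^{-q(s+t)}}{e^{-qs}} = e^{-qt} = P[\tau > t].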

We shall demonstrate that the sojourn times of a continuous-time Markov chain are indeed exponentially distributed using the above definition. We assume the chain has just entered state i and will remain in that state during an interval [0, t].

Let \tau_i be the random variable that denotes the time spent in state i. If we divide the time t into k equal intervals of \Delta t, such that k \Delta t = t, then for the sojourn time \tau_i to be greater than t, there should not be any transition in any of these \Delta t intervals. From the above definition, we know that the probability of not having a transition in a time interval \Delta t is

1 - \sum_{j \neq i} q_{ij}\, \Delta t

Therefore, the probability that the sojourn time is greater than t is

P[\tau_i > t] = \lim_{k \to \infty} \Big( 1 - \sum_{j \neq i} q_{ij}\, \Delta t \Big)^{k} = \lim_{k \to \infty} \Big( 1 - \sum_{j \neq i} q_{ij}\, \frac{t}{k} \Big)^{k} = e^{-q_i t}    (3.24)

where q_i = \sum_{j \neq i} q_{ij}. Hence, the sojourn time between transitions is given by

P[\tau_i \le t] = 1 - P[\tau_i > t] = 1 - e^{-q_i t}    (3.25)

which is an exponential distribution.
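To make the limiting argument in Equation (3.24) tangible, the following Python fragment is a minimal sketch (our illustration; the rate value and interval width are arbitrary assumptions) that follows Definition 3.1 literally: in each small interval \Delta t the chain leaves the state with probability q \Delta t, and the resulting sojourn times should have the mean 1/q_i predicted by the exponential distribution.

    import random

    q0 = 2.0           # assumed total rate of leaving state 0 (illustrative)
    dt = 1e-3          # width of each small interval Delta-t
    n_samples = 10_000

    def sojourn_in_state_0() -> float:
        # Per Definition 3.1: in each interval dt, leave with probability q0*dt.
        t = 0.0
        while random.random() >= q0 * dt:   # no transition in this interval
            t += dt
        return t

    mean = sum(sojourn_in_state_0() for _ in range(n_samples)) / n_samples
    print(f"sample mean {mean:.3f} vs exponential prediction {1/q0:.3f}")

As dt shrinks, the geometric count of intervals survived converges to the continuous exponential law, which is exactly the limit taken in Equation (3.24).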

3.3.3 State Probability Distribution

We will now turn our attention to the probability of finding the chain in a particular state – the state probability. As usual, we define \pi_j(t) = P[X(t) = j] and consider the probability change in an infinitesimal amount of time \Delta t:

\pi_j(t + \Delta t) = \sum_{i \neq j} \pi_i(t)\, q_{ij}\, \Delta t + \pi_j(t) \Big( 1 - \sum_{k \neq j} q_{jk}\, \Delta t \Big)    (3.26)

The first term on the right is the probability that the chain is in state i at time t and makes a transition to state j in \Delta t. The second term is the probability that the chain is in state j and does not make a transition to any other state in \Delta t.

Rearranging terms, dividing Equation (3.26) by \Delta t and taking limits, we have

\frac{d}{dt} \pi_j(t) = \sum_{i \neq j} \pi_i(t)\, q_{ij} - \pi_j(t) \sum_{k \neq j} q_{jk}    (3.27)


If we define q_{jj} = -\sum_{k \neq j} q_{jk}, then the above expression can be rewritten as

\frac{d}{dt} \pi_j(t) = \sum_{i} \pi_i(t)\, q_{ij}    (3.28)

Let us define the following three matrices:

\boldsymbol{\pi}(t) = (\pi_1(t), \pi_2(t), \ldots)    (3.29)

\frac{d}{dt} \boldsymbol{\pi}(t) = \Big( \frac{d}{dt} \pi_1(t), \frac{d}{dt} \pi_2(t), \ldots \Big)    (3.30)

Q = (q_{ij}) =
\begin{pmatrix}
-\sum_{j \neq 1} q_{1j} & q_{12} & q_{13} & \cdots \\
q_{21} & -\sum_{j \neq 2} q_{2j} & q_{23} & \cdots \\
\vdots & & \ddots & \\
q_{n1} & \cdots & & -\sum_{j \neq n} q_{nj}
\end{pmatrix}    (3.31)

We can then rewrite Equation (3.27) or (3.28) in matrix form as

\frac{d}{dt} \boldsymbol{\pi}(t) = \boldsymbol{\pi}(t)\, Q    (3.32)

The matrix Q is known as the infinitesimal generator or transition-rate matrix, as its elements are the instantaneous rates of leaving a state for another state. Recall from the section on ‘Eigenvalues, Eigenvectors and Spectral Representation’ in Chapter 1 that one of the general solutions for the above matrix equation is given by

\boldsymbol{\pi}(t) = \boldsymbol{\pi}(0)\, e^{Qt}, \qquad e^{Qt} = I + \sum_{k=1}^{\infty} \frac{(Qt)^k}{k!}
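Numerically, this transient solution can be evaluated with an off-the-shelf matrix exponential. The sketch below uses scipy.linalg.expm on a hypothetical 3-state generator (the values are our assumption, not from the text):

    import numpy as np
    from scipy.linalg import expm

    # Hypothetical 3-state generator; each row sums to zero, as in Eq. (3.31).
    Q = np.array([[-3.0,  2.0,  1.0],
                  [ 1.0, -4.0,  3.0],
                  [ 1.0,  2.0, -3.0]])
    pi0 = np.array([1.0, 0.0, 0.0])    # chain starts in state 0

    for t in (0.1, 1.0, 10.0):
        pi_t = pi0 @ expm(Q * t)       # pi(t) = pi(0) e^{Qt}, Eq. (3.32)
        print(f"t = {t:5.1f}  pi(t) = {np.round(pi_t, 4)}")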

Similar to the discrete-time Markov chain, the limiting value of the state probability \boldsymbol{\pi} = \lim_{t \to \infty} \boldsymbol{\pi}(t) exists for an irreducible homogeneous continuous-time Markov chain and is independent of the initial state of the chain. This implies that \frac{d}{dt} \boldsymbol{\pi}(t) = 0 for these limiting values. Setting the differential of the state probabilities to zero and taking limits, we have

0 = \boldsymbol{\pi} Q    (3.33)

where \boldsymbol{\pi} = (\pi_1, \pi_2, \ldots) and Q is the transition-rate matrix defined in Equation (3.31).

We can see the distinct similarity in structure if we compare this equation with that for a discrete-time Markov chain, i.e. \boldsymbol{\pi} = \boldsymbol{\pi} P. These two equations uniquely describe the ‘motion’ of a Markov chain and are called stationary equations. Their solutions are known as the stationary distributions.
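In practice, Equation (3.33) is solved together with the normalisation \sum_j \pi_j = 1, since \boldsymbol{\pi} Q = 0 alone is rank-deficient. A minimal sketch, reusing the hypothetical generator from the previous example:

    import numpy as np

    Q = np.array([[-3.0,  2.0,  1.0],
                  [ 1.0, -4.0,  3.0],
                  [ 1.0,  2.0, -3.0]])

    A = Q.T.copy()                # rows of A encode the equations pi @ Q = 0
    A[-1, :] = 1.0                # replace one redundant equation by sum(pi) = 1
    b = np.zeros(len(Q))
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)    # stationary distribution, Eq. (3.33)
    print(np.round(pi, 4))        # (0.25, 0.3333, 0.4167) for this generator

Replacing one equation is legitimate because the stationary equations are linearly dependent: summing them over j gives \sum_i \pi_i \sum_j q_{ij} = 0 identically, since each row of Q sums to zero.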

3.3.4 Comparison of Transition-rate and Transition-probability Matrices

Comparing the transition-rate matrix (3.31) defined in Section 3.3.3 with the transition probability matrix (3.7) of Section 3.2.2, we see some similarities as well as distinctions between them.

First of all, each of these two matrices completely characterizes a Markov chain: Q for a continuous-time Markov chain and P for the discrete counterpart.

All the transient and steady-state probabilities of the corresponding chain can in principle be calculated from them in conjunction with the initial probability vector.

The main distinction between them lies in the fact that the entries of Q are transition rates whereas those of P are probabilities. To obtain probabilities from Q, each entry needs to be multiplied by \Delta t, i.e. q_{ij}\, \Delta t.

Secondly, each row of the matrix Q sums to zero instead of one, as in the case of the matrix P. The instantaneous transition rate of going back to the same state is not defined in the continuous case; it is taken to be q_{jj} = -\sum_{k \neq j} q_{jk} just to place Q in a form similar to the discrete case. In general, we do not show self-loops on a state transition diagram, as they are simply the negative sum of all rates leaving those states. On the contrary, self-loops on a state transition diagram for a discrete-time Markov chain indicate the probabilities of staying in those states and are usually shown on the diagram. Note that P(t) and Q are related by the following expression:

\frac{d}{dt} P(t) = P(t)\, Q = Q\, P(t)
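As a quick numerical sanity check (same assumed generator as before), a finite difference of e^{Qt} reproduces P(t)Q:

    import numpy as np
    from scipy.linalg import expm

    Q = np.array([[-3.0,  2.0,  1.0],
                  [ 1.0, -4.0,  3.0],
                  [ 1.0,  2.0, -3.0]])
    t, h = 0.5, 1e-6
    lhs = (expm(Q * (t + h)) - expm(Q * t)) / h   # d/dt P(t), approximated
    rhs = expm(Q * t) @ Q                         # P(t) Q
    print(np.abs(lhs - rhs).max())                # discrepancy of order h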

It is not surprising to see the similarity in form between the above expression and that governing the state probabilities, because it can be shown that the following limits always exist and are independent of the initial state of the chain for an irreducible homogeneous Markov chain:

\lim_{t \to \infty} p_{ij}(t) = \pi_j \quad \text{and} \quad \lim_{t \to \infty} \pi_j(t) = \pi_j
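These limits are easy to observe numerically. In the sketch below (same assumed generator as in the earlier examples), every row of e^{Qt}, i.e. the probabilities p_{ij}(t) for each starting state i, approaches the same stationary vector \boldsymbol{\pi} as t grows:

    import numpy as np
    from scipy.linalg import expm

    Q = np.array([[-3.0,  2.0,  1.0],
                  [ 1.0, -4.0,  3.0],
                  [ 1.0,  2.0, -3.0]])

    P_t = expm(Q * 50.0)     # transition probabilities p_ij(t) for large t
    print(np.round(P_t, 6))  # every row matches pi = (0.25, 0.3333, 0.4167)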
