www.elsevier.com/locate/orms
Classification of Markov processes of M/G/1 type with a tree
structure and its applications to queueing models
Qi-Ming He
∗Department of Industrial Engineering, DalTech, Dalhousie University, P.O. Box 1000, Halifax, Nova Scotia, Canada, B3J 2X4
Received 1 July 1998; received in revised form 1 July 1999
Abstract
This paper studies the classification problem of Markov processes of M/G/1 type with a tree structure. It is shown that the classification of positive recurrence, null recurrence, and transience of the Markov processes of interest is determined completely by the Perron–Frobenius eigenvalue of a nonnegative matrix. The results are used to find classification criteria for a number of discrete time or continuous time queueing systems with multiple types of customers. © 2000 Elsevier Science B.V. All rights reserved.
Keywords: Markov process; Queueing theory; Tree structure; Positive recurrence; Null recurrence; Transience; Lyapunov function; Mean drift method
1. Introduction
For many queueing systems with multiple types of customers, their associated queueing processes can be formulated into Markov processes of matrix M/G/1 type or matrix GI/M/1 type with a tree structure. Examples of this sort can be found in [6,16,17]. In [16,17], the M/G/1 paradigm and the GI/M/1 paradigm of Markov processes with a tree structure were studied, respectively. Analytic results were obtained for stationary distributions of states and fundamental periods. But the classification of these Markov processes is still unsolved, i.e., conditions for these Markov processes to be positive recurrent, null recurrent, or transient are yet to be found. As was pointed out in [16,17], the classification problem of these Markov processes is interesting and difficult.
∗ Fax: +1-902-420-7858.
E-mail address: [email protected] (Q.-M. He)
This paper studies the classification problem of the Markov process of M/G/1 type with a tree structure, which is a special case of the Markov process of matrix M/G/1 type with a tree structure introduced in [16]. A simple criterion is found for a complete classification of the Markov process of M/G/1 type with a tree structure. That is: the classification of the Markov process of interest is determined completely by the Perron–Frobenius eigenvalue of a nonnegative matrix. The result is obtained by using the mean-drift method or Foster's criterion [1,12] and some results from matrix analytic methods [13–16]. Using the results obtained, the classification problem of a number of discrete time or continuous time queueing systems with batch arrivals and multiple types of customers is solved. Generalizations to the Markov process of matrix M/G/1 type with a tree structure are reported in [5].
The work done by Gajrat et al. [2] is closely related to this paper. Gajrat et al. [2] studied a Markov process of random strings which is a hybrid model of the M/G/1 type and the GI/M/1 type Markov processes with a tree structure. They obtained necessary and sufficient conditions for the Markov process to be positive recurrent, null recurrent, or transient, in terms of positive solutions of a finite system of polynomial equations. For the birth-and-death case, they obtained the same simple solution as we obtain in this paper. Malyshev [10,11] studied random strings as Markov processes with a network structure. General classification criteria were obtained and applications to queueing networks were discussed. Differing from the existing work, this paper exploits the special tree structure associated with the Markov process of M/G/1 type and obtains some explicit results.
Results obtained in this paper are used to find conditions to classify some queueing systems with batch arrivals and multiple types of customers in Section 5. Why is this an interesting problem? Is it possible to solve the classification problem of these queueing systems by considering queueing systems with one type of customer? That is: to classify these queueing systems without distinguishing customers of different types. A note after Example 5.2 shows that, unless appropriate parameters are chosen, such an approach may fail. It also shows how such an appropriate set of parameters can be chosen.
The rest of the paper is organized as follows. In Section 2, a discrete time Markov chain of M/G/1 type with a tree structure is defined. In Section 3, a simple criterion for the classification of the Markov chain of interest is obtained. Section 4 develops simpler criteria for some special cases. In Section 5, criteria for the classification of several queueing systems are developed using the results obtained in Section 3. Finally, in Section 6, the continuous time Markov process of M/G/1 type with a tree structure is studied.
2. Markov chain of M/G/1 type with a tree structure
In this section, a discrete time Markov chain of M/G/1 type with a tree structure is defined. This Markov chain is a special case of the Markov chain of matrix M/G/1 type with a tree structure introduced in [16]. Continuous time Markov processes of M/G/1 type with a tree structure will be defined and discussed briefly in Section 6.
First, we define a K-ary tree. A K-ary tree is a tree in which each node has a parent and K children, except the root node of the tree. The root node of the tree is denoted as 0. Strings of integers between 1 and K are used to represent nodes of the tree. For instance, the kth child of the root node has a representation of k. The jth child of node k has a representation of kj. Let
ℵ = {J: J = k1k2···kn, 1 ≤ ki ≤ K, 1 ≤ i ≤ n, n > 0} ∪ {0}.
Any string J ∈ ℵ is a node in the K-ary tree. The length of a string J is defined as the number of integers in the string and is denoted by |J|. When J = 0, |J| = 0. The following two operations related to strings in ℵ are used in this paper.
Addition operation: for J = k1···kn ∈ ℵ and H = j1···jt ∈ ℵ, J + H = k1···kn j1···jt ∈ ℵ.
Subtraction operation: for J = k1···kn ∈ ℵ and H = kt···kn ∈ ℵ, t > 0, J − H = k1···k(t−1) ∈ ℵ.
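The two string operations above can be sketched in Python, representing nodes as tuples of integers (an illustration, not part of the paper):

```python
# Nodes of the K-ary tree as tuples of integers; the root node 0 is the empty tuple.
def add(J, H):
    """Addition operation: concatenate strings, J + H = k1...kn j1...jt."""
    return J + H

def subtract(J, H):
    """Subtraction operation: remove the suffix H = kt...kn from J = k1...kn."""
    assert len(H) > 0 and J[len(J) - len(H):] == H, "H must be a nonempty suffix of J"
    return J[:len(J) - len(H)]

root = ()                       # root node 0
J = (1, 3, 2)                   # node "132", with |J| = 3
print(add(J, (2, 1)))           # (1, 3, 2, 2, 1)
print(subtract(J, (3, 2)))      # (1,)
```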
Without loss of generality, no boundary node linked to the root node is considered. The results obtained in this paper can be generalized to cases where boundary nodes do exist.
Consider a discrete time Markov chain {Xn, n ≥ 0} for which Xn takes values in ℵ. Xn is referred to as the node of the Markov chain at time n. To be called a (homogeneous) Markov chain of M/G/1 type with a tree structure, Xn transits at each step to either the parent node of the current node or a descendant of the parent node. All possible transitions and their corresponding probabilities are given as follows. Suppose that J = k1···k|J| and Xn = J + k. Then the one-step transition probabilities are given as:
1. Xn+1 = J + k + H with probability a(J + k, J + k + H) = a0(k, H) when H ∈ ℵ and H > 0;
2. Xn+1 = J + k with probability a(J + k, J + k) = a1(k);
3. Xn+1 = J + H with probability a(J + k, J + H) = a1(k, H) when H ∈ ℵ and H > 0;
4. Xn+1 = J with probability a(J + k, J) = a2(k);
5. Xn+1 = H with probability a(0, H) = a0(0, H) when Xn = 0, H ∈ ℵ and H > 0; and
6. Xn+1 = 0 with probability a(0, 0) = a1(0) when Xn = 0.
Note that transition probabilities depend only on the last integer in the string of the current node. Also note the difference between a0(k, H), a1(k, H), a1(k), and a2(k). Transition probability a1(k, H) is introduced since, in discrete time queueing systems (Examples 5.1–5.3), arrivals and service completions can occur simultaneously. We set a1(k, H) = 0 when H = j1j2···jn ∈ ℵ and j1 = k because the transition from J + k to J + H is then described by a0(k, j2···jn).
Transition probabilities satisfy the law of total probability:
Σ_{J∈ℵ, J>0} [a0(k, J) + a1(k, J)] + a1(k) + a2(k) = 1, 1 ≤ k ≤ K,
Σ_{J∈ℵ, J>0} a0(0, J) + a1(0) = 1.   (2.1)
Let N(J, k) be the number of occurrences of integer k in string J, 1 ≤ k ≤ K. It is easy to see that |J| = N(J, 1) + ··· + N(J, K). Define
λ(k, j) = Σ_{J∈ℵ, J>0} [a0(k, J) + a1(k, J)] N(J, j), 0 ≤ k ≤ K, 1 ≤ j ≤ K,   (2.2)
μ(k) = a2(k) + Σ_{J∈ℵ, J>0} a1(k, J), 1 ≤ k ≤ K.
Let Λ be a K × K matrix with the (k, j)th element λ(k, j), 1 ≤ k, j ≤ K, and M a K × K matrix with the (k, k)th diagonal element μ(k) and all other elements zero. We assume that μ(k) > 0, 1 ≤ k ≤ K. Let P = ΛM^{-1} and sp(P) be the Perron–Frobenius eigenvalue of the nonnegative matrix P (i.e., the eigenvalue with the largest modulus). We call sp(P) the transition intensity.
Matrix Λ represents the rates at which the Markov chain moves from the current node to a descendant of its parent. Matrix M represents the rates at which the Markov chain moves from the current node to its parent. Suppose that an integer k in string J ∈ ℵ represents a type k customer. Then each element of matrix Λ is the average number of customers who arrive in a unit period of time, and each element of matrix M is the average number of customers served in a unit period of time. Matrix P is the ratio of arrival rates to service rates. Intuitively, the classification of Markov chain {Xn, n ≥ 0} is determined by matrix P in a way similar to the classification of random walks on nonnegative integers [8]. The objective of this paper is to show that this intuition is true and the classification of Markov chain {Xn, n ≥ 0} is determined completely by the Perron–Frobenius eigenvalue sp(P) of matrix P.
Throughout this paper, it is assumed that Markov chain {Xn, n ≥ 0} is irreducible, {λ(k, j), 0 ≤ k ≤ K, 1 ≤ j ≤ K} are finite, and matrix P is irreducible. Note that when matrix P is reducible, the set {1, 2, ..., K} can be decomposed into disconnected subsets, each with an irreducible subset. Results obtained in this paper apply to these irreducible subsets. These assumptions are not restrictive since they are satisfied in many applications.
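For concreteness, the construction of Λ, M, P and the transition intensity sp(P) can be sketched numerically (the rates below are hypothetical, not taken from the paper):

```python
import numpy as np

# Hypothetical arrival-rate matrix Lambda (elements lambda(k, j) from Eq. (2.2))
# and service rates mu(k) for K = 2.
lam = np.array([[0.2, 0.3],
                [0.1, 0.4]])
mu = np.array([0.8, 0.6])            # assumed mu(k) > 0

M = np.diag(mu)
P = lam @ np.linalg.inv(M)           # P = Lambda M^{-1}, a nonnegative matrix

# Perron-Frobenius eigenvalue: the eigenvalue of largest modulus.
sp_P = max(abs(np.linalg.eigvals(P)))
print(sp_P)                          # here sp(P) < 1, which will mean positive recurrence
```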
3. The main theorem
In this section, a complete classification of the Markov chain {Xn, n ≥ 0} defined in Section 2 is obtained by using the mean-drift method or Foster's criterion and matrix analytic methods.
Let R+ be the set of nonnegative real numbers and R^K_+ = R+ × ··· × R+. For vector x = (x1, ..., xK), its norm is defined as ||x|| = |x1| + ··· + |xK|. Note that (x)_i represents the ith element of vector x. Let
θ ≡ min_{b∈R^K_+, ||b||=1} max_{1≤k≤K} {((Λ − M)b)_k}.   (3.1)
Note. The set of vectors b in Eq. (3.1) can be chosen differently. For instance, the set {b: b ∈ R^K_+ and c1 ≤ ||b|| ≤ c2} with 0 < c1 ≤ c2 < ∞ can be used for the same purpose. Also, the norm of vector x can be defined differently. For instance, ||x|| = (x1² + ··· + xK²)^{0.5} can be used.
Lemma 3.1. For the Markov chain {Xn, n ≥ 0} defined in Section 2, the relationship between θ and sp(P) is:
(a) sp(P) < 1 if and only if θ < 0; (b) sp(P) = 1 if and only if θ = 0; (c) sp(P) > 1 if and only if θ > 0.
Proof. Denote by u a right eigenvector of P associated with the Perron–Frobenius eigenvalue sp(P), i.e., Pu = sp(P)u and u ≠ 0. Since P is nonnegative and irreducible, every element of vector u is positive [3]. By the definition of matrix P, it follows that Λu = sp(P)Mu. This leads to (Λ − M)u = (sp(P) − 1)Mu. Note that every element of vector Mu is positive.
Suppose that sp(P) < 1. Then (sp(P) − 1)Mu < 0. Then
θ ≤ max_{1≤k≤K} {((Λ − M)u)_k} = max_{1≤k≤K} {((sp(P) − 1)Mu)_k} < 0.
Suppose that θ < 0. Then there exists a vector b* = (b*1, ..., b*K)^T with b*k ≥ 0 and ||b*|| = 1 such that (Λ − M)b* ≤ θe, where "T" represents the transpose of a matrix and e is the vector with all elements one, which implies Λb* < Mb* and Pb* < b*. Let y be the positive left eigenvector corresponding to sp(P). Then we have yPb* = sp(P)yb* < yb*. Since y is positive and b* is nonnegative and nonzero, we must have sp(P) < 1. This proves Part (a).
Next, we prove Part (c). Suppose that sp(P) > 1. If θ ≤ 0, there exists a nonzero nonnegative vector b such that (Λ − M)b ≤ 0, which implies that Pb ≤ b. Let y be the positive left eigenvector corresponding to sp(P). Then yPb ≤ yb implies that sp(P) ≤ 1, which is a contradiction. Therefore, θ > 0. Suppose that θ > 0 but sp(P) ≤ 1. Then (Λ − M)u ≤ 0, which implies that θ ≤ 0. This is a contradiction. Therefore, sp(P) > 1. This proves (c).
Part (b) is obtained from (a) and (c). This completes the proof.
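For small K, Lemma 3.1 is easy to check numerically: θ in Eq. (3.1) can be evaluated by a dense search over the simplex {b ≥ 0, ||b|| = 1}, which for K = 2 is a line segment. A sketch with hypothetical rates:

```python
import numpy as np

def sp(P):
    """Perron-Frobenius eigenvalue: the eigenvalue of largest modulus."""
    return max(abs(np.linalg.eigvals(P)))

def theta_2(lam, mu, grid=20001):
    """Evaluate Eq. (3.1) for K = 2 by scanning b = (t, 1 - t), t in [0, 1]."""
    A = lam - np.diag(mu)                  # Lambda - M
    t = np.linspace(0.0, 1.0, grid)
    B = np.vstack([t, 1.0 - t])            # each column is a vector b with ||b|| = 1
    return (A @ B).max(axis=0).min()       # min over b of max_k ((Lambda - M) b)_k

# Hypothetical rates
lam = np.array([[0.3, 0.2],
                [0.2, 0.3]])
mu = np.array([0.6, 0.6])
P = lam @ np.linalg.inv(np.diag(mu))
# Lemma 3.1(a): sp(P) < 1 iff theta < 0; both hold for these rates.
print(sp(P), theta_2(lam, mu))
```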
Now, we are ready to state and prove the main theorem of this paper.
Theorem 3.2. For the Markov chain of M/G/1 type with a tree structure {Xn, n ≥ 0} defined in Section 2, it is:
(1) positive recurrent if and only if sp(P) < 1, or equivalently θ < 0;
(2) null recurrent if and only if sp(P) = 1, or equivalently θ = 0;
(3) transient if and only if sp(P) > 1, or equivalently θ > 0.
Proof. The following proof consists of two parts. First, we prove that θ < 0 is a necessary and sufficient condition for positive recurrence of the Markov chain of interest, i.e., Part (1). Then we prove Part (3). As a result of Parts (1) and (3), Part (2) is obtained.
We begin with Part (1). To prove that θ < 0 is sufficient for positive recurrence of the Markov chain of interest, the mean-drift method is applied. The idea of the mean-drift method is to find a Lyapunov function (or test function) f(J) defined on ℵ such that f(J) → ∞ when |J| → ∞ and
E[f(Xn+1) − f(Xn) | Xn = J] = Σ_{H∈ℵ} a(J, H)f(H) − f(J) < −ε   (3.2)
holds for all but a finite number of states in ℵ for some positive ε. If so, the Markov chain is positive recurrent [12]. Suppose that θ < 0. Then there exists a vector b* = (b*1, ..., b*K)^T with b*k ≥ 0 and ||b*|| = 1 such that (Λ − M)b* ≤ θe. When the real number δ is small enough, we have (Λ − M)(b* + δe) ≤ θe + δ(Λ − M)e < θe/2. Choose b = (b1, ..., bK) = b* + δe with δ small and positive. Then every element of vector b is positive and every element of vector (Λ − M)b is less than θ/2. Choose ε = −θ/2. Then (Λ − M)b < −εe holds. Based on the tree structure of the Markov chain of interest, the following Lyapunov function is introduced. For J ∈ ℵ, define
f(J) = Σ_{k=1}^{K} N(J, k) b_k.   (3.3)
Since every element of vector b is positive, f(J) → ∞ when |J| → ∞. It is clear that function f(J) is additive, i.e., f(J + H) = f(J) + f(H) for J, H ∈ ℵ. Applying the above Lyapunov function, the left-hand side of inequality (3.2) becomes, for J = k1···kn and k > 0,
Σ_{H∈ℵ, H>0} [a0(k, H)f(J + k + H) + a1(k, H)f(J + H)] + a1(k)f(J + k) + a2(k)f(J) − f(J + k)
= Σ_{H∈ℵ, H>0} {a0(k, H)[f(J + k + H) − f(J + k)] + a1(k, H)[f(J + H) − f(J + k)]} + a2(k)[f(J) − f(J + k)]
= Σ_{H∈ℵ, H>0} {a0(k, H)f(H) + a1(k, H)[f(H) − f(k)]} − a2(k)f(k)
= Σ_{H∈ℵ, H>0} [a0(k, H)f(H) + a1(k, H)f(H)] − [a2(k)f(k) + Σ_{H∈ℵ, H>0} a1(k, H)f(k)]
= Σ_{H∈ℵ, H>0} [a0(k, H) + a1(k, H)] Σ_{j=1}^{K} N(H, j)b_j − μ(k)b_k
= Σ_{j=1}^{K} λ(k, j)b_j − μ(k)b_k = ((Λ − M)b)_k < −ε.   (3.4)
In the above evaluation, the equalities in Eqs. (2.1) and (2.2) are used. Thus, inequality (3.2) holds for all J ∈ ℵ but J = 0. By assumptions made in Section 2, {λ(0, j), 1 ≤ j ≤ K} are finite, which implies that the left-hand side of inequality (3.2) is finite when J = 0. Therefore, the Markov chain is positive recurrent.
To prove the necessity of θ < 0 for positive recurrence, denote by v(J) the mean first passage time from node J to root node 0 (v(0) = 0). According to Foster's criterion, when the Markov chain is positive recurrent, {v(J), J ∈ ℵ} are finite and satisfy
v(J) = 1 + Σ_{H∈ℵ} a(J, H)v(H).   (3.5)
The above equality leads to 0 = 1 + Σ_H a(J, H)[v(H) − v(J)] by using the law of total probability. Because of the tree structure and the transition pattern of the Markov chain, function v(J) is an additive function, i.e., v(J + H) = v(J) + v(H), which implies that v(J) = Σ_{k=1}^{K} N(J, k)v(k). This leads to, for J = k1···kn and k > 0,
−1 = Σ_{H∈ℵ, H>0} {a0(k, H)[v(J + k + H) − v(J + k)] + a1(k, H)[v(J + H) − v(J + k)]} + a2(k)[v(J) − v(J + k)]
= Σ_{H∈ℵ, H>0} {a0(k, H)v(H) + a1(k, H)[v(H) − v(k)]} − a2(k)v(k)
= Σ_{j=1}^{K} λ(k, j)v(j) − μ(k)v(k).   (3.6)
Let C = (v(1), ..., v(K))^T. Eq. (3.6) leads to (Λ − M)C = −e. Since v(k) ≥ 1, 1 ≤ k ≤ K, i.e., C ≠ 0, we obtain θ < 0. This completes the proof of Part (1). We now prove Part (3), i.e., the Markov chain is transient if and only if θ > 0. The idea is to look at the probability that the Markov chain will eventually reach the root node from any other node. When the Markov chain is transient, this probability should be less than one. Define sequences {G(J)[n], n ≥ 0} for all J ∈ ℵ as follows. Let G(0)[n] = 1 for n ≥ 0. Let G(J)[0] = 0 for J ∈ ℵ and J > 0, and
G(J)[n + 1] = Σ_{H∈ℵ} a(J, H)G(H)[n].   (3.7)
It is easy to see that {G(J)[n], n ≥ 0} is a uniformly bounded and nondecreasing sequence for each J ∈ ℵ. In addition, G(J)[n] is the probability that the Markov chain reaches the root node within n transitions, given that the Markov chain is in node J initially. Note that the root node has become an absorbing node. Denote by G(J) the limit of {G(J)[n], n ≥ 0} for J ∈ ℵ. It has been proven in [16] that G(J) is the conditional probability that the Markov chain will eventually reach the root node, given that the Markov chain is initially in node J, for each J ∈ ℵ.
When the Markov chain is transient, since matrix P is irreducible, G(J) < 1 for all J ∈ ℵ and J > 0. Denote g(k) = 1 − G(k) (> 0), 1 ≤ k ≤ K, and g(0) = 0. Because of the special tree structure, it is easy to see that G(J) = G(j|J|)···G(j1) for J = j1···j|J| ∈ ℵ, which leads to G(J) = (1 − g(j1))···(1 − g(j|J|)) ≥ 1 − [g(j1) + ··· + g(j|J|)].
Using this inequality, we obtain
1 − g(k) = G(k) = Σ_{J∈ℵ} a(k, J)G(J) ≥ Σ_{J∈ℵ} a(k, J){1 − [g(j1) + ··· + g(j|J|)]}
⇒ 0 ≤ Σ_{j=1}^{K} λ(k, j)g(j) − μ(k)g(k) = ((Λ − M)g)_k,   (3.8)
where g = (g(1), ..., g(K))^T. If (Λ − M)g = 0, then it can be shown that the Markov chain is recurrent by using the mean-drift method and the Lyapunov function defined in Eq. (3.3) with the positive vector g. However, the Markov chain is transient. Therefore, we must have (Λ − M)g ≥ 0 and (Λ − M)g ≠ 0, which implies sp(P) > 1 and θ > 0.
Now suppose that θ > 0. We shall show that at least one sequence {G(J)[n], n ≥ 0}, for J ∈ ℵ, does not converge to one. First, let p(k)[n] be the probability that the Markov chain reaches node J for the first time at the nth transition, given that the Markov chain is in node J + k initially. Then for J = j1···j|J|,
G(J)[n] = Σ_{t=1}^{n−1} p(j|J|)[t] G(J − j|J|)[n − t]
≤ (Σ_{t=1}^{n−1} p(j|J|)[t]) G(J − j|J|)[n − 1]
≤ G(j|J|)[n] G(J − j|J|)[n] ≤ ··· ≤ G(j|J|)[n]···G(j1)[n].   (3.9)
In the above equation, the fact that {G(J)[n], n ≥ 0} is nondecreasing with respect to n is used. Let g(k)[n] = 1 − G(k)[n], 1 ≤ k ≤ K. Since {G(k)[0] = 0, 1 ≤ k ≤ K}, it can be proved by induction that G(k)[n] < 1, which implies {g(k)[n] > 0, n > 0}, for at least one k (1 ≤ k ≤ K). Then for at least one k, Eq. (3.7) becomes
G(k)[n + 1] = Σ_{J∈ℵ} a(k, J)G(J)[n]
≤ Σ_{J∈ℵ} a(k, J) G(j|J|)[n]···G(j1)[n]
= Σ_{J∈ℵ} a(k, J) (1 − g(j|J|)[n])···(1 − g(j1)[n])
= 1 − Σ_{J∈ℵ} a(k, J) [g(j1)[n] + ··· + g(j|J|)[n]] + O((max_{1≤k≤K} g(k)[n])²)
= 1 − g(k)[n] − [Σ_{j=1}^{K} λ(k, j)g(j)[n] − μ(k)g(k)[n]] + O((max_{1≤k≤K} g(k)[n])²)
< G(k)[n] − 0.5 θ||g[n]|| + O(K ||g[n]||²),   (3.10)
where g[n] = (g(1)[n], ..., g(K)[n])^T ∈ R^K_+ and g[n] ≠ 0. Since θ > 0, there is at least one element of vector (Λ − M)g[n] that is positive. The last inequality in (3.10) holds for at least one k.
Suppose that all sequences {G(J)[n], n ≥ 0}, for J ∈ ℵ, converge to one. Then {g(k)[n], n ≥ 0} converges to zero uniformly for 1 ≤ k ≤ K. When n is large enough, or equivalently when {g(k)[n], 1 ≤ k ≤ K} are small enough, inequality (3.10) implies that, for at least one k, G(k)[n + 1] < G(k)[n], which contradicts the fact that the sequence {G(k)[n]} is nondecreasing. Therefore, at least one G(k) is less than one, i.e., the Markov chain is transient. This completes the proof of Part (3). Part (2) holds since Parts (1) and (3) are true. This completes the proof of the theorem.
Note. When the one-step transition of the Markov chain of interest is constrained to the parent of the current node or children and grandchildren of the parent node (the birth-and-death case), Theorem 3.2 has been proved in [2]. The method used in Theorem 3.2 to prove that sp(P) < 1 is a necessary and sufficient condition for positive recurrence of Markov chain {Xn, n ≥ 0} is similar to that of Theorem 2.1 in [2]. The method used to prove the condition for transience of Markov chain {Xn, n ≥ 0} is different from the one used in [2]. In fact, the method used in [2] to prove the necessary and sufficient condition for transience requires that a(J, H) = 0 if ||H| − |J|| > d for some integer d. Thus, that method cannot be used directly in the M/G/1 case. Besides, the condition a(J, H) = 0 if ||H| − |J|| > d for some integer d is restrictive when we study Markov processes associated with queueing systems of M/G/1 type.
Theorem 3.2 shows that the classification of the Markov chain of interest is completely determined by the transition intensity sp(P) or the value of θ. Therefore, to find whether the Markov chain is positive recurrent, null recurrent, or transient, we only need to find the eigenvalue sp(P) or the solution θ of Eq. (3.1). There are many existing methods to find the Perron–Frobenius eigenvalue of a nonnegative matrix, for instance, Elsner's algorithm (Chapter 1 of Neuts [14]). The computation of θ can be done by transforming Eq. (3.1) into the following linear programming problem:
θ = min x
s.t. (Λ − M)b ≤ xe,
e^T b = 1, b ∈ R^K_+, −∞ < x < ∞.   (3.11)
The solution of the above linear programming problem provides information for a complete classification of the Markov chain defined in Section 2. Similar results have been obtained in the theory of queueing networks. For instance, Kumar and Meyn [9] have proved that the solution of a linear programming problem provides information for the stability of re-entrant queueing networks.
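Linear program (3.11) has only K + 1 variables and can be solved with any LP solver; a sketch using `scipy.optimize.linprog` (hypothetical rates, and scipy is assumed available):

```python
import numpy as np
from scipy.optimize import linprog

def classify(lam, mu, tol=1e-9):
    """Classify the chain by LP (3.11): min x s.t. (Lambda - M)b <= x e, sum(b) = 1, b >= 0."""
    K = len(mu)
    A = lam - np.diag(mu)                         # Lambda - M
    c = np.zeros(K + 1)
    c[-1] = 1.0                                   # objective: minimize x
    A_ub = np.hstack([A, -np.ones((K, 1))])       # (Lambda - M) b - x e <= 0
    b_ub = np.zeros(K)
    A_eq = np.append(np.ones(K), 0.0).reshape(1, -1)   # ||b|| = sum of b_k = 1
    b_eq = [1.0]
    bounds = [(0, None)] * K + [(None, None)]     # b >= 0, x free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    theta = res.x[-1]
    if theta < -tol:
        return "positive recurrent"
    if theta > tol:
        return "transient"
    return "null recurrent"

lam = np.array([[0.1, 0.5], [0.1, 0.1]])          # hypothetical rates
mu = np.array([0.7, 0.5])
print(classify(lam, mu))                          # theta < 0 for these rates
```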
For Markov chains on a tree structure, the simple rule used to determine whether or not a simple random walk on nonnegative integers is positive recurrent is no longer useful. The reason is that there are several dimensions, and the transition rates in one direction may fail to predict positive recurrence of the Markov chain. Surprisingly, Lemma 3.1 and Theorem 3.2 show that the classification of the Markov chain of interest is determined completely by the projections of arrival rates Λ and service rates M on a special direction in R^K_+. This special direction is the eigenvector b corresponding to the Perron–Frobenius eigenvalue of nonnegative matrix P. The Markov chain is positive recurrent if and only if the projection of the arrival rates of every type of customer on direction b, namely (Λb)_k, is smaller than the projection of the service rates of that type of customer, namely (Mb)_k. Similar interpretations go to the null recurrent and transient cases. In addition, Eq. (3.1) shows that when the Markov chain is transient, for any direction in R^K_+, there exists at least one type of customer whose projection of arrival rates on that direction is larger than that of their service rates.
For queueing applications, we introduce the following decomposition technique. We decompose the probabilities a0(k, H) and a1(k) into
a0(k, H) = â0(k, H) + â1(k, k + H), H ∈ ℵ, H > 0,
a1(k) = â1(k) + â1(k, k).   (3.12)
Note that there is no restriction on this decomposition except that all the numbers must be nonnegative. An interpretation of such a decomposition is that the event {transition from J + k to J + k + H} can be decomposed into two disjoint events with probabilities â0(k, H) and â1(k, k + H), respectively. Let â2(k) = a2(k) and â1(k, H) = a1(k, H) when H = j1···j|H| ∈ ℵ and 0 < j1 ≠ k. Transition probabilities {â0(k, H), â1(k, H), â1(k), â2(k), H ∈ ℵ, H > 0} define the same Markov chain, with the understanding that the transition probabilities from J + k to J + k + H (H ∈ ℵ) are given by Eq. (3.12). Define
λ̂(k, j) = Σ_{J∈ℵ, J>0} [â0(k, J) + â1(k, J)] N(J, j), 1 ≤ k ≤ K, 1 ≤ j ≤ K,
μ̂(k) = â2(k) + Σ_{J∈ℵ, J>0} â1(k, J), 1 ≤ k ≤ K.   (3.13)
Let Λ̂ be a K × K matrix with the (k, j)th element λ̂(k, j), 1 ≤ k, j ≤ K, M̂ a K × K matrix with the (k, k)th diagonal element μ̂(k) and all other elements zero, and P̂ = Λ̂M̂^{-1}. Then the following relationships hold:
λ̂(k, j) = λ(k, j) for k ≠ j,
λ̂(k, k) = λ(k, k) + Σ_{J∈ℵ} â1(k, k + J),
μ̂(k) = μ(k) + Σ_{J∈ℵ} â1(k, k + J),
Λ̂ = Λ + Δ, M̂ = M + Δ, where
Δ = diag(Σ_{J∈ℵ} â1(1, 1 + J), ..., Σ_{J∈ℵ} â1(K, K + J)).
Define θ̂ by Eq. (3.1) with matrices M̂ and Λ̂. Since M̂ − Λ̂ = M − Λ, we obtain θ̂ = θ (of M and Λ). It is easy to prove that sp(P) ≤ sp(P̂) < 1 when sp(P) < 1; sp(P) = sp(P̂) = 1 when sp(P) = 1; and sp(P) ≥ sp(P̂) > 1 when sp(P) > 1. Thus, we have proved the following result.
Theorem 3.3. For θ̂ and sp(P̂) associated with matrices M̂ and Λ̂, the conclusions in Lemma 3.1 and Theorem 3.2 hold.
By Theorem 3.3, instead of looking for θ or sp(P), we can try to find θ̂ or sp(P̂) to determine whether the Markov chain of interest is positive recurrent, null recurrent, or transient. This technique proves useful in solving the classification problem of some queueing systems where a way of decomposition arises naturally (Examples 5.1–5.4). For these cases, {λ̂(k, j)} ({μ̂(k)}), not {λ(k, j)} ({μ(k)}), truly represents the arrival (service) rates of customers; sp(P̂), not sp(P), is equal to the traffic intensity. To avoid heavy notation, we shall only use notation without "^" when the definition of the Markov chain is clear.
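The invariance behind Theorem 3.3 (the decomposition shifts Λ and M by the same diagonal matrix Δ, so Λ − M and hence θ are unchanged) is easy to check numerically. A sketch with hypothetical rates and a hypothetical shift Δ:

```python
import numpy as np

def sp(P):
    """Perron-Frobenius eigenvalue: the eigenvalue of largest modulus."""
    return max(abs(np.linalg.eigvals(P)))

lam = np.array([[0.2, 0.3], [0.1, 0.4]])       # hypothetical Lambda
mu = np.array([0.8, 0.6])                      # diagonal of M
delta = np.diag([0.15, 0.05])                  # hypothetical Delta from decomposition (3.12)

lam_hat = lam + delta                          # Lambda-hat = Lambda + Delta
M_hat = np.diag(mu) + delta                    # M-hat = M + Delta

# Lambda-hat - M-hat = Lambda - M, so theta-hat = theta in Eq. (3.1).
assert np.allclose(lam_hat - M_hat, lam - np.diag(mu))

P = lam @ np.linalg.inv(np.diag(mu))
P_hat = lam_hat @ np.linalg.inv(M_hat)
# Here sp(P) < 1, and sp(P) <= sp(P-hat) < 1 as claimed before Theorem 3.3.
print(sp(P), sp(P_hat))
```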
4. Some interesting cases
In this section, we use some examples to show when the classification of Markov chain {Xn, n ≥ 0} can be easier. Basically, we want to know when sp(P) is less than, equal to, or larger than one without calculating the Perron–Frobenius eigenvalue of P or solving linear programming problem (3.11). Let
ρ(k) = Σ_{j=1}^{K} λ(k, j)/μ(j), 1 ≤ k ≤ K.   (4.1)
Intuitively, {ρ(k), 1 ≤ k ≤ K} have much to do with the positive recurrence of Markov chain {Xn, n ≥ 0}. In fact, the following conclusion holds.
Corollary 4.1. The Markov chain defined in Section 2 is positive recurrent if ρ(k) < 1, 1 ≤ k ≤ K.
Proof. By Theorem 3.2, we only have to prove that there exists a positive vector b such that (Λ − M)b < 0. Let bj = 1/μ(j), 1 ≤ j ≤ K. It is easy to show that max_{1≤k≤K} {((Λ − M)b)_k} = max_{1≤k≤K} {ρ(k) − 1} < 0. Therefore, θ < 0. Notice that vector b should be normalized to have ||b|| = 1, but this is not essential. Thus, the Markov chain is positive recurrent. This completes the proof.
Corollary 4.1 shows that sp(P) ≤ max_{1≤k≤K} {ρ(k)}. But the condition ρ(k) < 1, 1 ≤ k ≤ K, is not necessary for positive recurrence of the Markov chain. The vector b introduced in the proof of Corollary 4.1 is a special direction, which may or may not be the eigenvector corresponding to the Perron–Frobenius eigenvalue of matrix P. Failure of (Λ − M)b < 0 in this particular direction does not imply that the Markov chain is not positive recurrent. Here is an example.
Example 4.1. Consider a Markov chain of M/G/1 type with a tree structure and K = 2. The transition probabilities (only nonzero ones) of this Markov chain are given as
a0(1, 1) = 0.1, a0(1, 2) = 0.2,
a0(2, 1) = 0.1, a0(2, 2) = 0.1,
a1(2) = 0.3, a1(1, 2) = 0.3,
a2(1) = 0.4, a2(2) = 0.5.
By definition, λ(1, 1) = 0.1, λ(1, 2) = 0.2 + 0.3 = 0.5, λ(2, 1) = 0.1, λ(2, 2) = 0.1, μ(1) = 0.7, and μ(2) = 0.5. Then ρ(1) = 0.1/0.7 + 0.5/0.5 > 1 and ρ(2) = 0.1/0.7 + 0.1/0.5 < 1. On the other hand, for b = (1, 1)^T, max_{1≤k≤2} {((Λ − M)b)_k} = −0.1. Thus, this Markov chain is positive recurrent.
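The conclusion of Example 4.1 can be checked directly with numpy (same rates as in the example):

```python
import numpy as np

# Rates of Example 4.1; note lambda(1, 2) = a0(1, 2) + a1(1, 2) = 0.5.
lam = np.array([[0.1, 0.5],
                [0.1, 0.1]])
mu = np.array([0.7, 0.5])

rho = lam @ (1.0 / mu)                 # rho(k) = sum over j of lambda(k, j) / mu(j)
P = lam @ np.linalg.inv(np.diag(mu))
sp_P = max(abs(np.linalg.eigvals(P)))
print(rho, sp_P)   # rho(1) > 1, yet sp(P) < 1: positive recurrent anyway
```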
In Example 4.1, the Markov chain can go from node J = 1 to node J = 2 in one transition. This type of jump, from one branch of the tree to another without passing their parent node, makes the Markov chain more complicated. One would think that this is the reason why max_{1≤k≤K} {ρ(k)} < 1 is not necessary for positive recurrence. The following example shows that max_{1≤k≤K} {ρ(k)} < 1 is not necessary for positive recurrence even without such jumps.
Example 4.2. Consider a Markov chain of M/G/1 type with a tree structure and K = 2. The transition probabilities of this Markov chain are given as
a0(1, 1) = 0.1, a0(1, 2) = 0.1,
a0(2, 1) = 0.3, a0(2, 2) = 0.3,
a2(1) = 0.8, a2(2) = 0.4.
Then λ(1, 1) = 0.1, λ(1, 2) = 0.1, λ(2, 1) = 0.3, λ(2, 2) = 0.3, μ(1) = 0.8, and μ(2) = 0.4. For this example, ρ(1) = 0.1/0.8 + 0.1/0.4 < 1 and ρ(2) = 0.3/0.8 + 0.3/0.4 = 9/8 > 1. On the other hand, for b = (1, 3.1)^T, max_{1≤k≤2} {((Λ − M)b)_k} = −0.01. Thus, the Markov chain is positive recurrent.
The following example is presented to show that, sometimes, sp(P) can be found without solving any equation.
Corollary 4.2. For the Markov chain defined in Section 2, assume that λ(k, j) = λ(j), 1 ≤ k, j ≤ K. That is, the arrival rates are independent of the current node. Then sp(P) = ρ(1) = ··· = ρ(K), and the classification of the Markov chain is determined by ρ(1).
Proof. In this case, it is clear that ρ(1) = ··· = ρ(K). By Theorem 3.2, we only have to prove that ρ(1) is the Perron–Frobenius eigenvalue of matrix P. Let bk = 1/μ(k), 1 ≤ k ≤ K. It is easy to see that Λb = ρ(1)e and Mb = e, so that Pe = ΛM^{-1}e = Λb = ρ(1)e. Since e is positive, sp(P) = ρ(1). This completes the proof.
Finally in this section, we look at the measure ρ* = λ(1, 1)/μ(1) + ··· + λ(K, K)/μ(K). This is usually the traffic intensity in queueing theory when multiple types of customers are present. In general, it is clear that ρ* is not enough to classify the Markov chain, but it is enough for some special cases. As in the special case defined in Corollary 4.2, sp(P) = ρ*.
Another special case of interest is when all the service rates are the same, i.e., μ(1) = ··· = μ(K). Then ρ* = [λ(1, 1) + ··· + λ(K, K)]/μ(1). One question is whether or not sp(P) = ρ*. This is not true in general. Here is a counterexample.
Example 4.3. Consider a Markov chain of M/G/1 type with a tree structure and K = 2. The transition probabilities of this Markov chain are given as
a0(1, 1) = 0.3, a0(1, 2) = 0.1,
a0(2, 1) = 0.1, a0(2, 2) = 0.3,
a2(1) = 0.6, a2(2) = 0.6.
Then λ(1, 1) = 0.3, λ(1, 2) = 0.1, λ(2, 1) = 0.1, λ(2, 2) = 0.3, and μ(1) = μ(2) = 0.6. In this example, ρ* = 0.3/0.6 + 0.3/0.6 = 1. On the other hand, for b = (1, 1)^T, max_{1≤k≤2} {((Λ − M)b)_k} = −0.2, which implies sp(P) < 1. Thus, the Markov chain is positive recurrent.
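Example 4.3 can likewise be verified numerically; with the rates of that example, sp(P) works out to 2/3 even though ρ* = 1:

```python
import numpy as np

# Rates of Example 4.3: equal service rates mu(1) = mu(2) = 0.6.
lam = np.array([[0.3, 0.1],
                [0.1, 0.3]])
mu = np.array([0.6, 0.6])

rho_star = lam[0, 0] / mu[0] + lam[1, 1] / mu[1]   # the usual traffic intensity
P = lam @ np.linalg.inv(np.diag(mu))
sp_P = max(abs(np.linalg.eigvals(P)))
print(rho_star, sp_P)    # rho* = 1.0 but sp(P) = 2/3 < 1: positive recurrent
```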
5. Queueing examples
In this section, the classification problem of a number of queueing systems is studied by using the results obtained in Sections 3 and 4. The key is to find the Perron–Frobenius eigenvalue of matrix P for the queueing systems. We begin with a simple discrete time queueing model.
Example 5.1 (The GEO[K]/GEO[K]/1/LCFS preemptive resume queue). Consider a discrete time single-server queueing system with a marked geometric arrival process (the discrete time counterpart of the marked Poisson process; see He and Neuts [7] and Neuts [13]) and geometric service times. All customers are served on a last-come-first-served (LCFS) preemptive resume basis. When a customer arrives (at the end of a period), if the current service is not completed, it pushes the current customer in the server out and starts its own service. When a customer in queue reenters the server, its service time resumes at the point at which it was pushed out.
Let pk be the probability that a customer of type k arrives by the end of a period, 1 ≤ k ≤ K, and p0 the probability that no customer arrives in a period. Then {p0, p1, ..., pK} describes the arrival process with 1 = p0 + p1 + ··· + pK. If a type k customer is in service at the beginning of a period, its service will be completed by the end of the period with probability qk. Thus, the service times have geometric distributions and {q1, ..., qK} describes the service processes.
Let Xn be the queue string at the beginning of period n, including the customer in service. Clearly {Xn, n ≥ 0} is a Markov chain of M/G/1 type with a tree structure. The transition probabilities of this Markov chain are given as (after the decomposition defined by Eq. (3.12)): a0(k, j) = (1 − qk)pj, a1(k, j) = qk pj, a1(k) = (1 − qk)p0, and a2(k) = qk p0. Then the arrival rates and service rates are given as λ(k, j) = pj and μ(k) = qk. It is easy to see that sp(P) = ρ = p1/q1 + ··· + pK/qK. Therefore, the Markov chain {Xn, n ≥ 0} of the queueing system is positive recurrent if ρ < 1, null recurrent if ρ = 1, or transient if ρ > 1.
Note. It is easy to see that ρ = p1/q1 + ··· + pK/qK provides information for a complete classification of the GEO[K]/GEO[K]/1 queue with a FCFS, LCFS nonpreemptive, LCFS preemptive resume, or LCFS preemptive repeat service discipline.
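The criterion of Example 5.1 reduces to a single number; a sketch with hypothetical arrival and service parameters:

```python
# Hypothetical GEO[K]/GEO[K]/1 parameters for K = 3: p_k is the probability that a
# type k customer arrives in a period, q_k the per-period service completion
# probability of a type k customer.
p = [0.1, 0.15, 0.2]          # p_0 = 1 - sum(p) = 0.55
q = [0.5, 0.6, 0.8]

rho = sum(pk / qk for pk, qk in zip(p, q))   # sp(P) = p_1/q_1 + ... + p_K/q_K
print(rho)   # 0.7 < 1 here, so the queue string chain is positive recurrent
```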
Example 5.2 (The BGEO[K]/G[K]/1/LCFS nonpreemptive queue). Consider a discrete time single-server queueing system with a marked batch geometric arrival process and general service times. All customers are served on a LCFS nonpreemptive basis. That is, the service of any customer will not be interrupted until it is completed.
Let pJ be the probability that a batch of type J arrives by the end of a period for J ∈ ℵ and J > 0, and p0 the probability that there is no customer arriving in a period. Customers join the queue according to their order in the batch. Then {pJ, J ∈ ℵ} describes the arrival process. The distribution of the service time S(k) of a type k customer is given by {qk(n), n ≥ 1}, i.e., P{S(k) = n} = qk(n). The mean number EJ(k) of type k customers who arrive during a unit period of time and the mean service time ES(k) of a type k customer are given as
EJ(k) = Σ_{J∈ℵ, J>0} pJ N(J, k), ES(k) = Σ_{n=1}^{∞} n qk(n).   (5.1)
We observe the queueing system at customer departure epochs and the epochs when arriving customers find an empty queue. Let Xn be the queue string right after the nth departure or empty queue epoch. {Xn; n≥0} is an embedded Markov chain of M/G/1 type with a tree structure. Let N(k) be the number of batches that arrived during the service period of a type k customer. Let ζ(k) be the string consisting of all customers who arrived during the service period of a type k customer, i.e., ζ(k) = J1 + ··· + J_{N(k)}, where Ji is the string of customers who arrived in the ith period. The distribution of ζ(k) is given as, for H ∈ ℵ,

P{ζ(k) = H} = Σ_{n≥1} qk(n) Σ p_{J1} ··· p_{Jn},

where the inner sum is over all (J1, ..., Jn) with J1 + ··· + Jn = H (empty batches allowed). The transition of Markov chain {Xn} is given by Xn+1 = (J + k) − k + ζ(k) = J + ζ(k), given that Xn = J + k, and Xn+1 is an arbitrary batch when Xn = 0.
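The embedded transition (serve the last customer of the string, append everything that arrived during its service) can be sketched in a few lines. This is a toy illustration with hypothetical K = 2 parameters and one reasonable period-level timing convention (a batch may also arrive in the completion period), not code from the paper:

```python
import random

random.seed(7)

# Hypothetical K = 2 data: batch () means "no arrival" (probability p0);
# the service of a type k customer lasts Geometric(q[k]) periods.
batches = {(): 0.5, (1,): 0.3, (2,): 0.15, (1, 2): 0.05}
q = {1: 0.6, 2: 0.8}

def draw_batch():
    """Draw the batch (possibly empty) arriving in one period."""
    r, acc = random.random(), 0.0
    for J, pJ in batches.items():
        acc += pJ
        if r < acc:
            return J
    return ()

def step(X):
    """One embedded transition of {Xn}: serve the last customer of string X
    (LCFS nonpreemptive) and append all customers who arrive during its
    service; from an empty queue, wait for the first nonempty batch."""
    if not X:
        J = draw_batch()
        while not J:
            J = draw_batch()
        return J
    J, k = X[:-1], X[-1]
    zeta = ()                       # string of arrivals during the service of k
    while True:                     # one loop pass per service period
        zeta += draw_batch()
        if random.random() < q[k]:  # service finishes at the end of this period
            break
    return J + zeta                 # X_{n+1} = (J + k) - k + zeta
```

Strings are represented as tuples of customer types, so concatenation is tuple addition, mirroring the J + H notation.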
Eq. (5.3), λ(k,j) = ES(k)EJ(j), is intuitive, since the average total number of type j customers who arrive during the service time of a type k customer is the average number of type j customers who arrive in a unit period of time multiplied by the mean service time of the type k customer. Then matrix P of the corresponding Markov chain is given as P = M^{-1}(λ(k,j)) (with M and P as defined in Section 2). Because of the special structure, the Perron–Frobenius eigenvalue of matrix P is given as sp(P) = ρ = Σ_{k=1}^{K} ES(k)EJ(k) (the classical traffic intensity of the queueing system), which determines the classification of the queueing system.
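The identity sp(P) = Σ_k ES(k)EJ(k) can be checked numerically. The sketch below assumes the rank-one form P(k,j) = ES(k)EJ(j) suggested by Eq. (5.3), and uses illustrative K = 3 data:

```python
def spectral_radius(P, iters=100):
    """Approximate the Perron-Frobenius eigenvalue of a nonnegative,
    irreducible matrix by power iteration (pure-Python sketch)."""
    n = len(P)
    x = [1.0] * n
    r = 1.0
    for _ in range(iters):
        y = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
        r = max(y)                 # Rayleigh-type estimate of the eigenvalue
        x = [yi / r for yi in y]   # renormalize the iterate
    return r

# Illustrative K = 3 data; since P has rank one, sp(P) equals its trace.
ES = [4.0, 2.0, 1.0]
EJ = [1.0 / 12, 1.0 / 6, 1.0 / 4]
P = [[ES[k] * EJ[j] for j in range(3)] for k in range(3)]

rho = sum(ES[k] * EJ[k] for k in range(3))   # classical traffic intensity, 11/12
print(abs(spectral_radius(P) - rho) < 1e-9)  # → True
```

For a rank-one nonnegative matrix the power iteration converges after a couple of steps, so the agreement is exact up to rounding.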
Note. It is easy to see that the classification of the BGEO[K]/G[K]/1 queue with a FCFS (He [4]), LCFS preemptive resume, or other work-conserving service discipline is the same as that of Example 5.2.
Note. Let us consider a BGEO[K]/G[K]/1 queue with K = 3, arrival rates EJ(1) = 1/12, EJ(2) = 1/6, EJ(3) = 1/4, and mean service times ES(1) = 4, ES(2) = 2, and ES(3) = 1. Then ρ = ES(1)EJ(1) + ES(2)EJ(2) + ES(3)EJ(3) = 11/12 < 1, which implies that the queueing system is positive recurrent. Consider a BGEO[1]/G[1]/1 queue with arrival rate λ = EJ(1) + EJ(2) + EJ(3) = 0.5 and mean service time estimated as the weighted average w1 ES(1) + w2 ES(2) + w3 ES(3), where the weights {wk} are nonnegative and sum to one; its traffic intensity is ρ̂ = λ[w1 ES(1) + w2 ES(2) + w3 ES(3)]. Thus, the traffic intensity of the queueing system with a single type of customer depends on the set of weights used in estimating the service time of an arbitrary customer. In general, ρ̂ can be far away from ρ of the original queueing system. The two are equal (i.e., ρ̂ = ρ) when the weights are chosen as wk = EJ(k)[EJ(1) + ··· + EJ(K)]^{-1}, 1 ≤ k ≤ K.
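The dependence on the weights can be made concrete with the K = 3 data of this note; rho_hat below is the single-type traffic intensity under a hypothetical weight vector w:

```python
# Parameters of the K = 3 note above.
EJ = [1.0 / 12, 1.0 / 6, 1.0 / 4]   # per-type arrival rates
ES = [4.0, 2.0, 1.0]                # per-type mean service times

lam = sum(EJ)                               # total arrival rate: 0.5
rho = sum(e * s for e, s in zip(EJ, ES))    # true traffic intensity: 11/12

def rho_hat(w):
    """Traffic intensity of the single-type approximation whose mean
    service time is the w-weighted average of ES (w sums to one)."""
    return lam * sum(wk * sk for wk, sk in zip(w, ES))

print(rho_hat([1.0 / 3] * 3))           # equal weights: 7/6 > 1 (looks transient)
print(rho_hat([e / lam for e in EJ]))   # arrival-proportional weights: 11/12 = rho
```

With equal weights the single-type estimate exceeds 1 even though the system is positive recurrent, so a careless choice of weights can flip the apparent classification.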
So far in this section, we have shown that the classification of some queueing systems is determined completely by the classical traffic intensity. The reason is that these queueing systems are work conserving. In the next example, the classification problem of a queueing system which is not work conserving is discussed.
Example 5.3 (The BGEO[K]/G[K]/1/LCFS preemptive repeat queue). Consider a discrete time queueing system with a marked batch geometric arrival process, general service times, and a LCFS preemptive repeat service discipline. When a customer reenters the server, its service time has the same distribution as that of a new customer. Parameters defined in Example 5.2 are used.
Let Xn be the queue string at the nth service completion or arrival epoch. {Xn; n≥0} is a Markov chain of M/G/1 type with a tree structure. Let a2(k) be the probability that the service of a type k customer is completed before the next batch arrives; then μ(k) = a2(k) and λ(k,j) = (1 − a2(k))EJ(j)/(1 − p0), since a preempting batch is nonzero and brings EJ(j)/(1 − p0) type j customers on average. Therefore, the Perron–Frobenius eigenvalue of matrix P is given as

sp(P) = Σ_{k=1}^{K} ((1 − a2(k))/a2(k)) EJ(k)/(1 − p0). (5.9)

According to Theorem 3.2, expression (5.9) provides complete information for the classification of the queueing system of interest.
Next, we consider continuous time counterparts of the two queueing systems considered in Examples 5.2 and 5.3.
Example 5.4 (The BM[K]/G[K]/1/LCFS nonpreemptive queue). Consider a queueing system with a marked batch Poisson arrival process and general service times. Let pJ be the arrival rate of a batch of type J for J ∈ ℵ and J > 0. Then {pJ; J ∈ ℵ, J > 0} describes the Poisson arrival process. Note that pJ can be any nonnegative real number in this and the next example. Let p0 = Σ_{J>0} pJ. Then p0 is the parameter of the exponential distribution of interarrival times. The service time of a type k customer is given by Fk(t) with Laplace–Stieltjes transform f*_k(s). Customers are served on a LCFS nonpreemptive basis. The mean arrival rate of type k customers and the mean service time of a type k customer are given as

EJ(k) = Σ_{J∈ℵ, J>0} |J|_k pJ (with |J|_k the number of type k customers in string J) and ES(k) = ∫_0^∞ t Fk(dt),
respectively. Let Xn be the queue string right after the nth departure epoch. Then {Xn; n≥0} is a Markov chain of M/G/1 type with a tree structure. The distribution of ζ(k), the string of customers who arrive during the service time of a type k customer, is given as, for H ∈ ℵ,

P{ζ(k) = H} = ∫_0^∞ Fk(dt) Σ_{m≥0} (exp{−p0 t}(p0 t)^m/m!) Σ_{J1+···+Jm=H} (p_{J1}/p0) ··· (p_{Jm}/p0).

Therefore,

λ(k,j) = ∫_0^∞ Fk(dt) Σ_{m=1}^∞ (exp{−p0 t}(p0 t)^m/m!) × (m EJ(j)/p0) = ES(k)EJ(j). (5.12)
It is clear that sp(P) = ρ = Σ_{k=1}^{K} ES(k)EJ(k) determines the classification of the queueing system of interest. Again, this result applies to any BM[K]/G[K]/1 queue with a work-conserving service discipline.
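Eq. (5.12) can be sanity-checked by simulation: batches arriving at rate p0 over a service time of mean ES(k) carry, on average, ES(k)EJ(j) type j customers. The sketch below uses hypothetical parameters and, purely for the simulation, an exponential service time (Eq. (5.12) itself holds for any Fk):

```python
import random

random.seed(2024)

# Hypothetical parameters: p0 = total batch arrival rate, EJj = arrival rate
# of type j customers, ESk = mean service time of a type k customer.
p0, EJj, ESk = 2.0, 0.6, 1.5

# Monte Carlo estimate of lambda(k, j): count batch arrivals of a Poisson
# process of rate p0 during a service time T; each batch carries EJj / p0
# type j customers on average.
n, total = 100_000, 0.0
for _ in range(n):
    T = random.expovariate(1.0 / ESk)      # service time draw (mean ESk)
    m, t = 0, random.expovariate(p0)       # batch arrivals before time T
    while t < T:
        m += 1
        t += random.expovariate(p0)
    total += m * (EJj / p0)

print(total / n)   # ≈ 0.9 = ES(k) EJ(j)
```

The estimate agrees with ES(k)EJ(j) = 1.5 × 0.6 = 0.9 up to Monte Carlo noise, which is the content of Wald-type reasoning behind Eq. (5.12).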
Example 5.5 (The BM[K]=G[K]=1=LCFS preemp-tive repeat queue). Consider the queueing system dened in Example 5.4 when customers are served on a LCFS preemptive repeat basis. LetXn be the queue
string at thenth service completion or arrival epoch. Then {Xn; n¿0} is a Markov chain because of the
preemptive repeat service discipline. The transition probabilities of this Markov chain are given as
a0(k; J) =
Z ∞
0
(1−exp{−p0t})Fk(dt)
pJ
p0
;
a1(k) = 0; a2(k) =
Z ∞
0
exp{−p0t}Fk(dt):
(5.13)
Then we obtain

μ(k) = a2(k) = f*_k(p0), (5.14)

λ(k,j) = ∫_0^∞ (1 − exp{−p0 t}) Fk(dt) (EJ(j)/p0) = ((1 − f*_k(p0))/p0) EJ(j). (5.15)

Therefore, the Perron–Frobenius eigenvalue of matrix P is given as

sp(P) = Σ_{k=1}^{K} ((1 − f*_k(p0))/f*_k(p0)) (EJ(k)/p0), 0 < p0 < ∞. (5.16)

According to Theorem 3.2, the classification of the Markov chain or the queueing system is determined completely by the expression in Eq. (5.16).
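As a consistency check of Eq. (5.16) (a sketch with assumed rates, not from the paper): for exponential service times, f*_k(s) = qk/(qk + s), the factor (1 − f*_k(p0))/f*_k(p0) reduces to p0/qk, and sp(P) collapses to the classical traffic intensity Σ_k EJ(k)/qk, as one would expect from the memorylessness of exponential service:

```python
# Assumed rates for a K = 2 illustration.
q = [2.0, 5.0]        # exponential service rates
EJ = [0.8, 1.5]       # per-type customer arrival rates
p0 = 1.3              # total batch arrival rate

def lst_exp(qk, s):
    """Laplace-Stieltjes transform of an exponential(qk) service time."""
    return qk / (qk + s)

# Eq. (5.16) evaluated term by term ...
sp_P = sum((1.0 - lst_exp(qk, p0)) / lst_exp(qk, p0) * ej / p0
           for qk, ej in zip(q, EJ))
# ... versus the classical traffic intensity.
rho = sum(ej / qk for qk, ej in zip(q, EJ))

print(abs(sp_P - rho) < 1e-12)   # → True
```

For non-exponential Fk the two expressions differ in general, which is exactly why the preemptive repeat discipline is not classified by the classical traffic intensity.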
6. Continuous time Markov chain of M/G/1 type with a tree structure

In this section, the continuous time Markov chains (Markov processes) of M/G/1 type with a tree structure are discussed briefly. Results obtained in Sections 3–5 hold for the continuous time case. Some of them are restated for the continuous time case in this section.
A Markov process {X(t); t≥0} is of M/G/1 type with a tree structure if it is defined on a K-ary tree and its transition rates are described as follows: the transition from node J + k is to
(1) J + k + H with rate a0(k,H) when H ∈ ℵ and H > 0;
(2) J + H with rate a1(k,H) when H = j1···j|H| ∈ ℵ and 0 < j1 ≠ k;
(3) J with rate a(J + k, J) = a2(k);
(4) H with rate a0(0,H) when J = 0, H ∈ ℵ and H > 0.
The sojourn time in node J + k has an exponential distribution with parameter −[Σ_H (a0(k,H) + a1(k,H)) + a2(k)], and the sojourn time in the root node has an exponential distribution with parameter −Σ_H a0(0,H).
{λ(k,j), μ(k), ρ, M, P, sp(P)} are the same as those defined in Section 2. We assume that Markov process X(t) is irreducible, {λ(k,j), μ(k), 0 ≤ k ≤ K, 1 ≤ j ≤ K} are finite, and matrix P is irreducible.
Theorem 6.1. For the Markov process of M/G/1 type with a tree structure {X(t); t≥0} defined above, it is
(1) positive recurrent if and only if sp(P) < 1;
(2) null recurrent if and only if sp(P) = 1;
(3) transient if and only if sp(P) > 1.
Proof. The proof is similar to that of Theorem 3.2. Details are omitted.
Example 6.1 (The M[K]/M[K]/1 queue). Consider a queueing system with a marked Poisson arrival process and exponential service times. Let pk be the arrival rate of type k customers, 1 ≤ k ≤ K. Then {p0, p1, ..., pK} describes the arrival process, with p0 = p1 + ··· + pK. If a type k customer is in service, its service time has an exponential distribution with parameter qk. Suppose that the LCFS preemptive resume service discipline is applied.

Let X(t) be the queue string at time t. Then the infinitesimal generator of Markov process {X(t); t≥0} is given as: a0(k,j) = pj, a1(k) = −qk − p0, and a2(k) = qk. The total arrival rates and service rates are given as λ(k,j) = pj and μ(k) = qk. It is easy to see that sp(P) = ρ = p1/q1 + ··· + pK/qK determines the classification of the Markov process or the queueing system.
Note. It is clear that the result can be generalized to the batch arrival case. It is also easy to see that the result can be extended to the multiple server case, i.e., the BM[K]/M[K]/s queue with a FCFS, LCFS nonpreemptive, or LCFS preemptive resume service discipline is classified by ρ.
Note. It is easy to see that the decomposition technique can be used in the continuous time case.
Acknowledgements
The author would like to thank Dr. Bhaskar Sengupta for his valuable comments and suggestions.
References
[1] G. Fayolle, V.A. Malyshev, M.V. Menshikov, Topics in the Constructive Theory of Countable Markov Chains, Cambridge University Press, Cambridge, 1995.
[2] A.S. Gajrat, V.A. Malyshev, M.V. Menshikov, K.D. Pelih, Classification of Markov chains describing the evolution of a string of characters, Uspehi Matematicheskih Nauk 50 (2) (1995) 5–24.
[3] F.R. Gantmacher, The Theory of Matrices, Chelsea, New York, 1959.
[4] Q.-M. He, Queues with marked customers, Adv. Appl. Probab. 28 (1996) 567–587.
[5] Q.-M. He, Classification of Markov processes of matrix M/G/1 type with a tree structure and its applications to queueing models, submitted for publication.
[6] Q.-M. He, A.S. Alfa, The MMAP[K]/PH[K]/1 queues with a last-come-first-served preemptive service discipline, Queueing Systems 29 (1998) 269–291.
[7] Q.-M. He, M.F. Neuts, Markov arrival processes with marked transitions, Stochastic Process. Appl. 74/1 (1998) 37–52.
[8] J.G. Kemeny, J.L. Snell, A.W. Knapp, Denumerable Markov Chains, D. Van Nostrand Company, Inc., Princeton, NJ, 1966.
[9] P.R. Kumar, S.P. Meyn, Duality and linear programs for stability and performance analysis of queueing networks and scheduling policies, IEEE Trans. Automat. Control 41 (1) (1996) 4–17.
[10] V.A. Malyshev, Interacting strings of symbols, Russian Math. Surveys 52 (2) (1997) 59–86.
[11] V.A. Malyshev, Stochastic evolution via graph grammars, INRIA Research Report #3380, 1998.
[12] S.P. Meyn, R. Tweedie, Markov Chains and Stochastic Stability, Springer, Berlin, 1993.
[13] M.F. Neuts, A versatile Markovian point process, J. Appl. Probab. 16 (1979) 764–779.
[14] M.F. Neuts, Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, The Johns Hopkins University Press, Baltimore, 1981.
[15] M.F. Neuts, Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker, New York, 1989.
[16] T. Takine, B. Sengupta, R.W. Yeung, A generalization of the matrix M/G/1 paradigm for Markov chains with a tree structure, Stochastic Models 11 (1995) 411–421.