Vietnam J. Math. (2013) 41:313–321
DOI 10.1007/s10013-013-0026-2
Interval Drazin Monotonicity of Matrices
Litismita Jena · Sabyasachi Pani

School of Basic Sciences, Indian Institute of Technology Bhubaneswar, Bhubaneswar 751013, India
e-mail: [email protected]
Received: 5 October 2012 / Accepted: 6 May 2013 / Published online: 12 June 2013
© Vietnam Academy of Science and Technology (VAST) and Springer Science+Business Media Singapore 2013
Abstract  The notion of interval Drazin monotonicity is introduced first. Then a characterization of interval Drazin monotonicity is presented using the notion of interval boundedness. Finally, a new result for Drazin monotonicity is obtained.

Keywords  Drazin inverse · Group inverse · Nonnegativity · Interval boundedness · Interval Drazin monotonicity

Mathematics Subject Classification (2010)  15A09 · 15A48
1 Introduction
A real square matrix A is called monotone if Ax ≥ 0 implies x ≥ 0 for all x ∈ R^n, where y ≥ 0 for y = (y_1, y_2, ..., y_n)^T ∈ R^n means that y_i ≥ 0 for all i = 1, 2, ..., n. The concept of monotonicity plays an important role in numerical analysis, optimization, and economics.
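For example, the 2 × 2 matrix A with rows (2, −1) and (−1, 2) is monotone: its inverse has rows (2/3, 1/3) and (1/3, 2/3), so Ax ≥ 0 implies x = A^{-1}(Ax) ≥ 0. In contrast, the matrix with rows (1, 2) and (2, 1) is not monotone, since its inverse contains negative entries; this is precisely the characterization of Collatz recalled next.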
Collatz [5] proved that a matrix A is monotone if and only if A^{-1} exists and A^{-1} ≥ 0, where B ≥ 0 denotes that all the entries of the matrix B are nonnegative. He introduced this class of matrices for solving the systems of linear equations which arise when finite difference techniques are applied to elliptic partial differential equations. Thereafter, the notion of monotonicity has been generalized in many ways. Mangasarian [11] extended the concept of monotonicity to real rectangular matrices and proved that monotonicity is equivalent to the existence of a nonnegative left inverse of A. Berman and Plemmons [2, Theorem 3] then generalized the notion of monotonicity to arbitrary rectangular matrices. Again, Berman and Plemmons characterized the nonnegativity of the group inverse (see [3, Theorem 1]). Later on, Pye extended this notion to the Drazin inverse (see [15, Theorem 3] for more details). We refer the reader to the book [4] for more details. Mishra and Sivakumar [12] showed that the least elements of polyhedral sets can be found in terms of the nonnegative Moore–Penrose inverse, group inverse, and Drazin inverse. (See the next section for the definitions of the Moore–Penrose inverse, group inverse, and Drazin inverse.) Jena and Mishra [7] obtained many results for nonnegativity of the Drazin inverse using several new matrix decompositions. More recently, Jena and Pani [8] proposed an extension of Drazin monotonicity called index-range monotonicity and then presented a characterization of this notion using a subclass of index-proper splittings. Applications of the Drazin inverse lie in many areas such as singular differential and difference equations, Markov chains, cryptography, iterative methods, multibody dynamics, and optimal control.
Jayaraman and Sivakumar [6] introduced the notion of interval semi-monotonicity. They then obtained a characterization of interval semi-monotonicity using interval boundedness of a matrix. Kurmayya and Sivakumar [10] also proved certain results for matrices having a nonnegative Moore–Penrose inverse by extending the work of Peris [14].
The purpose of this article is to study nonnegativity of the Drazin inverse as well as interval nonnegativity of the Drazin inverse of a real square matrix. After a brief review of basic definitions and notation in Sect. 2, the notion of interval Drazin monotonicity is introduced in the next section. We then present a characterization of interval Drazin monotonicity, among other results, in the same section. Section 4 contains a result which shows that a real square matrix A is Drazin monotone under a sufficient condition.
2 Preliminaries
This article deals with R^n equipped with its standard cone R^n_+, and all matrices are real square matrices of order n unless stated otherwise. The set of all m × n matrices over the real numbers R is denoted by R^{m×n}. We denote the transpose, the null space, and the range space of A ∈ R^{m×n} by A^T, N(A), and R(A), respectively. A is said to be nonnegative (i.e., A ≥ 0) if all the entries of A are nonnegative, and B ≥ C for matrices B and C if B − C ≥ 0. Similarly, a vector x = (x_i) ≥ 0 means that x_i ≥ 0 for all i, and for vectors y and z, y ≥ z means y − z ≥ 0. Let L, M be complementary subspaces of R^n. Then P_{L,M} denotes the projection of R^n onto L along M. So, P_{L,M}B = B if and only if R(B) ⊆ L, and BP_{L,M} = B if and only if M ⊆ N(B). The spectral radius of a matrix A is denoted by ρ(A) and equals the maximum of the moduli of the eigenvalues of A.
The Moore–Penrose inverse of a matrix A ∈ R^{m×n}, denoted by A^†, is the unique solution X of the equations

AXA = A,  XAX = X,  (AX)^T = AX,  and  (XA)^T = XA.

The index of A ∈ R^{n×n} is the least nonnegative integer k such that rank(A^{k+1}) = rank(A^k), and we denote it by ind A. Then ind A = k if and only if R(A^k) ⊕ N(A^k) = R^n. Also, for l ≥ k, R(A^l) = R(A^k) and N(A^l) = N(A^k). The Drazin inverse of a matrix A ∈ R^{n×n} is the unique solution X ∈ R^{n×n} satisfying the equations

A^k = A^k X A,  XAX = X,  and  AX = XA,

where k is the index of A. It is denoted by A^D. When k = 1, the Drazin inverse is called the group inverse and is denoted by A^#. If A^# exists, then we say that A is group invertible. While the Drazin inverse exists for all matrices, the group inverse does not: it exists if and only if ind A ≤ 1 (i.e., R(A) ⊕ N(A) = R^n). If A is nonsingular, then of course we have A^{-1} = A^† = A^D.
A is said to be semi-monotone and Drazin monotone if A^† ≥ 0 and A^D ≥ 0, respectively. Similarly, A is group monotone if A^# exists and A^# ≥ 0. Berman and Plemmons [2, Theorem 2] and [3, Theorem 1] proved the following characterizations of semi-monotone and group monotone matrices: A is semi-monotone if and only if Ax ∈ R^n_+ + N(A^T) and x ∈ R(A^T) imply x ≥ 0, and A is group monotone if and only if Ax ∈ R^n_+ + N(A) and x ∈ R(A) imply x ≥ 0. Thereafter, Pye [15, Theorem 3] established that A is Drazin monotone if and only if Ax ∈ R^n_+ + N(A^k) and x ∈ R(A^k) imply x ≥ 0. Some well-known properties of A^D collected in [1] are R(A^k) = R(A^D), N(A^k) = N(A^D), and AA^D = P_{R(A^k),N(A^k)} = A^D A. In particular, if x ∈ R(A^k), then x = A^D A x.
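Although not needed for the theory below, the Drazin inverse is easy to compute numerically from the well-known identity A^D = A^l (A^{2l+1})^† A^l, valid for any l ≥ ind A. The following NumPy sketch (an added illustration, not part of the original text; the helper name drazin_inverse is ours) implements this identity and checks the three defining equations on a small singular matrix of index 2.

import numpy as np

def drazin_inverse(A, tol=1e-10):
    # Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k with k = ind(A).
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    k, Ak = 0, np.eye(n)                                  # Ak stores A^k (A^0 = I)
    while np.linalg.matrix_rank(Ak @ A, tol) != np.linalg.matrix_rank(Ak, tol):
        k, Ak = k + 1, Ak @ A                             # stop when rank(A^(k+1)) = rank(A^k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])                              # singular, ind(A) = 2
X = drazin_inverse(A)
A2 = A @ A
print(np.allclose(A2 @ X @ A, A2),                        # A^k X A = A^k
      np.allclose(X @ A @ X, X),                          # X A X = X
      np.allclose(A @ X, X @ A))                          # A X = X A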
The following result will be used to prove the main results; it is part of the well-known Perron–Frobenius theorem.

Theorem 1 [16, Theorem 2.20]  Let A be a real square nonnegative matrix. Then A has a nonnegative real eigenvalue equal to its spectral radius.
3 Interval Drazin Monotonicity
In this section the notion of interval Drazin monotonicity is studied. But before that, let us recall some definitions.
Definition 1 [6, Definition 3.1]  Let x, y ∈ R^n be such that y − x ≥ 0. Then the set [x, y] := {z ∈ R^n : x ≤ z ≤ y} is called an order interval.

Here, we refer to an order interval simply as an interval. When none of the components of y is finite, the interval becomes the nonnegative orthant based at x and is denoted by [x, ∞). All the intervals [x, y] in this section have x ≥ 0 and y − x ≥ 0. We next present the definition of interval boundedness.
Definition 2 [6, Definition 3.3]  A ∈ R^{m×n} is said to be interval bounded if for some interval J_2 in R^m, there exists an interval J_1 in R^n such that A(J_1) ⊆ J_2.

The example given below illustrates the above definition.
Example 1  Let A be the 2 × 2 matrix with both rows equal to (0, 1), J_1 = [(0, 1)^T, (0.5, 2.5)^T], and J_2 = [(0.5, 1)^T, (2.5, 3)^T]. Let z = (x, y)^T ∈ J_1, so that x ∈ [0, 0.5] and y ∈ [1, 2.5]. Then Az = (y, y)^T ∈ J_2, since y ∈ [1, 2.5] = [0.5, 2.5] ∩ [1, 3]. As Az ∈ J_2 for every z ∈ J_1, we have A(J_1) ⊆ J_2.
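Because the matrix A of Example 1 is entrywise nonnegative, it maps the box J_1 = [u, v] into the box [Au, Av], so interval boundedness can be confirmed by checking only the two corner images. A minimal NumPy check (an added sketch, not part of the original) is:

import numpy as np

A = np.array([[0., 1.],
              [0., 1.]])                         # the matrix of Example 1
u, v = np.array([0., 1.]), np.array([0.5, 2.5])  # J1 = [u, v]
p, q = np.array([0.5, 1.]), np.array([2.5, 3.])  # J2 = [p, q]
# A >= 0, so u <= z <= v implies A u <= A z <= A v; compare the corner images with J2.
print(np.all(p <= A @ u), np.all(A @ v <= q))    # True True, hence A(J1) is contained in J2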
Motivated by [6, Definition 3.12], we now propose the definition of interval Drazin monotonicity. From now on, the symbol k always denotes the index of a matrix, and B^k means B is a matrix of index k.
Definition 3  A ∈ R^{n×n} is said to be interval Drazin monotone if, for some interval J_2 in R^n, there exists an interval J_1 in R^n such that Ax ∈ J_2 + N(A^k) and x ∈ R(A^k) imply x ∈ J_1.

For k = 1, this becomes interval group monotonicity [6, Definition 3.15]. For nonsingular A (the case k = 0), interval Drazin monotonicity reduces to interval monotonicity given in [6, Definition 3.6].

If J_1 = J_2 = [0_n, ∞) = R^n_+, then interval monotonicity is the same as usual monotonicity.
A characterization of interval Drazin monotonicity is shown next.
Theorem 2  A matrix A is interval Drazin monotone if and only if A^D is interval bounded.
Proof  Let A be interval Drazin monotone. Then for some interval J_2 in R^n, there exists an interval J_1 in R^n such that Ax ∈ J_2 + N(A^k) and x ∈ R(A^k) imply x ∈ J_1. We will prove that A^D(J_2) ⊆ J_1. For w ∈ J_2, let y = AA^D w and z = (I − AA^D)w, so that w = y + z with y ∈ R(A^D) = R(A^k) and z ∈ N(A^D) = N(A^k). Therefore A(A^D y) = y = w − z ∈ J_2 + N(A^k). Also A^D y ∈ R(A^D) = R(A^k) and A^D w = A^D y. Thus A^D w ∈ J_1, using the fact that A is interval Drazin monotone. Hence A^D is interval bounded.

Conversely, suppose that A^D is interval bounded. So there exists an interval J_1 in R^n such that A^D(J_2) ⊆ J_1 for some interval J_2 in R^n. Let x ∈ R(A^k) and Ax = b + c, where b ∈ J_2 and c ∈ N(A^k) = N(A^D). Pre-multiplying Ax = b + c by A^D, we get A^D A x = x = A^D b ∈ A^D(J_2). Since A^D(J_2) ⊆ J_1, we obtain x ∈ J_1. Hence A is interval Drazin monotone. □
Here we remark that Theorem 2 extends [6, Theorem 4.1]. The next result is due to Pye [15] and can be obtained from the above theorem by taking J_1 = R^n_+ and J_2 = R^n_+.

Corollary 1 [15, Theorem 3]  A^D ≥ 0 if and only if Ax ∈ R^n_+ + N(A^k) and x ∈ R(A^k) imply x ≥ 0.
We next present an analogue of the sufficient part of [6, Theorem 4.10] for the case of the Drazin inverse.

Theorem 3  Consider the system Ax = b. Suppose that A ≥ 0 and there exist s, t ∈ R(A^k) with s ≠ t and s ≥ 0 such that s ≤ A^D b ≤ t. Then A is interval Drazin monotone.
Proof  By Theorem 2, it is equivalent to show that A^D is interval bounded. Set J_2 = [u, v] and J_1 = [s, t], where u = As and v = At. Then u ≥ 0, v ≥ 0, and v − u ≥ 0, as A ≥ 0 and 0 ≤ s ≤ t. If v − u = 0, then t − s ∈ N(A) ⊆ N(A^k). Also t − s ∈ R(A^k). Hence t − s = 0, so that t = s. But t ≠ s. Therefore v ≠ u. We now prove that A^D(J_2) ⊆ J_1. Let w ∈ J_2. Then w = μu + (1 − μ)v, where 0 ≤ μ ≤ 1. So A^D w = μA^D u + (1 − μ)A^D v = μA^D A s + (1 − μ)A^D A t = μs + (1 − μ)t ∈ [s, t] = J_1, which in turn implies that A^D is interval bounded. □

The converse of the above result holds with the addition of some extra conditions, as presented below.
Theorem 4  Let A be interval Drazin monotone such that A^D(J_2) is not a singleton. If b ∈ AA^D(J_2), then there exist s, t ∈ R(A^k) with s ≠ t and s ≥ 0 such that s ≤ A^D b ≤ t.

Proof  By Theorem 2, there exist intervals J_1 and J_2 = [u, v] such that A^D(J_2) ⊆ J_1. Since b ∈ AA^D(J_2), we have b = AA^D w for some w ∈ J_2. Let w = μu + (1 − μ)v, where 0 ≤ μ ≤ 1. Then b = μAA^D u + (1 − μ)AA^D v. Hence A^D b = μA^D u + (1 − μ)A^D v = μs + (1 − μ)t ∈ [s, t] ⊆ J_1, where s = A^D u and t = A^D v belong to R(A^k). So s ≥ 0, as s = A^D u ∈ J_1, and t − s ≥ 0. Hence s ≤ A^D b ≤ t, and t ≠ s as A^D(J_2) is not a singleton. □
4 Nonnegative Decompositions Satisfying an Eigenvalue Property Imply Nonnegativity of the Drazin Inverse
Let us begin this section with the definition of a nonnegative decomposition of a matrix. A decomposition A = U − V is called nonnegative if U ≥ 0 and V ≥ 0. Clearly, every real matrix has such a decomposition. A sufficient condition for nonnegativity of the Drazin inverse of a real square matrix A is shown first. The proof is analogous to the proof of [10, Theorem 3.1].
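For instance, writing A^+ for the entrywise positive part of A (replace every negative entry by 0) and A^− for the entrywise positive part of −A, one always has the nonnegative decomposition A = A^+ − A^−, and adding any fixed nonnegative matrix to both terms produces further such decompositions.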
Theorem 5  Suppose that A ∈ R^{n×n} satisfies the following condition: whenever A = U − V is a nonnegative decomposition, there exist 0 ≠ y ∈ R^n_+ ∩ R(A^k) and μ ∈ [0, 1) such that Vy = μUy. Then A^D ≥ 0.
Proof  We use the method of contradiction. Let A^D = (a_{ij}) and suppose that A^D is not nonnegative. Then there exist i_0 and j_0 such that a_{i_0 j_0} < 0. Define an n × n matrix U = (u_{ij}) by

u_{ij} = b if i ≠ j_0,  and  u_{ij} = b + q if i = j_0,  for all j = 1, 2, ..., n,

where b and q are arbitrary positive numbers chosen in such a way that V := U − A ≥ 0. So we have A = U − V, a nonnegative decomposition. Hence there exist μ ∈ [0, 1) and 0 ≠ y ∈ R^n_+ ∩ R(A^k) such that Vy = μUy. Therefore Ay = (U − V)y = (1 − μ)Uy. Setting w = Uy, we get

A^D w = A^D U y = A^D ( (1/(1 − μ)) Ay ) = (1/(1 − μ)) A^D A y = (1/(1 − μ)) y ≥ 0.

Again, w = Uy with y = (y_1, y_2, ..., y_n)^T gives

w_j = b(y_1 + y_2 + ... + y_n) for j ≠ j_0,  and  w_{j_0} = (b + q)(y_1 + y_2 + ... + y_n),

where j varies from 1 to n. So, if j ≠ j_0, then w_j / w_{j_0} = b/(b + q). Let S = max_{1 ≤ j ≤ n} { |a_{i_0 j}| / |a_{i_0 j_0}| } and choose q large enough that b/(b + q) < 1/(nS + 1). Then

(A^D w)_{i_0} = (a_{i_0 1}, a_{i_0 2}, ..., a_{i_0 n}) (w_1, w_2, ..., w_n)^T
            = (y_1 + ... + y_n)(b + q) [ (b/(b + q)) Σ_{j ≠ j_0} a_{i_0 j} + a_{i_0 j_0} ]
            ≤ (y_1 + ... + y_n)(b + q) |a_{i_0 j_0}| [ S(n − 1)(b/(b + q)) − 1 ] < 0,

a contradiction to (A^D w)_{i_0} = y_{i_0}/(1 − μ) ≥ 0. Thus A^D ≥ 0. □
Example 2  Let A ∈ R^{4×4} be a nonnegative matrix with ind A = 2, and set U = 2A and V = A, so that A = U − V is a nonnegative decomposition. Then 0 ≠ (0, 25, 0, 0)^T = y ∈ R^4_+ ∩ R(A^2) and μ = 0.5 ∈ [0, 1) satisfy Vy = μUy. So A^D is nonnegative; here A^D is the 4 × 4 matrix with rows (0, 0, 1/25, 0), (0, 1/5, 0, 0), (0, 0, 1/5, 0), and (0, 0, 0, 0).
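Any decomposition of the form U = 2A, V = A (for a nonnegative A) satisfies Vy = (1/2)Uy for every vector y. The following NumPy sketch (an added illustration with a matrix of our own choosing, not the matrix used in Example 2) checks this relation and the nonnegativity of A^D for a simple nonnegative matrix of index 2.

import numpy as np

A = np.array([[0., 1., 0., 0.],
              [0., 2., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])              # nonnegative, ind(A) = 2 (illustrative choice)
U, V = 2 * A, A                               # nonnegative decomposition A = U - V
y = np.array([1., 2., 0., 0.])                # 0 != y in R^4_+ ∩ R(A^2), since A^2 e_2 = 2 y
print(np.allclose(V @ y, 0.5 * (U @ y)))      # Vy = mu Uy with mu = 1/2

X = np.array([[0., 0.25, 0., 0.],             # candidate for A^D, entrywise nonnegative
              [0., 0.50, 0., 0.],
              [0., 0.,   0., 0.],
              [0., 0.,   0., 0.]])
A2 = A @ A
print(np.allclose(A2 @ X @ A, A2),            # the three Drazin equations hold, so X = A^D >= 0
      np.allclose(X @ A @ X, X),
      np.allclose(A @ X, X @ A))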
We now prove the converse of the above result with the addition of some conditions. The technique used in the present proof is completely different from the one used in [10, Theorem 3.5]; instead, we adopt a technique similar to that of [13, Theorem 5.1]. The present proof is much simpler and uses only the Perron–Frobenius theorem.
Theorem 6  Let A ∈ R^{n×n} with 0 ≠ A^D ≥ 0. If A = U − V is a nonnegative decomposition with U ≠ 0, U group invertible, and R(U) = R(A^k), then there exist 0 ≠ x ∈ R^n_+ ∩ R(A^k) and μ ∈ [0, 1) such that Vx = μUx.
Proof  Since A^D ≥ 0 and U ≥ 0, we have A^D U ≥ 0. The Perron–Frobenius theorem (Theorem 1) then implies A^D U x = λx with λ = ρ(A^D U) ≥ 0 and 0 ≠ x ∈ R^n_+. If λ = 0, then A^D U is nilpotent. Let l be the least positive integer such that (A^D U)^l = 0. If l = 1, then A^D U = 0 and hence U = AA^D U = 0 (using R(U) = R(A^k)), a contradiction. So l ≥ 2. Set S = (A^D U)^{l−1}. Then R(S) ⊆ R(A^D) = R(A^k) = R(U). Also, for any z ∈ R^n,

U S z = U (A^D U)^{l−1} z = AA^D U (A^D U)^{l−1} z = A (A^D U)^l z = 0.

Therefore Sz ∈ N(U). Since also Sz ∈ R(S) ⊆ R(U) and U is group invertible, so that R(U) ∩ N(U) = {0}, we get Sz = 0 for all z ∈ R^n, contradicting the minimality of l. Thus λ > 0. Clearly, x = (1/λ)A^D U x ∈ R(A^D) = R(A^k). Pre-multiplying A^D U x = λx by A, we get

A A^D U x = λAx = λUx − λVx.

Since R(U) = R(A^k), we have

A A^D U x = P_{R(A^k), N(A^k)} U x = Ux,

and therefore

Vx = μUx  with  μ = (λ − 1)/λ.

If λ < 1, then Vx ≤ 0. Since V ≥ 0 and x ≥ 0, this means that Vx = 0. Consequently Ux = 0, so that Ax = 0. But x ∈ R(A^k), so x = 0, a contradiction. Hence λ ≥ 1 and so
μ = (λ − 1)/λ ∈ [0, 1). □

Example 3  Let A be the 3 × 3 matrix with rows (0, 0, 1), (0, 1, 0), and (0, 2, 0). Then ind A = 2 and 0 ≠ A^D ≥ 0. Setting U to be the matrix with rows (0, 2, 2), (0, 1, 1), (0, 2, 2) and V to be the matrix with rows (0, 2, 1), (0, 0, 1), (0, 0, 2), we get that A = U − V is a nonnegative decomposition with U ≠ 0, U group invertible (i.e., rank(U) = rank(U^2)), and R(U) = R(A^2), as

R(A^2) = span{(2, 1, 2)^T} = R(U).

Then 0 ≠ (2, 1, 2)^T = y ∈ R^3_+ ∩ R(A^2) and μ = 2/3 ∈ [0, 1) are such that Vy = μUy.
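The claims of this example are easy to verify numerically; the following NumPy sketch (an added check, not part of the original text) confirms ind A = 2, the Drazin equations for A^D = A^2 ≥ 0 (which holds for this particular A), and the relation Vy = (2/3)Uy.

import numpy as np

A = np.array([[0., 0., 1.],
              [0., 1., 0.],
              [0., 2., 0.]])
U = np.array([[0., 2., 2.],
              [0., 1., 1.],
              [0., 2., 2.]])
V = U - A                                      # rows (0,2,1), (0,0,1), (0,0,2); V >= 0
y = np.array([2., 1., 2.])
A2 = A @ A                                     # for this A, A^D = A^2 = [[0,2,0],[0,1,0],[0,2,0]] >= 0
print([np.linalg.matrix_rank(M) for M in (A, A2, A2 @ A)])   # [2, 1, 1], hence ind(A) = 2
print(np.allclose(A2 @ A2 @ A, A2),            # A^2 X A = A^2  with  X = A^2
      np.allclose(A2 @ A @ A2, A2),            # X A X = X
      np.allclose(A @ A2, A2 @ A))             # A X = X A
print(np.allclose(V @ y, (2/3) * (U @ y)))     # Vy = (2/3) Uy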
Note that [14, Theorem 1] follows from the above two theorems in the case of a nonsingular matrix A. The next corollary is motivated by [10, Corollary 3.6] and is presented below.

Corollary 2  Let 0 ≠ A ∈ R^{n×n}. Suppose that A^D ≥ 0 and A has a nonnegative decomposition A = U − V with R(U) ⊆ R(A^k). If μ is a real number such that Vx = μUx for some 0 ≠ x ∈ R^n_+ ∩ R(A^k), then μ ≤ γ := 1 − 1/ρ(C), where C = (A^D U)|_M and M is the linear span of R^n_+ ∩ R(A^k).
Proof  Let Vx = μUx with 0 ≠ x ∈ R^n_+ ∩ R(A^k). Then Ax = Ux − Vx = (1 − μ)Ux. If Ax = 0, then A^k x = 0; hence x ∈ R(A^k) ∩ N(A^k), so x = 0, a contradiction. Thus Ax ≠ 0. Therefore μ ≠ 1 and Cx = A^D U x = (1/(1 − μ))A^D A x = (1/(1 − μ))x, which imply 1/(1 − μ) > 0, as C ≥ 0 and 0 ≠ x ∈ R^n_+ ∩ R(A^k). So 1/(1 − μ) is a positive eigenvalue of C with eigenvector x. Hence 1/(1 − μ) ≤ ρ(C), i.e., μ ≤ γ = 1 − 1/ρ(C). □
The next result concludes this section with an analogue of [2, Corollary 4] for the Drazin inverse.
Theorem 7  Let A^D ≥ 0 and consider the system Ax = b. Let x_1 and x_2 be approximations in R(A^k) to A^D b such that

Ax_1 − b ≤ 0 ≤ Ax_2 − b.

Then x_1 ≤ A^D b ≤ x_2.
Proof  Since A^D ≥ 0, pre-multiplying Ax_1 − b ≤ 0 ≤ Ax_2 − b by A^D and simplifying, we get

x_1 = A^D A x_1 ≤ A^D b ≤ A^D A x_2 = x_2. □
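As a small numerical illustration of Theorem 7 (an added sketch, reusing the matrix of Example 3 and bracketing vectors x_1, x_2 ∈ R(A^2) of our own choosing):

import numpy as np

A  = np.array([[0., 0., 1.],
               [0., 1., 0.],
               [0., 2., 0.]])
AD = A @ A                                    # for this matrix A^D = A^2 >= 0 (cf. Example 3)
b  = np.array([2., 1., 2.])
x1 = np.array([0., 0., 0.])                   # x1, x2 lie in R(A^2) = span{(2,1,2)^T}
x2 = np.array([4., 2., 4.])
print(np.all(A @ x1 - b <= 0), np.all(A @ x2 - b >= 0))   # A x1 - b <= 0 <= A x2 - b
print(np.all(x1 <= AD @ b), np.all(AD @ b <= x2))         # hence x1 <= A^D b <= x2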
5 Conclusions

We hope that the present work will motivate further study of interval monotonicity for square singular matrices, and of its applications and generalizations. A first step in this direction was made in [6], where interval semi-monotonicity was discussed. It would also be desirable to know how this monotonicity can be further generalized and how other characterizations of these notions can be established. Some possible extensions of interval monotonicity are interval row monotonicity, interval range monotonicity, interval index-range monotonicity, and interval weak monotonicity. In this article, a characterization of interval Drazin monotonicity in terms of interval boundedness is first established. Then a sufficient condition for the nonnegativity of the Drazin inverse is provided. One of the main tools for showing the nonnegativity of the Drazin inverse is the idea of nonnegative decompositions of matrices.
We also note that another main result of this paper, Theorem 5, can be applied to decide whether a square singular matrix has a nonnegative Drazin inverse or not. However, the limitation of this result is its strong hypothesis. Hence Drazin monotonicity of matrices is still studied by many authors, as the Drazin inverse has many applications, such as solving systems of differential equations and systems of linear equations. In fact, for a square singular system of linear equations Ax = b, where x, b ∈ R^n and A ∈ R^{n×n}, the solution A^D b lies in a Krylov subspace generated by A and b, i.e., in K_s(A, b) = span{b, Ab, A^2 b, ..., A^{s−1} b}. The connection between the Drazin inverse and Krylov subspaces is as follows: A^D b is a solution of Ax = b with b ∈ R(A^k) if and only if Ax = b has a solution in K_n(A, b), where n is the order of the matrix A. Finally, the previous section closed with a result that has a possible application in numerical analysis when A is Drazin monotone.
We conclude this section with the note that Drazin monotonicity can also be applied to the study of the solution of a linear complementarity problem. More on this can be found in [9].
Acknowledgements  The authors thank the anonymous referees for their comments, which have improved the article.
References
1. Ben-Israel, A., Greville, T.N.E.: Generalized Inverses: Theory and Applications. Springer, New York (2003)
2. Berman, A., Plemmons, R.J.: Monotonicity and the generalized inverse. SIAM J. Appl. Math. 22, 155–161 (1972)
3. Berman, A., Plemmons, R.J.: Matrix group monotonicity. Proc. Am. Math. Soc. 46, 355–359 (1974)
4. Berman, A., Plemmons, R.J.: Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia (1994)
5. Collatz, L.: Aufgaben monotoner Art. Arch. Math. 3, 366–376 (1952)
6. Jayaraman, S., Sivakumar, K.C.: Matrix interval monotonicity. Demonstr. Math. XLIII, 1–10 (2010)
7. Jena, L., Mishra, D.: So-splittings of matrices. Linear Algebra Appl. 437, 1162–1173 (2012)
8. Jena, L., Pani, S.: Index-range monotonicity and index-proper splittings of matrices. Numer. Algebra Control Optim. 3, 379–388 (2013)
9. Jena, L., Pani, S.: PD matrices and linear complementarity problems. Submitted
10. Kurmayya, T., Sivakumar, K.C.: Positive splittings of matrices and their nonnegative Moore–Penrose inverses. Discuss. Math., Gen. Algebra Appl. 28, 227–235 (2008)
11. Mangasarian, O.L.: Characterization of real matrices of monotone kind. SIAM Rev. 10, 439–441 (1968)
12. Mishra, D., Sivakumar, K.C.: Nonnegative generalized inverses and least elements of polyhedral sets. Linear Algebra Appl. 434, 2448–2455 (2011)
13. Mishra, D., Sivakumar, K.C.: On splittings of matrices and nonnegative generalized inverses. Oper. Matrices 6, 85–95 (2012)
14. Peris, J.E.: A new characterization of inverse-positive matrices. Linear Algebra Appl. 154–156, 45–58 (1991)
15. Pye, W.C.: Nonnegative Drazin inverses. Linear Algebra Appl. 30, 149–153 (1980)
16. Varga, R.S.: Matrix Iterative Analysis. Springer Series in Computational Mathematics. Springer, Berlin (2000)