
1.2 Preliminaries

1.2.3 Minimal norm mappings

Given x, y ∈ K^n, where K = R or C, the problem of finding a matrix ∆ ∈ K^{n×n} such that ∆x = y has been solved in [30, 44]. The following result characterizes all such matrices.

Theorem 1.2.8. [44] Let x ∈ K^n \ {0} and b ∈ K^n, where K ∈ {R, C}. Then there exists ∆ ∈ K^{n×n} satisfying ∆x = b if and only if ∆ is of the form

∆ = bx^† + Z(I_n − xx^†),

where Z ∈ K^{n×n} is arbitrary and x^† = x^*/‖x‖² is the Moore-Penrose pseudoinverse of x. Further, the minimal spectral norm and Frobenius norm of such ∆ is ‖b‖/‖x‖ and is attained by

∆ = bx^†.
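Theorem 1.2.8 is straightforward to verify numerically. The following sketch (assuming NumPy and random data; all variable names are illustrative) checks the parametrization and the minimal-norm claim.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Moore-Penrose pseudoinverse of the column vector x: x^† = x^*/||x||^2 (a row vector).
x_pinv = x.conj() / np.linalg.norm(x) ** 2

# Minimal-norm solution Delta = b x^†.
Delta_min = np.outer(b, x_pinv)
assert np.allclose(Delta_min @ x, b)

# General solution Delta = b x^† + Z (I - x x^†) for an arbitrary Z.
Z = rng.standard_normal((n, n))
Delta = Delta_min + Z @ (np.eye(n) - np.outer(x, x_pinv))
assert np.allclose(Delta @ x, b)

# Both the spectral and Frobenius norms of the rank-one Delta_min equal ||b||/||x||.
c = np.linalg.norm(b) / np.linalg.norm(x)
assert np.isclose(np.linalg.norm(Delta_min, 2), c)
assert np.isclose(np.linalg.norm(Delta_min, 'fro'), c)
```

Since bx^† has rank one, its spectral and Frobenius norms coincide, which is why the single value ‖b‖/‖x‖ is minimal for both norms.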

Complete solutions of more general mapping problems, in which ∆ is restricted to have additional structure, are presented in [33]. In this section we state a few results for such mapping problems that are used repeatedly in the thesis, with proofs where appropriate.

Consider the following Hermitian mapping problem.

Under which conditions on vectors x, y ∈ C^n does there exist an H ∈ Herm(n) satisfying Hx = y?

The answer to this problem is well known; see, e.g., [33], where solutions that are minimal with respect to the spectral or Frobenius norm are also characterized. We also refer to [27] and [41] for the more general problem of the existence of a Hermitian matrix H ∈ C^{n×n} such that HX = Y for two matrices X, Y ∈ C^{n×m}. The following theorem gives an answer to the Hermitian mapping problem in terms that allow a direct application in this thesis.

Theorem 1.2.9. Let x, y ∈ C^n, x ≠ 0. Then there exists a matrix H ∈ Herm(n) such that Hx = y if and only if Im(x^*y) = 0. If the latter condition is satisfied, then

min{ ‖H‖ : H ∈ Herm(n), Hx = y } = ‖y‖/‖x‖

and the minimum is attained for

H₀ := (‖y‖/‖x‖) [ y/‖y‖  x/‖x‖ ] [ (x^*y)/(‖x‖‖y‖)  1 ; 1  (x^*y)/(‖x‖‖y‖) ]^{−1} [ y/‖y‖  x/‖x‖ ]^*   (1.2.10)

if x and y are linearly independent, and for H₀ := yx^*/(x^*x) otherwise.

Proof. The identity Hx = y immediately implies Im(x^*y) = Im(x^*Hx) = 0, because H is Hermitian, and

‖H‖ ≥ ‖Hx‖/‖x‖ = ‖y‖/‖x‖ =: c.

In particular, this proves the "only if" part of the statement of the theorem.

Conversely, let Im(x^*y) = 0. Suppose first that x and y are linearly independent. Then H₀ given as in (1.2.10) is well defined and Hermitian, and we immediately obtain

H₀ [ x/‖x‖  y/‖y‖ ] = (‖y‖/‖x‖) [ y/‖y‖  x/‖x‖ ],

which implies H₀x = y and H₀y = c²x. Thus, y ± cx are eigenvectors of H₀ associated with the eigenvalues ±c, respectively, which implies ‖H₀‖ = c.

On the other hand, if x and y are linearly dependent, then y = αx with α ∈ R. Then the matrix H₀ = αxx^*/‖x‖² is Hermitian and satisfies H₀x = y, and since H₀ has rank 1, ‖H₀‖ = c.
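A numerical sketch of formula (1.2.10) (assuming NumPy; y is generated as Hx for a random Hermitian H so that Im(x^*y) = 0 holds and x, y are generically independent):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (W + W.conj().T) / 2             # random Hermitian matrix
y = H @ x                            # guarantees Im(x^* y) = 0
assert np.isclose((x.conj() @ y).imag, 0)

nx, ny = np.linalg.norm(x), np.linalg.norm(y)
c0 = (x.conj() @ y).real / (nx * ny)
B = np.column_stack([y / ny, x / nx])
M = np.array([[c0, 1.0], [1.0, c0]])
H0 = (ny / nx) * B @ np.linalg.inv(M) @ B.conj().T   # formula (1.2.10)

assert np.allclose(H0, H0.conj().T)                  # Hermitian
assert np.allclose(H0 @ x, y)                        # maps x to y
assert np.isclose(np.linalg.norm(H0, 2), ny / nx)    # minimal spectral norm
```

The 2×2 middle matrix is singular exactly when |x^*y| = ‖x‖‖y‖, i.e. when x and y are parallel, which is the case handled separately by H₀ = yx^*/(x^*x).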

The following result gives a minimal spectral norm Hermitian map that is also of minimal rank and minimal Frobenius norm.

Theorem 1.2.10. [33] Let x, b ∈ K^n \ {0} and M = {A ∈ Herm(n) | Ax = b}. Assume further that x, b are vectors such that M is nonempty. Consider the following mapping:

A := (‖b‖/‖x‖) U [ R  0 ; 0  0 ] U^*,

where R := sgn(µ) if b = µx for some µ ∈ K, and R := [ 1  0 ; 0  −1 ] otherwise, and U can be taken as the product of at most two unitary Householder reflectors; when K = R, U is real orthogonal. Then A is the unique solution of the minimal rank problem min_{A∈M} rank(A) and the minimal Frobenius norm problem min_{A∈M} ‖A‖_F. Moreover,

(1) ‖A‖ = ‖b‖/‖x‖;

(2) ‖A‖_F = ‖b‖/‖x‖, when A has rank one;

(3) ‖A‖_F = √2 ‖b‖/‖x‖, when A has rank two.
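In the rank-two case, the proof of Theorem 1.2.9 suggests one explicit realization of a matrix of the stated form U diag(1, −1, 0, …, 0) U^* scaled by c = ‖b‖/‖x‖: take the first two columns of U to be the normalized vectors b ± cx, which are orthogonal whenever Im(x^*b) = 0. The sketch below (NumPy, random data) checks that this matrix maps x to b and has the norms and rank stated above; it is an illustration, not the thesis's Householder-based construction of U.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = ((W + W.conj().T) / 2) @ x       # Im(x^* b) = 0, so M is nonempty

c = np.linalg.norm(b) / np.linalg.norm(x)
up = b + c * x
um = b - c * x
up /= np.linalg.norm(up)
um /= np.linalg.norm(um)

# A = c (u+ u+^* - u- u-^*) = (||b||/||x||) U diag(1, -1, 0, ..., 0) U^*.
A = c * (np.outer(up, up.conj()) - np.outer(um, um.conj()))

assert np.allclose(A, A.conj().T)
assert np.allclose(A @ x, b)
assert np.linalg.matrix_rank(A) == 2
assert np.isclose(np.linalg.norm(A, 2), c)
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(2) * c)
```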

The following theorem from [25] contains solutions to the minimal Frobenius and minimal spectral norm mapping problems that are also real matrices.

Theorem 1.2.11. [25] Let (x, y) ∈ (C^l \ {0}) × C^q. Set

∆₀ := [ Re y  Im y ][ Re x  Im x ]^†   if Re x and Im x are linearly independent,
∆₀ := yx^*/‖x‖²                        if Re x and Im x are linearly dependent.

Then

(1) ∆₀ ∈ R^{q×l} and ∆₀x = y.

(2) inf{ ‖∆‖ | ∆ ∈ R^{q×l}, ∆x = y } = ‖∆₀‖.
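For the linearly independent case, the construction of ∆₀ can be sketched as follows (assuming NumPy; random x and y generically have independent real and imaginary parts):

```python
import numpy as np

rng = np.random.default_rng(3)
l, q = 5, 4
x = rng.standard_normal(l) + 1j * rng.standard_normal(l)
y = rng.standard_normal(q) + 1j * rng.standard_normal(q)

# Delta0 = [Re y  Im y] [Re x  Im x]^† (independent case of Theorem 1.2.11).
Xri = np.column_stack([x.real, x.imag])      # l x 2, rank 2 generically
Yri = np.column_stack([y.real, y.imag])      # q x 2
Delta0 = Yri @ np.linalg.pinv(Xri)

assert np.allclose(Delta0.imag, 0)           # real by construction
assert np.allclose(Delta0 @ x, y)            # maps x to y
```

The second assertion holds because [Re x  Im x]^† [Re x  Im x] = I₂ when Re x and Im x are independent, so ∆₀(Re x + i Im x) = Re y + i Im y = y.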

Now consider the following problem. Let X ∈ C^{m×p}, Y ∈ C^{n×p}, Z ∈ C^{n×k} and S ∈ C^{m×k}. Define

S := { ∆ ∈ C^{n×m} | ∆X = Y, ∆^*Z = S }.

Under which conditions on the matrices X, Y, Z and S is S nonempty? Further, if S ≠ ∅, find the minimal spectral norm and minimal Frobenius norm maps from S.

We answer this problem and find matrices in the set S that are minimal with respect to the spectral norm and the Frobenius norm. In doing so, we follow the strategy suggested in [1], which invokes the following theorem of Davis, Kahan and Weinberger [12].

Theorem 1.2.12. [12] Let A, B, C be given matrices. Then for any positive number µ satisfying

µ ≥ max( ‖[ A ; B ]‖, ‖[ A  C ]‖ )   (1.2.11)

there exists D such that ‖[ A  C ; B  D ]‖ ≤ µ. All such matrices D which have this property are exactly of the form

D = −KA^*L + µ(I − KK^*)^{1/2} Z (I − L^*L)^{1/2},

where K = B(µ²I − A^*A)^{−1/2}, L = (µ²I − AA^*)^{−1/2}C, and Z is an arbitrary contraction, that is, ‖Z‖ ≤ 1.
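The central completion (the choice Z = 0) of Theorem 1.2.12 can be sketched numerically as follows (assuming NumPy; µ is taken strictly larger than the bound in (1.2.11) so that the inverse square roots are well defined):

```python
import numpy as np

def inv_sqrt(M):
    """Inverse square root of a Hermitian positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T

rng = np.random.default_rng(4)
p, q, r, s = 3, 4, 2, 3                   # A: p x q, B: r x q, C: p x s
A = rng.standard_normal((p, q))
B = rng.standard_normal((r, q))
C = rng.standard_normal((p, s))

spec = lambda M: np.linalg.norm(M, 2)
mu = 1.1 * max(spec(np.vstack([A, B])), spec(np.hstack([A, C])))

K = B @ inv_sqrt(mu**2 * np.eye(q) - A.conj().T @ A)
L = inv_sqrt(mu**2 * np.eye(p) - A @ A.conj().T) @ C
D = -K @ A.conj().T @ L                   # contraction Z = 0

M = np.block([[A, C], [B, D]])
assert spec(M) <= mu + 1e-10              # the completed matrix respects the bound
```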

Theorem 1.2.13. Let X ∈ C^{m×p}, Y ∈ C^{n×p}, Z ∈ C^{n×k} and S ∈ C^{m×k}. Assume that rank(X) = r₁ and rank(Z) = r₂. Consider the SVDs X = UΣV^* and Z = ÛΣ̂V̂^*, where U = [U₁, U₂] with U₁ ∈ C^{m×r₁}, and Û = [Û₁, Û₂] with Û₁ ∈ C^{n×r₂}. Then

(1) S ≠ ∅ if and only if X^*S = Y^*Z, YX^†X = Y and SZ^†Z = S.

(2) ∆ ∈ S if and only if ∆ is of the form

∆ = YX^† + (SZ^†)^* − (SZ^†)^*XX^† + (I − ZZ^†)R(I − XX^†)

for some R ∈ C^{n×m}.

(3) Frobenius norm: The matrix

G(X, Y, Z, S) := YX^† + (SZ^†)^* − (SZ^†)^*XX^†   (1.2.12)

is the unique map such that G(X, Y, Z, S)X = Y and G(X, Y, Z, S)^*Z = S with

inf_{∆∈S} ‖∆‖²_F = ‖G(X, Y, Z, S)‖²_F = ‖YX^†‖²_F + ‖SZ^†‖²_F − Trace((SZ^†)(SZ^†)^*XX^†).

(4) Spectral norm: We have

inf_{∆∈S} ‖∆‖ = max{ ‖YX^†‖, ‖SZ^†‖ } =: µ.   (1.2.13)

Indeed, the infimum in (1.2.13) is attained by the matrix

F(X, Y, Z, S) := YX^† + (SZ^†)^* − (SZ^†)^*XX^† + (I − ZZ^†)Û₂RU₂^*(I − XX^†),   (1.2.14)

where

R = −K(U₁^*(YX^†)^*Û₁)L + µ(I − KK^*)^{1/2}P(I − L^*L)^{1/2},
K = (Û₂^*YX^†U₁)(µ²I − U₁^*(YX^†)^*ZZ^†(YX^†)U₁)^{−1/2},
L = (µ²I − Û₁^*(YX^†)(YX^†)^*Û₁)^{−1/2}(Û₁^*(SZ^†)^*U₂),

and P is an arbitrary contraction.

Proof. Proof of (1): Suppose that S ≠ ∅. Then there exists ∆ ∈ C^{n×m} such that ∆X = Y and ∆^*Z = S. This implies that X^*S = X^*∆^*Z = (∆X)^*Z = Y^*Z. Also, the ranges of Y^* and S^* are contained in those of X^* and Z^*, respectively. Since X^†X and Z^†Z are the orthogonal projections onto the ranges of X^* and Z^*, respectively, it follows that YX^†X = Y and SZ^†Z = S.

Now suppose that X^*S = Y^*Z, YX^†X = Y and SZ^†Z = S. Then S ≠ ∅ because, as a direct computation with these three conditions shows, for any R ∈ C^{n×m} the matrix

A = YX^† + (SZ^†)^* − (SZ^†)^*XX^† + (I − ZZ^†)R(I − XX^†)

belongs to S.

Proof of (2): Let Σ₁ ∈ R^{r₁×r₁} be the principal submatrix of Σ formed by its first r₁ rows and columns. Similarly, let Σ̂₁ ∈ R^{r₂×r₂} be the principal submatrix of Σ̂ formed by its first r₂ rows and columns. Also, let V = [V₁, V₂] and V̂ = [V̂₁, V̂₂] be the conformal partitions of V and V̂, respectively, such that X = U₁Σ₁V₁^* and Z = Û₁Σ̂₁V̂₁^* are condensed SVDs of X and Z, respectively. Let ∆ ∈ S. If R(X) and R(Z) denote the ranges of X and Z, respectively, then ∆ is a linear map ∆ : R(X) ⊕ R(X)^⊥ → R(Z) ⊕ R(Z)^⊥ and ∆ = ÛÛ^*∆UU^*. Set

∆̂ := Û^*∆U = [ A₁₁  A₁₂ ; A₂₁  A₂₂ ],

where A₁₁ ∈ C^{r₂×r₁}, A₁₂ ∈ C^{r₂×(m−r₁)}, A₂₁ ∈ C^{(n−r₂)×r₁} and A₂₂ ∈ C^{(n−r₂)×(m−r₁)}. Then ‖∆‖_F = ‖∆̂‖_F and ‖∆‖ = ‖∆̂‖.

Also,

∆X = Y ⇒ Û∆̂U^*X = Y ⇒ ∆̂U^*X = Û^*Y ⇒ [ A₁₁  A₁₂ ; A₂₁  A₂₂ ][ Σ₁V₁^* ; 0 ] = [ Û₁^*Y ; Û₂^*Y ].

Thus

A₁₁ = Û₁^*YV₁Σ₁^{−1} and A₂₁ = Û₂^*YV₁Σ₁^{−1}.   (1.2.15)

Also,

∆^*Z = S ⇒ U∆̂^*Û^*Z = S ⇒ ∆̂^*Û^*Z = U^*S ⇒ [ A₁₁^*  A₂₁^* ; A₁₂^*  A₂₂^* ][ Σ̂₁V̂₁^* ; 0 ] = [ U₁^*S ; U₂^*S ].

This gives

A₁₁ = Σ̂₁^{−1}V̂₁^*S^*U₁ and A₁₂ = Σ̂₁^{−1}V̂₁^*S^*U₂.   (1.2.16)

From (1.2.15) and (1.2.16), Û₁^*YV₁Σ₁^{−1} = Σ̂₁^{−1}V̂₁^*S^*U₁; note that the latter equality holds because X^*S = Y^*Z. Therefore, we can write

∆̂ = [ Û₁^*YV₁Σ₁^{−1}  Σ̂₁^{−1}V̂₁^*S^*U₂ ; Û₂^*YV₁Σ₁^{−1}  A₂₂ ] = [ Û₁^*YX^†U₁  Û₁^*(SZ^†)^*U₂ ; Û₂^*YX^†U₁  A₂₂ ].   (1.2.17)

Thus

∆ = Û∆̂U^* = Û₁Û₁^*YX^†U₁U₁^* + Û₂Û₂^*YX^†U₁U₁^* + Û₁Û₁^*(SZ^†)^*U₂U₂^* + Û₂A₂₂U₂^*
  = Û₁Û₁^*YX^†U₁U₁^* + (I − Û₁Û₁^*)YX^†U₁U₁^* + Û₁Û₁^*(SZ^†)^*(I − U₁U₁^*) + Û₂A₂₂U₂^*
  = YX^† + (SZ^†)^* − (SZ^†)^*XX^† + (I − ZZ^†)Û₂A₂₂U₂^*(I − XX^†).   (1.2.18)

Proof of (3): Note that, in view of (1.2.17),

‖∆‖²_F = ‖∆̂‖²_F = ‖[ Û₁^*YX^†U₁ ; Û₂^*YX^†U₁ ]‖²_F + ‖Û₁^*(SZ^†)^*U₂‖²_F + ‖A₂₂‖²_F
       = ‖YX^†‖²_F + ‖SZ^†‖²_F − Trace((SZ^†)(SZ^†)^*XX^†) + ‖A₂₂‖²_F.

Thus, by setting A₂₂ = 0 in (1.2.18), we obtain the unique map from S which minimizes the Frobenius norm, i.e.,

inf_{∆∈S} ‖∆‖²_F = ‖YX^†‖²_F + ‖SZ^†‖²_F − Trace((SZ^†)(SZ^†)^*XX^†).

Proof of (4): Again consider the map ∆̂ given in (1.2.17) and set

µ := max{ ‖[ Û₁^*YX^†U₁ ; Û₂^*YX^†U₁ ]‖, ‖[ Û₁^*YX^†U₁  Û₁^*(SZ^†)^*U₂ ]‖ } = max{ ‖YX^†‖, ‖SZ^†‖ }.

Then it follows that for any ∆ ∈ S, ‖∆‖ = ‖∆̂‖ ≥ µ. Now, by Theorem 1.2.12 applied to the blocks of (1.2.17), there is a choice of A₂₂ with ‖∆̂‖ = µ, i.e., inf_{∆∈S} ‖∆‖ = µ, which is attained by

A₂₂ = −K(U₁^*(YX^†)^*Û₁)L + µ(I − KK^*)^{1/2}P(I − L^*L)^{1/2},

where

K = (Û₂^*YX^†U₁)(µ²I − U₁^*(YX^†)^*ZZ^†(YX^†)U₁)^{−1/2},
L = (µ²I − Û₁^*(YX^†)(YX^†)^*Û₁)^{−1/2}(Û₁^*(SZ^†)^*U₂),

and P is an arbitrary contraction. Hence the proof follows by setting R = A₂₂.
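The Frobenius-optimal map G of (1.2.12) is easy to exercise numerically. In the sketch below (assuming NumPy), compatible data are generated from a random ∆ so that S is nonempty; the matrix names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p, k = 6, 5, 2, 2
X = rng.standard_normal((m, p)) + 1j * rng.standard_normal((m, p))
Z = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
D = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
Y, S = D @ X, D.conj().T @ Z                 # guarantees that the set S is nonempty

Xp, Zp = np.linalg.pinv(X), np.linalg.pinv(Z)
assert np.allclose(X.conj().T @ S, Y.conj().T @ Z)   # compatibility condition of (1)

# Minimal Frobenius norm map G(X, Y, Z, S) of (1.2.12):
G = Y @ Xp + (S @ Zp).conj().T - (S @ Zp).conj().T @ X @ Xp

assert np.allclose(G @ X, Y)
assert np.allclose(G.conj().T @ Z, S)
fro2 = (np.linalg.norm(Y @ Xp, 'fro')**2 + np.linalg.norm(S @ Zp, 'fro')**2
        - np.trace((S @ Zp) @ (S @ Zp).conj().T @ X @ Xp).real)
assert np.isclose(np.linalg.norm(G, 'fro')**2, fro2)
# G is Frobenius-minimal in S, and D is some element of S:
assert np.linalg.norm(G, 'fro') <= np.linalg.norm(D, 'fro') + 1e-10
```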

A particular case of Theorem 1.2.13 when p = k = 1 is obtained in [24, Theorem 2]. We restate the result for this case and include explicit formulae which give minimal Frobenius norm and minimal spectral norm solutions that are also of minimal rank.

For u ∈ C^m, r ∈ C^n \ {0}, w ∈ C^n and s ∈ C^m \ {0}, define

Ŝ := { ∆ ∈ C^{n×m} | rank(∆) ≤ 2, ∆u = r, ∆^*w = s }.

Theorem 1.2.14. Let u ∈ C^m, r ∈ C^n \ {0}, w ∈ C^n and s ∈ C^m \ {0} be such that ‖u‖ = 1 and ‖w‖ = 1. Define δ := s^*u, x := r − δw and y := s − δ̄u. Then

(1) Ŝ ≠ ∅ if and only if u^*s = r^*w.

(2) Ŝ = { E_β ∈ C^{n×m} | E_β = xu^* + wy^* + δwu^* − βxy^*, β ∈ C }.

(3) Frobenius norm:

min_{∆∈Ŝ} ‖∆‖²_F = ‖r‖² + ‖s‖² − |s^*u|².   (1.2.19)

Moreover, (1.2.19) has a unique solution from Ŝ given by

∆̃ := [ w  x/‖x‖ ][ s^*u  ‖y‖ ; ‖x‖  0 ][ u  y/‖y‖ ]^*

if ‖r‖² − |δ|² ≠ 0 and ‖s‖² − |δ|² ≠ 0. If ‖r‖² = |δ|², then (1.2.19) is attained by ∆̂ := ws^*, and by ∆̂ := ru^* if ‖s‖² = |δ|².

(4) Spectral norm:

min_{∆∈Ŝ} ‖∆‖ = max{ ‖r‖, ‖s‖ }.   (1.2.20)

Moreover, if ‖r‖ ≥ ‖s‖, then the minimum in (1.2.20) is attained by

∆̂ := [ w  x/‖x‖ ][ s^*u  ‖y‖ ; ‖x‖  −(u^*s)‖y‖/‖x‖ ][ u  y/‖y‖ ]^*

if ‖r‖² − |δ|² ≠ 0, and by ∆̂ := ws^* if ‖r‖² = |δ|². If ‖r‖ ≤ ‖s‖, then the minimum in (1.2.20) is attained by

∆̂ := [ w  x/‖x‖ ][ s^*u  ‖y‖ ; ‖x‖  −(u^*s)‖x‖/‖y‖ ][ u  y/‖y‖ ]^*

if ‖s‖² − |δ|² ≠ 0, and by ∆̂ := ru^* if ‖s‖² = |δ|².
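The rank-two Frobenius-minimal map of part (3) can be sketched as follows (assuming NumPy; r and s are generated from a random ∆ so that the compatibility condition u^*s = r^*w holds automatically):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 6, 5
u = rng.standard_normal(m) + 1j * rng.standard_normal(m); u /= np.linalg.norm(u)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n); w /= np.linalg.norm(w)
D = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
r, s = D @ u, D.conj().T @ w                 # compatible data: u^* s = r^* w

delta = s.conj() @ u                         # delta = s^* u
x = r - delta * w                            # x is orthogonal to w
y = s - delta.conjugate() * u                # y is orthogonal to u

# Unique minimal Frobenius norm map of (3), generically of rank two:
Wmat = np.column_stack([w, x / np.linalg.norm(x)])
Umat = np.column_stack([u, y / np.linalg.norm(y)])
Mid = np.array([[delta, np.linalg.norm(y)], [np.linalg.norm(x), 0]])
Dt = Wmat @ Mid @ Umat.conj().T

assert np.allclose(Dt @ u, r)
assert np.allclose(Dt.conj().T @ w, s)
assert np.linalg.matrix_rank(Dt) <= 2
fro2 = np.linalg.norm(r)**2 + np.linalg.norm(s)**2 - abs(delta)**2
assert np.isclose(np.linalg.norm(Dt, 'fro')**2, fro2)
```

Since the outer factors have orthonormal columns, ‖∆̃‖_F equals the Frobenius norm of the 2×2 middle matrix, which is exactly the right-hand side of (1.2.19).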

Note that we will restate Theorem 1.2.14 in Chapter 3 (Theorem 3.1.3, page 44) and give a simplified proof for the case when w = u. This has a direct application in finding backward errors of palindromic matrix polynomials.

Lemma 1.2.15. [7] Let ∆₁, ∆₂ ∈ C^{n×p}. Then [∆₁, ∆̄₁][∆₂, ∆̄₂]^† is a real matrix.

In view of the above lemma, the following corollary of Theorem 1.2.13 gives a real minimal Frobenius norm solution of the mapping problem considered in that theorem.

Corollary 1.2.16. Let X ∈ C^{m×p}, Y ∈ C^{n×p}, Z ∈ C^{n×k} and S ∈ C^{m×k} be such that rank([X, X̄]) = 2p and rank([Z, Z̄]) = 2k. Then there exists a real matrix ∆ ∈ R^{n×m} such that ∆X = Y and ∆^*Z = S if and only if X^*S = Y^*Z and X^TS = Y^TZ. In this case, the matrix

G_R := G([X, X̄], [Y, Ȳ], [Z, Z̄], [S, S̄]),

obtained by replacing X, Y, Z and S by [X, X̄], [Y, Ȳ], [Z, Z̄] and [S, S̄], respectively, in (1.2.12), is a real matrix with G_RX = Y and G_R^*Z = S. Furthermore, among all real matrices ∆ with ∆X = Y and ∆^*Z = S, the matrix G_R has the smallest Frobenius norm.

Proof. If ∆ is any real matrix with ∆X = Y and ∆^*Z = S, then ∆X̄ = Ȳ and ∆^*Z̄ = S̄. Hence ∆[X, X̄] = [Y, Ȳ] and ∆^*[Z, Z̄] = [S, S̄]. By Theorem 1.2.13, such a ∆ exists iff the conditions [X, X̄]^*[S, S̄] = [Y, Ȳ]^*[Z, Z̄], [Y, Ȳ][X, X̄]^†[X, X̄] = [Y, Ȳ] and [S, S̄][Z, Z̄]^†[Z, Z̄] = [S, S̄] are satisfied. The first condition is equivalent to X^*S = Y^*Z and X^TS = Y^TZ, and since rank([X, X̄]) = 2p and rank([Z, Z̄]) = 2k, the last two conditions are always satisfied. Therefore, there exists ∆ ∈ R^{n×m} such that ∆X = Y and ∆^*Z = S if and only if X^*S = Y^*Z and X^TS = Y^TZ.

Clearly, if the given conditions hold, then the matrix G_R satisfies G_R[X, X̄] = [Y, Ȳ] and G_R^*[Z, Z̄] = [S, S̄], and in particular G_RX = Y and G_R^*Z = S. Moreover, by Theorem 1.2.13, among all matrices ∆ such that ∆[X, X̄] = [Y, Ȳ] and ∆^*[Z, Z̄] = [S, S̄], G_R has the smallest Frobenius norm. The fact that G_R is real follows from Lemma 1.2.15.
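The augmentation trick of Corollary 1.2.16 can be sketched numerically (assuming NumPy; data are generated from a random real ∆ so that a real solution exists and the rank conditions hold generically):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, p, k = 6, 5, 2, 2
X = rng.standard_normal((m, p)) + 1j * rng.standard_normal((m, p))
Z = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
D = rng.standard_normal((n, m))              # a REAL matrix, so real solutions exist
Y, S = D @ X, D.T @ Z

# Augment with conjugates and apply the Frobenius-optimal map (1.2.12).
Xa, Ya = np.hstack([X, X.conj()]), np.hstack([Y, Y.conj()])
Za, Sa = np.hstack([Z, Z.conj()]), np.hstack([S, S.conj()])
Xp, Zp = np.linalg.pinv(Xa), np.linalg.pinv(Za)
GR = Ya @ Xp + (Sa @ Zp).conj().T - (Sa @ Zp).conj().T @ Xa @ Xp

assert np.allclose(GR.imag, 0)               # GR is real (Lemma 1.2.15)
assert np.allclose(GR @ X, Y)
assert np.allclose(GR.conj().T @ Z, S)
# GR is Frobenius-minimal among real solutions, and D is one such solution:
assert np.linalg.norm(GR, 'fro') <= np.linalg.norm(D, 'fro') + 1e-10
```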

1.2.4 Minimizing the maximal eigenvalue of a Hermitian matrix