Structured inverse least squares problem

where $D^H = -D \in \mathbb{C}^{k\times k}$ and $Z^H = -Z \in \mathbb{C}^{(n-k)\times(n-k)}$.

We mention that the results obtained above can easily be extended to the case when $\mathbb{S} \in \{\mathbb{J}, \mathbb{L}\}$.

Theorem 2.5.2. Let $\mathbb{S} = \mathrm{Herm}$. Then for any $x, b \in \mathbb{C}^n$ we have $\alpha_{\mathbb{S}} = \dfrac{|\mathrm{im}(x^H b)|}{\|x\|}$.

Proof: Assume that $x, b \in \mathbb{C}^n$. Construct a unitary matrix $Q = [x/\|x\| \;\; Q_1] \in \mathbb{C}^{n\times n}$ such that $Q_1^H x = 0$. Then construct the Hermitian matrix
$$A = Q \begin{bmatrix} a_{11} & a_1^H \\ a_1 & A_1 \end{bmatrix} Q^H.$$
Then we obtain
$$\min_{A\in\mathbb{S}} \|Ax - b\|_F^2
= \min_{A\in\mathbb{S}} \left\| \begin{bmatrix} a_{11}\|x\| - x^H b/\|x\| \\ a_1\|x\| - Q_1^H b \end{bmatrix} \right\|_F^2
= \big|a_{11}\|x\| - x^H b/\|x\|\big|^2 + \min_{a_1 \in \mathbb{C}^{n-1}} \|a_1\|x\| - Q_1^H b\|_F^2.$$
Choose $a_1 = Q_1^H b/\|x\|$. Next, $\big|a_{11}\|x\| - x^H b/\|x\|\big|^2$ is minimized over $a_{11} \in \mathbb{R}$ when $a_{11} = \mathrm{re}(x^H b)/\|x\|^2$. Consequently, we obtain $\alpha_{\mathbb{S}} = |\mathrm{im}(x^H b)|/\|x\|$. Next we have
$$A = Q \begin{bmatrix} \mathrm{re}(x^H b)/\|x\|^2 & (Q_1^H b/\|x\|)^H \\ Q_1^H b/\|x\| & A_1 \end{bmatrix} Q^H
= \frac{\mathrm{re}(x^H b)}{\|x\|^4}\, x x^H + \frac{1}{\|x\|^2}\big[x b^H (I - xx^H) + (I - xx^H) b x^H\big] + (I - xx^H) Z (I - xx^H),$$
where $Z = Z^H \in \mathbb{C}^{n\times n}$. $\blacksquare$
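The closed-form solution above is easy to check numerically. The sketch below (NumPy; the free Hermitian part $Z$ is set to zero, and $I - xx^H$ is taken as the orthogonal projector $I - xx^H/\|x\|^2$ so that $x$ need not be normalized) builds the minimizing Hermitian $A$ and verifies that its residual equals $|\mathrm{im}(x^H b)|/\|x\|$ and is not beaten by random Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

nx = np.linalg.norm(x)
P = np.eye(n) - np.outer(x, x.conj()) / nx**2    # projector onto span(x)^perp

# Minimizing Hermitian A from Theorem 2.5.2 (free part Z set to 0):
A = (np.vdot(x, b).real / nx**4) * np.outer(x, x.conj()) \
    + (np.outer(x, b.conj()) @ P + P @ np.outer(b, x.conj())) / nx**2
assert np.allclose(A, A.conj().T)                # A is Hermitian

res = np.linalg.norm(A @ x - b)
alpha = abs(np.vdot(x, b).imag) / nx             # claimed minimum
assert np.isclose(res, alpha)

# Spot check: random Hermitian matrices never beat alpha.
for _ in range(100):
    H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (H + H.conj().T) / 2
    assert np.linalg.norm(H @ x - b) >= alpha - 1e-12
```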

In a similar fashion we can obtain the solution of SILSP for skew-Hermitian matrices as follows.

Theorem 2.5.3. Let $\mathbb{S} = \mathrm{skew\text{-}Herm}$. Then for any $x, b \in \mathbb{C}^n$ we have $\alpha_{\mathbb{S}} = \dfrac{|\mathrm{re}(x^H b)|}{\|x\|}$.

Proof: The proof is similar to that of Theorem 2.5.2. $\blacksquare$

Now assume that $\mathbb{S} = \mathbb{J}$ or $\mathbb{S} = \mathbb{L}$, where $\mathbb{J}$ and $\mathbb{L}$ are the Jordan algebra and the Lie algebra corresponding to an orthosymmetric bilinear/sesquilinear scalar product $\langle\cdot,\cdot\rangle_M$. Then it is evident from Theorem 2.5.1, Theorem 2.5.2 and Theorem 2.5.3 that whenever $\mathcal{S}(x, b) = \emptyset$ for given $x, b \in \mathbb{C}^n \setminus \{0\}$, we have

$$\min_{A\in\mathbb{S}} \|Ax - b\| =
\begin{cases}
\dfrac{|\langle x, b\rangle_M|}{\|x\|}, & \text{if } (MA)^T = -MA, \\[2mm]
\dfrac{|\mathrm{im}\,\langle x, b\rangle_M|}{\|x\|}, & \text{if } (MA)^H = MA, \\[2mm]
\dfrac{|\mathrm{re}\,\langle x, b\rangle_M|}{\|x\|}, & \text{if } (MA)^H = -MA.
\end{cases}$$

Now we consider SILSP for matrices. Before that we prove the following result which will be used in the subsequent development.

Lemma 2.5.4. Let $\alpha, \beta > 0$ and $b_1, b_2 \in \mathbb{C}$. Then $\min_{x\in\mathbb{C}} \big(|x\alpha - b_1|^2 + |x\beta - b_2|^2\big)$ is attained at $x = \dfrac{\alpha b_1 + \beta b_2}{\alpha^2 + \beta^2}$.

Proof: Assume that $x = x_1 + i x_2$ and $b_1 = b_{11} + i b_{12}$, $b_2 = b_{21} + i b_{22} \in \mathbb{C}$. Then define
$$\varphi(x_1, x_2) = |x\alpha - b_1|^2 + |x\beta - b_2|^2
= (\alpha^2+\beta^2)(x_1^2+x_2^2) + (b_{11}^2+b_{12}^2) + (b_{21}^2+b_{22}^2) - 2\alpha(x_1 b_{11} + x_2 b_{12}) - 2\beta(x_1 b_{21} + x_2 b_{22}).$$
Setting $\dfrac{\partial\varphi(x_1, x_2)}{\partial x_i} = 0$, $i = 1, 2$, we obtain the stationary point
$$x_1 = \frac{\alpha b_{11} + \beta b_{21}}{\alpha^2+\beta^2}, \qquad x_2 = \frac{\alpha b_{12} + \beta b_{22}}{\alpha^2+\beta^2}.$$
This gives the relative minimum, since the Hessian matrix
$$\begin{bmatrix} \dfrac{\partial^2\varphi}{\partial x_1^2} & \dfrac{\partial^2\varphi}{\partial x_1 \partial x_2} \\[2mm] \dfrac{\partial^2\varphi}{\partial x_2 \partial x_1} & \dfrac{\partial^2\varphi}{\partial x_2^2} \end{bmatrix}
= \begin{bmatrix} 2(\alpha^2+\beta^2) & 0 \\ 0 & 2(\alpha^2+\beta^2) \end{bmatrix}$$
is positive definite. Consequently we obtain the desired result. $\blacksquare$
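Since Lemma 2.5.4 is a two-variable quadratic minimization, it can be sanity-checked numerically; the values of $\alpha, \beta, b_1, b_2$ below are arbitrary test data:

```python
# Arbitrary test data: alpha, beta > 0 real, b1, b2 complex.
alpha, beta = 1.5, 0.7
b1, b2 = 2.0 + 1.0j, -0.5 + 3.0j

def phi(x):
    # Objective of Lemma 2.5.4 as a function of complex x.
    return abs(x * alpha - b1) ** 2 + abs(x * beta - b2) ** 2

# Closed-form minimizer from Lemma 2.5.4.
x_star = (alpha * b1 + beta * b2) / (alpha ** 2 + beta ** 2)

# Nearby perturbed points never do better than the closed-form minimizer.
for dx in (0.05, -0.05, 0.05j, -0.05j, 0.03 - 0.02j):
    assert phi(x_star) <= phi(x_star + dx)
```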

Theorem 2.5.5. Let $\mathbb{S}$ be the space of symmetric matrices and $X, B \in \mathbb{K}^{n\times k}$ with $\mathrm{rank}(X) = r$. Assume the SVD $X = U\Sigma V^H$, where $\Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}$, $\Sigma_1 = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$, $\sigma_1 > \cdots > \sigma_r > 0$, $U = [U_1 \; U_2]$, $V = [V_1 \; V_2]$. Then
$$\min_{A\in\mathbb{S}} \|AX - B\|_F^2 = \|K \circ (U_1^T B V_1 \Sigma_1 + \Sigma_1 V_1^T B^T U_1)\Sigma_1 - U_1^T B V_1\|_F^2 + \|U_1^T B V_2\|_F^2 + \|U_2^T B V_2\|_F^2$$
is attained at
$$A = U_1[K \circ (U_1^T B V_1 \Sigma_1 + \Sigma_1 V_1^T B^T U_1)]U_1^H + (B X^{\dagger})^T(I - XX^{\dagger}) + (I - XX^{\dagger})^T B X^{\dagger} + (I - XX^{\dagger})^T Z (I - XX^{\dagger}),$$
where $Z^T = Z \in \mathbb{C}^{n\times n}$, $K = [k_{ij}]$, $k_{ij} = \dfrac{1}{\sigma_i^2 + \sigma_j^2}$, and $\circ$ denotes the Hadamard product.

Proof: Assume that $X, B \in \mathbb{K}^{n\times k}$ with $\mathrm{rank}(X) = r$, and consider the SVD $X = U\Sigma V^H$. Define a symmetric linear map $A : \mathrm{Range}(X) \oplus \mathrm{Range}(X)^{\perp} \to \mathrm{Range}(X) \oplus \mathrm{Range}(X)^{\perp}$. From the SVD of $X$ given above it is easily seen that the columns of $U_1$ and $U_2$ form bases of $\mathrm{Range}(X)$ and $\mathrm{Range}(X)^{\perp}$, respectively. The block matrix representation of $A$ is of the form
$$A = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} A_{11} & A_{12}^T \\ A_{12} & A_{22} \end{bmatrix} \begin{bmatrix} U_1^H \\ U_2^H \end{bmatrix},$$
where $A_{11}, A_{12}, A_{22}$ are of compatible sizes. Consequently we obtain
$$\|AX - B\|_F^2 = \|U^T A U\, U^H X - U^T B\|_F^2
= \left\| \begin{bmatrix} A_{11} & A_{12}^T \\ A_{12} & A_{22} \end{bmatrix} \begin{bmatrix} U_1^H X \\ 0 \end{bmatrix} - \begin{bmatrix} U_1^T B \\ U_2^T B \end{bmatrix} \right\|_F^2
= \|A_{11} U_1^H X - U_1^T B\|_F^2 + \|A_{12} U_1^H X - U_2^T B\|_F^2.$$

Then
$$\min_{A_{12}\in\mathbb{C}^{(n-r)\times r}} \|A_{12} U_1^H X - U_2^T B\|_F^2
= \min_{A_{12}\in\mathbb{C}^{(n-r)\times r}} \|A_{12} U_1^H U\Sigma V^H - U_2^T B\|_F^2
= \min_{A_{12}\in\mathbb{C}^{(n-r)\times r}} \|A_{12} U_1^H U\Sigma - U_2^T B V\|_F^2
= \min_{A_{12}\in\mathbb{C}^{(n-r)\times r}} \|A_{12}\Sigma_1 - U_2^T B V_1\|_F^2 + \|U_2^T B V_2\|_F^2.$$
Moreover, $\|A_{12}\Sigma_1 - U_2^T B V_1\|_F^2$ is minimized if and only if $A_{12}\Sigma_1 - U_2^T B V_1 = 0$, i.e., if and only if $A_{12} = U_2^T B V_1 \Sigma_1^{-1}$. Similarly we obtain
$$\min_{A_{11}\in\mathbb{C}^{r\times r},\, A_{11}^T = A_{11}} \|A_{11} U_1^H X - U_1^T B\|_F^2
= \min_{A_{11}\in\mathbb{C}^{r\times r},\, A_{11}^T = A_{11}} \|A_{11}\Sigma_1 - U_1^T B V_1\|_F^2 + \|U_1^T B V_2\|_F^2.$$
Further, assume that $A_{11} = [a_{ij}]$ and $U_1^T B V_1 = [b_{ij}]$. Consequently we have
$$\|A_{11}\Sigma_1 - U_1^T B V_1\|_F^2
= \sum_{1\le j\le i\le r} \big(|a_{ij}\sigma_i - b_{ij}|^2 + |a_{ji}\sigma_j - b_{ji}|^2\big)
= \sum_{1\le j\le i\le r} \big(|a_{ij}\sigma_i - b_{ij}|^2 + |a_{ij}\sigma_j - b_{ji}|^2\big).$$

The desired minimum can be obtained by minimizing $|a_{ij}\sigma_i - b_{ij}|^2 + |a_{ij}\sigma_j - b_{ji}|^2$ for all $i, j = 1 : r$. By Lemma 2.5.4 the minimum is attained at
$$a_{ij} = \frac{\sigma_i b_{ij} + \sigma_j b_{ji}}{\sigma_i^2 + \sigma_j^2}, \qquad a_{ij} = a_{ji}, \quad \forall\, i, j = 1 : r.$$
Hence $\|A_{11}\Sigma_1 - U_1^T B V_1\|_F^2$ is minimized by
$$A_{11} = K \circ (U_1^T B V_1 \Sigma_1 + \Sigma_1 V_1^T B^T U_1),$$
where $K = [k_{ij}]$, $k_{ij} = \dfrac{1}{\sigma_i^2 + \sigma_j^2}$, and $\circ$ denotes the Hadamard product. Consequently, we obtain

$$A = U \begin{bmatrix} K \circ (U_1^T B V_1 \Sigma_1 + \Sigma_1 V_1^T B^T U_1) & \Sigma_1^{-1} V_1^T B^T U_2 \\ U_2^T B V_1 \Sigma_1^{-1} & A_{22} \end{bmatrix} U^H$$
$$= U_1[K \circ (U_1^T B V_1 \Sigma_1 + \Sigma_1 V_1^T B^T U_1)]U_1^H + U_1 \Sigma_1^{-1} V_1^T B^T U_2 U_2^H + U_2 U_2^T B V_1 \Sigma_1^{-1} U_1^H + U_2 A_{22} U_2^H$$
$$= U_1[K \circ (U_1^T B V_1 \Sigma_1 + \Sigma_1 V_1^T B^T U_1)]U_1^H + (B X^{\dagger})^T(I - XX^{\dagger}) + (I - XX^{\dagger})^T B X^{\dagger} + (I - XX^{\dagger})^T Z (I - XX^{\dagger}), \quad Z^T = Z \in \mathbb{C}^{n\times n},$$
which gives the desired result. $\blacksquare$
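As a sanity check of Theorem 2.5.5, the sketch below (real case, $\mathbb{K} = \mathbb{R}$; the free part $A_{22}$ is set to zero) assembles $A$ from the truncated SVD and compares its residual with the exact minimum over all symmetric matrices, computed independently by linear least squares over the free entries $a_{ij}$, $i \le j$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, r = 5, 4, 3

# Rank-r data matrix X (real case, K = R); B is arbitrary.
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, k))
B = rng.standard_normal((n, k))

# Truncated SVD blocks: X = U diag(s) V^T with r positive singular values.
U, s, Vt = np.linalg.svd(X)
U1, U2, V1, s1 = U[:, :r], U[:, r:], Vt[:r, :].T, s[:r]

# Closed-form blocks from the theorem (free part A22 set to zero):
#   A11 = K o (U1^T B V1 S1 + S1 V1^T B^T U1),  k_ij = 1/(s_i^2 + s_j^2)
#   A12 = U2^T B V1 S1^{-1}
C11 = U1.T @ B @ V1
K = 1.0 / (s1[:, None] ** 2 + s1[None, :] ** 2)
A11 = K * (C11 * s1[None, :] + s1[:, None] * C11.T)
A12 = (U2.T @ B @ V1) / s1[None, :]

M = np.zeros((n, n))
M[:r, :r], M[r:, :r], M[:r, r:] = A11, A12, A12.T
A = U @ M @ U.T
assert np.allclose(A, A.T)                 # A is symmetric

# Independent optimum: least squares over the free entries of a symmetric A.
idx = [(i, j) for i in range(n) for j in range(i, n)]
cols = []
for i, j in idx:
    E = np.zeros((n, n))
    E[i, j] = E[j, i] = 1.0                # symmetric basis matrix E_ij
    cols.append((E @ X).ravel())
D = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(D, B.ravel(), rcond=None)

res_formula = np.linalg.norm(A @ X - B)
res_exact = np.linalg.norm(D @ coef - B.ravel())
assert np.isclose(res_formula, res_exact)  # the formula attains the minimum
```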

Now we consider skew-symmetric matrices.

Theorem 2.5.6. Let $\mathbb{S}$ be the space of skew-symmetric matrices and $X, B \in \mathbb{K}^{n\times k}$ with $\mathrm{rank}(X) = r$. Assume the SVD $X = U\Sigma V^H$, where $\Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}$, $\Sigma_1 = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$, $\sigma_1 > \cdots > \sigma_r > 0$, $U = [U_1 \; U_2]$, $V = [V_1 \; V_2]$. Then
$$\alpha_{\mathbb{S}} = \|K \circ (U_1^T B V_1 \Sigma_1 - \Sigma_1 V_1^T B^T U_1)\Sigma_1 - U_1^T B V_1\|_F^2 + \|U_1^T B V_2\|_F^2 + \|U_2^T B V_2\|_F^2$$
is attained at
$$A = U_1[K \circ (U_1^T B V_1 \Sigma_1 - \Sigma_1 V_1^T B^T U_1)]U_1^H - (B X^{\dagger})^T(I - XX^{\dagger}) + (I - XX^{\dagger})^T B X^{\dagger} + (I - XX^{\dagger})^T Z (I - XX^{\dagger}),$$
where $Z^T = -Z \in \mathbb{C}^{n\times n}$, $K = [k_{ij}]$, $k_{ij} = \dfrac{1}{\sigma_i^2 + \sigma_j^2}$, and $\circ$ denotes the Hadamard product.

Proof: The proof is similar to that of the symmetric case. $\blacksquare$

Next, we consider Hermitian matrices.

Theorem 2.5.7. Let $\mathbb{S}$ be the space of Hermitian matrices and $X, B \in \mathbb{K}^{n\times k}$ with $\mathrm{rank}(X) = r$. Assume the SVD $X = U\Sigma V^H$, where $\Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}$, $\Sigma_1 = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$, $\sigma_1 > \cdots > \sigma_r > 0$, $U = [U_1 \; U_2]$, $V = [V_1 \; V_2]$. Then
$$\alpha_{\mathbb{S}} = \|K \circ (U_1^H B V_1 \Sigma_1 + \Sigma_1 V_1^H B^H U_1)\Sigma_1 - U_1^H B V_1\|_F^2 + \|U_1^H B V_2\|_F^2 + \|U_2^H B V_2\|_F^2$$
is attained at
$$A = U_1[K \circ (U_1^H B V_1 \Sigma_1 + \Sigma_1 V_1^H B^H U_1)]U_1^H + (B X^{\dagger})^H(I - XX^{\dagger}) + (I - XX^{\dagger}) B X^{\dagger} + (I - XX^{\dagger}) Z (I - XX^{\dagger}),$$
where $Z^H = Z \in \mathbb{C}^{n\times n}$, $K = [k_{ij}]$, $k_{ij} = \dfrac{1}{\sigma_i^2 + \sigma_j^2}$, and $\circ$ denotes the Hadamard product.

Proof: Assume that $X, B \in \mathbb{K}^{n\times k}$ with $\mathrm{rank}(X) = r$, and consider the SVD $X = U\Sigma V^H$. Define a Hermitian linear map $A : \mathrm{Range}(X) \oplus \mathrm{Range}(X)^{\perp} \to \mathrm{Range}(X) \oplus \mathrm{Range}(X)^{\perp}$. From the SVD of $X$ given above it is easily seen that the columns of $U_1$ and $U_2$ form bases of $\mathrm{Range}(X)$ and $\mathrm{Range}(X)^{\perp}$, respectively. The block matrix representation of $A$ is of the form
$$A = U \begin{bmatrix} A_{11} & A_{12}^H \\ A_{12} & A_{22} \end{bmatrix} U^H.$$
Consequently we obtain

$$\|AX - B\|_F^2 = \|U^H A U\, U^H X - U^H B\|_F^2
= \left\| \begin{bmatrix} A_{11} & A_{12}^H \\ A_{12} & A_{22} \end{bmatrix} \begin{bmatrix} U_1^H X \\ 0 \end{bmatrix} - \begin{bmatrix} U_1^H B \\ U_2^H B \end{bmatrix} \right\|_F^2
= \|A_{11} U_1^H X - U_1^H B\|_F^2 + \|A_{12} U_1^H X - U_2^H B\|_F^2.$$
Then,

$$\min_{A_{12}\in\mathbb{C}^{(n-r)\times r}} \|A_{12} U_1^H X - U_2^H B\|_F^2
= \min_{A_{12}\in\mathbb{C}^{(n-r)\times r}} \|A_{12} U_1^H U\Sigma V^H - U_2^H B\|_F^2
= \min_{A_{12}\in\mathbb{C}^{(n-r)\times r}} \|A_{12} U_1^H U\Sigma - U_2^H B V\|_F^2
= \min_{A_{12}\in\mathbb{C}^{(n-r)\times r}} \|A_{12}\Sigma_1 - U_2^H B V_1\|_F^2 + \|U_2^H B V_2\|_F^2.$$
Now $\|A_{12}\Sigma_1 - U_2^H B V_1\|_F^2$ is minimized if and only if $A_{12}\Sigma_1 - U_2^H B V_1 = 0$, i.e., if and only if $A_{12} = U_2^H B V_1 \Sigma_1^{-1}$. Similarly we have

$$\min_{A_{11}\in\mathbb{C}^{r\times r},\, A_{11}^H = A_{11}} \|A_{11} U_1^H X - U_1^H B\|_F^2
= \min_{A_{11}\in\mathbb{C}^{r\times r},\, A_{11}^H = A_{11}} \|A_{11}\Sigma_1 - U_1^H B V_1\|_F^2 + \|U_1^H B V_2\|_F^2.$$

Further, let $A_{11} = [a_{ij}]$ and $U_1^H B V_1 = [b_{ij}] \in \mathbb{C}^{r\times r}$. Consequently we have, using $a_{ji} = \overline{a_{ij}}$,
$$\|A_{11}\Sigma_1 - U_1^H B V_1\|_F^2
= \sum_{1\le j\le i\le r} \big(|a_{ij}\sigma_i - b_{ij}|^2 + |a_{ji}\sigma_j - b_{ji}|^2\big)
= \sum_{1\le j\le i\le r} \big(|a_{ij}\sigma_i - b_{ij}|^2 + |a_{ij}\sigma_j - \overline{b_{ji}}|^2\big).$$

Now the desired minimum can be obtained by minimizing $|a_{ij}\sigma_i - b_{ij}|^2 + |a_{ij}\sigma_j - \overline{b_{ji}}|^2$ for all $i, j = 1 : r$. By Lemma 2.5.4 the minimum is attained at
$$a_{ij} = \frac{\sigma_i b_{ij} + \sigma_j \overline{b_{ji}}}{\sigma_i^2 + \sigma_j^2}, \qquad a_{ji} = \overline{a_{ij}}, \quad \forall\, i, j = 1 : r.$$
Hence we obtain that $\|A_{11}\Sigma_1 - U_1^H B V_1\|_F^2$ is minimized by $A_{11} = K \circ (U_1^H B V_1 \Sigma_1 + \Sigma_1 V_1^H B^H U_1)$, where $K = [k_{ij}]$, $k_{ij} = \dfrac{1}{\sigma_i^2 + \sigma_j^2}$, and $\circ$ denotes the Hadamard product. Consequently, we obtain

$$A = U \begin{bmatrix} K \circ (U_1^H B V_1 \Sigma_1 + \Sigma_1 V_1^H B^H U_1) & \Sigma_1^{-1} V_1^H B^H U_2 \\ U_2^H B V_1 \Sigma_1^{-1} & A_{22} \end{bmatrix} U^H$$
$$= U_1[K \circ (U_1^H B V_1 \Sigma_1 + \Sigma_1 V_1^H B^H U_1)]U_1^H + U_1 \Sigma_1^{-1} V_1^H B^H U_2 U_2^H + U_2 U_2^H B V_1 \Sigma_1^{-1} U_1^H + U_2 A_{22} U_2^H$$
$$= U_1[K \circ (U_1^H B V_1 \Sigma_1 + \Sigma_1 V_1^H B^H U_1)]U_1^H + (B X^{\dagger})^H(I - XX^{\dagger}) + (I - XX^{\dagger}) B X^{\dagger} + (I - XX^{\dagger}) Z (I - XX^{\dagger}), \quad Z^H = Z \in \mathbb{C}^{n\times n},$$
which gives the desired result. $\blacksquare$
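A numerical sketch of the Hermitian case, mirroring the symmetric one: assemble $A$ from the formula (with the free part $A_{22}$ set to zero), confirm it is Hermitian, and check that the attained residual matches the claimed minimum value and is not beaten by random Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, r = 5, 4, 3
X = (rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))) \
    @ (rng.standard_normal((r, k)) + 1j * rng.standard_normal((r, k)))
B = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))

U, s, Vh = np.linalg.svd(X)
U1, U2 = U[:, :r], U[:, r:]
V1, V2 = Vh[:r, :].conj().T, Vh[r:, :].conj().T
s1 = s[:r]

# A11 = K o (U1^H B V1 S1 + S1 V1^H B^H U1),  A12 = U2^H B V1 S1^{-1}
C11 = U1.conj().T @ B @ V1
K = 1.0 / (s1[:, None] ** 2 + s1[None, :] ** 2)
A11 = K * (C11 * s1[None, :] + s1[:, None] * C11.conj().T)  # Hermitian block
A12 = (U2.conj().T @ B @ V1) / s1[None, :]

M = np.zeros((n, n), dtype=complex)
M[:r, :r], M[r:, :r], M[:r, r:] = A11, A12, A12.conj().T
A = U @ M @ U.conj().T
assert np.allclose(A, A.conj().T)               # A is Hermitian

# Attained residual matches the claimed minimum value ...
res2 = np.linalg.norm(A @ X - B) ** 2
claimed = (np.linalg.norm(A11 * s1[None, :] - C11) ** 2
           + np.linalg.norm(U1.conj().T @ B @ V2) ** 2
           + np.linalg.norm(U2.conj().T @ B @ V2) ** 2)
assert np.isclose(res2, claimed)

# ... and random Hermitian matrices never do better.
for _ in range(50):
    H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (H + H.conj().T) / 2
    assert np.linalg.norm(H @ X - B) ** 2 >= res2 - 1e-9
```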

Now consider skew-Hermitian matrices.

Theorem 2.5.8. Let $\mathbb{S}$ be the space of skew-Hermitian matrices and $X, B \in \mathbb{K}^{n\times k}$ with $\mathrm{rank}(X) = r$. Assume the SVD $X = U\Sigma V^H$, where $\Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}$, $\Sigma_1 = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$, $\sigma_1 > \cdots > \sigma_r > 0$, $U = [U_1 \; U_2]$, $V = [V_1 \; V_2]$. Then
$$\alpha_{\mathbb{S}} = \|K \circ (U_1^H B V_1 \Sigma_1 - \Sigma_1 V_1^H B^H U_1)\Sigma_1 - U_1^H B V_1\|_F^2 + \|U_1^H B V_2\|_F^2 + \|U_2^H B V_2\|_F^2$$
is attained at
$$A = U_1[K \circ (U_1^H B V_1 \Sigma_1 - \Sigma_1 V_1^H B^H U_1)]U_1^H - (B X^{\dagger})^H(I - XX^{\dagger}) + (I - XX^{\dagger}) B X^{\dagger} + (I - XX^{\dagger}) Z (I - XX^{\dagger}),$$
where $Z^H = -Z \in \mathbb{C}^{n\times n}$, $K = [k_{ij}]$, $k_{ij} = \dfrac{1}{\sigma_i^2 + \sigma_j^2}$, and $\circ$ denotes the Hadamard product.

Proof: The proof is similar to that of the Hermitian case. $\blacksquare$

Now consider $\mathbb{S} \in \{\mathbb{J}, \mathbb{L}\}$. Then for any given $X, B \in \mathbb{C}^{n\times k}$ we have $\|AX - B\|_F = \|MAX - MB\|_F$, since $M$ is unitary. Therefore the SILSP can be resolved for $\mathbb{S}$ by simply replacing $B$ with $MB$. Further, the matrix $A \in \mathbb{S}$ which produces the minimum is recovered from the minimizer of the modified problem, which equals $MA$.
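This reduction can be illustrated with the Hamiltonian case, $M = \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix}$: since $M$ is unitary, residual norms are preserved, and $A$ is Hamiltonian exactly when $MA$ is symmetric, so the Hamiltonian SILSP for $(X, B)$ becomes the symmetric SILSP for $(X, MB)$. A quick numerical confirmation of both facts:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 3
n = 2 * m
M = np.block([[np.zeros((m, m)), np.eye(m)],
              [-np.eye(m), np.zeros((m, m))]])   # M^T = -M, M unitary

X = rng.standard_normal((n, 4))
B = rng.standard_normal((n, 4))

# M unitary => residual norms are preserved for any A:
A = rng.standard_normal((n, n))
assert np.isclose(np.linalg.norm(A @ X - B),
                  np.linalg.norm(M @ (A @ X - B)))

# Any symmetric S yields a Hamiltonian A = M^{-1} S, i.e. (MA)^T = MA:
S = rng.standard_normal((n, n))
S = (S + S.T) / 2
A_ham = np.linalg.solve(M, S)
assert np.allclose((M @ A_ham).T, M @ A_ham)
```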

Chapter 3

Structured backward errors and pseudospectra of structured matrix pencils

Structured backward perturbation analysis plays an important role in the accuracy assessment of computed eigenelements of structured eigenvalue problems. We undertake a detailed structured backward perturbation analysis of approximate eigenelements of linearly structured matrix pencils. The structures we consider include, for example, symmetric, skew-symmetric, Hermitian, skew-Hermitian, even, odd, palindromic and Hamiltonian matrix pencils. We also analyze structured backward errors of approximate eigenvalues and structured pseudospectra of structured matrix pencils.

3.1 Introduction

Backward perturbation analysis determines the smallest perturbation for which a computed solution is an exact solution of the perturbed problem. On the other hand, condition numbers measure the sensitivity of solutions to small perturbations in the data of the problem.

Thus, backward errors, when combined with condition numbers, provide approximate upper bounds on the errors in the computed solutions.

With a view to preserving structures and their associated properties, structure-preserving algorithms for structured eigenproblems have been proposed in the literature (see, for example, [9, 10, 18, 46, 74, 75] and the references therein). Consequently, there is a growing interest in the structured perturbation analysis of structured eigenproblems (see, for example, [16, 38, 51, 54, 81, 95] for sensitivity analysis of structured eigenproblems).

The main purpose of this chapter is to undertake a detailed structured backward perturbation analysis of approximate eigenelements of linearly structured matrix pencils. Needless to say, structured backward errors, when combined with structured condition numbers, provide approximate upper bounds on the errors in the computed eigenelements. Hence structured backward perturbation analysis plays an important role in the accuracy assessment of approximate eigenelements of structured pencils. Further, it also plays an important role in the selection of an optimal structured linearization of a structured matrix polynomial.

This assumes significance due to the fact that linearization is a standard approach to solving a polynomial eigenvalue problem (see, for example, [39, 41] and the references therein).

We consider regular matrix pencils of the form $\mathrm{L}(\lambda) = A + \lambda B$, where $A$ and $B$ are square matrices of size $n$. We assume L to be linearly structured, that is, L is an element of a real or a complex linear subspace $\mathbb{S}$ of the space of pencils. More specifically, we consider ten special classes of linearly structured pencils, namely, T-symmetric, T-skew-symmetric, T-odd, T-even, T-palindromic, H-Hermitian, H-skew-Hermitian, H-even, H-odd and H-palindromic. These structures, defined in the next section, are prototypes of structured pencils which occur in many applications (see [40, 75] and the references therein). We also consider $\mathbb{S}$ to be the space of pencils whose coefficient matrices are elements of Jordan and/or Lie algebras associated with the scalar product $(x, y) \mapsto y^T M x$ or $(x, y) \mapsto y^H M x$, where $M$ is unitary and $M^T = \pm M$ or $M^H = \pm M$. For example, when $M := \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix}$, the Lie and Jordan algebras associated with the scalar product $(x, y) \mapsto y^H M x$ consist of Hamiltonian and skew-Hamiltonian matrices, respectively. The structures so considered encompass a wide variety of structured pencils and, in particular, include pencils whose coefficient matrices are Hamiltonian and skew-Hamiltonian. We show, however, that analyzing these wide classes of structured pencils ultimately boils down to analyzing one of the ten special classes of structured pencils listed above. Consequently, we consider these ten special classes of structured pencils and investigate the structured backward perturbation analysis of approximate eigenelements.

So, let $\mathbb{S}$ be the space of pencils having one of the ten structures. Let $\mathrm{L} \in \mathbb{S}$ and $(\lambda, x) \in \mathbb{C}\times\mathbb{C}^n$ with $x^H x = 1$. Then we define the structured backward error $\eta^{\mathbb{S}}(\lambda, x, \mathrm{L})$ of $(\lambda, x)$ by
$$\eta^{\mathbb{S}}(\lambda, x, \mathrm{L}) := \inf\{|||\Delta\mathrm{L}||| : \Delta\mathrm{L} \in \mathbb{S} \text{ and } \mathrm{L}(\lambda)x + \Delta\mathrm{L}(\lambda)x = 0\}.$$
Here the pencil norm $|||\mathrm{L}|||$ is given by $|||\mathrm{L}||| := \sqrt{\|A\|^2 + \|B\|^2}$, where $\mathrm{L}(z) = A + zB$ and $\|\cdot\|$ is either the spectral norm or the Frobenius norm on $\mathbb{C}^{n\times n}$. The main contributions of this chapter are as follows.

Given $(\lambda, x) \in \mathbb{C}\times\mathbb{C}^n$ with $x^H x = 1$ and $\mathrm{L} \in \mathbb{S}$, we show that there is a pencil $\mathrm{K} \in \mathbb{S}$ such that $\mathrm{L}(\lambda)x + \mathrm{K}(\lambda)x = 0$. Consequently, $\eta^{\mathbb{S}}(\lambda, x, \mathrm{L}) < \infty$. We determine $\eta^{\mathbb{S}}(\lambda, x, \mathrm{L})$ and construct a pencil $\Delta\mathrm{L} \in \mathbb{S}$ such that $|||\Delta\mathrm{L}||| = \eta^{\mathbb{S}}(\lambda, x, \mathrm{L})$ and $\mathrm{L}(\lambda)x + \Delta\mathrm{L}(\lambda)x = 0$. Moreover, we show that $\Delta\mathrm{L}$ is unique for the Frobenius norm on $\mathbb{C}^{n\times n}$, but there are infinitely many such $\Delta\mathrm{L}$ for the spectral norm on $\mathbb{C}^{n\times n}$. Further, for the spectral norm, we show how to construct all such $\Delta\mathrm{L}$. In either case, we show that if $\mathrm{K} \in \mathbb{S}$ is such that $\mathrm{L}(\lambda)x + \mathrm{K}(\lambda)x = 0$ then $\mathrm{K} = \Delta\mathrm{L} + (I - xx^H)^{\star}\mathrm{N}(I - xx^H)$ for some $\mathrm{N} \in \mathbb{S}$, where $(I - xx^H)^{\star}$ denotes the transpose or the conjugate transpose of $(I - xx^H)$, depending upon the structure defined by $\mathbb{S}$. Furthermore, we show that the unstructured backward error $\eta(\lambda, x, \mathrm{L})$ of $(\lambda, x)$ is a lower bound of $\eta^{\mathbb{S}}(\lambda, x, \mathrm{L})$ and is attained by $\eta^{\mathbb{S}}(\lambda, x, \mathrm{L})$ for certain $\lambda \in \mathbb{C}$. However, $\eta(\lambda, x, \mathrm{L}) \neq \eta^{\mathbb{S}}(\lambda, x, \mathrm{L})$ for most $\lambda \in \mathbb{C}$.
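For the unstructured backward error, the well-known closed form $\eta(\lambda, x, \mathrm{L}) = \|\mathrm{L}(\lambda)x\|_2/\sqrt{1 + |\lambda|^2}$ (for the Frobenius norm, attained by a rank-one perturbation of each coefficient) gives the lower bound mentioned above; a sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lam = 0.3 + 0.8j                                  # approximate eigenvalue
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)                            # x^H x = 1

r = (A + lam * B) @ x                             # residual L(lambda) x
den = 1.0 + abs(lam) ** 2
eta = np.linalg.norm(r) / np.sqrt(den)            # unstructured backward error

# The minimizing rank-one perturbation Delta L(z) = DA + z DB:
DA = -np.outer(r, x.conj()) / den
DB = -np.conjugate(lam) * np.outer(r, x.conj()) / den

# (lambda, x) is an exact eigenpair of the perturbed pencil ...
assert np.allclose((A + DA + lam * (B + DB)) @ x, 0)
# ... and the perturbation has pencil norm exactly eta.
assert np.isclose(np.sqrt(np.linalg.norm(DA) ** 2 + np.linalg.norm(DB) ** 2), eta)
```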

Next, we consider structured pseudospectra of structured matrix pencils. It is a well-known fact that pseudospectra of matrices and matrix pencils are powerful tools for sensitivity and perturbation analysis (see [100] and the references therein). We consider structured and unstructured $\epsilon$-pseudospectra
$$\sigma_{\epsilon}^{\mathbb{S}}(\mathrm{L}) := \{\lambda \in \mathbb{C} : \eta^{\mathbb{S}}(\lambda, \mathrm{L}) \le \epsilon\} \quad\text{and}\quad \sigma_{\epsilon}(\mathrm{L}) := \{\lambda \in \mathbb{C} : \eta(\lambda, \mathrm{L}) \le \epsilon\}$$
of L, where $\eta^{\mathbb{S}}(\lambda, \mathrm{L}) := \min_{x^H x = 1} \eta^{\mathbb{S}}(\lambda, x, \mathrm{L})$ and $\eta(\lambda, \mathrm{L}) := \min_{x^H x = 1} \eta(\lambda, x, \mathrm{L})$, respectively, are the structured and unstructured backward errors of an approximate eigenvalue $\lambda$. When L is a T-symmetric or a T-skew-symmetric pencil, we show that $\eta^{\mathbb{S}}(\lambda, \mathrm{L}) = \eta(\lambda, \mathrm{L})$ for the spectral norm and $\eta^{\mathbb{S}}(\lambda, \mathrm{L}) = \sqrt{2}\,\eta(\lambda, \mathrm{L})$ for the Frobenius norm. Consequently, for these structures, we show that $\sigma_{\epsilon}^{\mathbb{S}}(\mathrm{L}) = \sigma_{\epsilon}(\mathrm{L})$ for the spectral norm and $\sigma_{\epsilon}^{\mathbb{S}}(\mathrm{L}) = \sigma_{\epsilon/\sqrt{2}}(\mathrm{L})$ for the Frobenius norm. For the rest of the structures, we show that there is a set $\Omega \subset \mathbb{C}$ such that $\sigma_{\epsilon}^{\mathbb{S}}(\mathrm{L}) \cap \Omega = \sigma_{\epsilon}(\mathrm{L}) \cap \Omega$. For example, $\Omega = \mathbb{R}$ when L is H-Hermitian or H-skew-Hermitian, and $\Omega = i\mathbb{R}$ when L is H-even or H-odd. Often the spectrum of L is symmetric with respect to $\Omega$. When $\Omega$ does not contain an eigenvalue of L, it is of practical importance to determine the smallest perturbation $\Delta\mathrm{L} \in \mathbb{S}$ of L such that $\mathrm{L} + \Delta\mathrm{L}$ has an eigenvalue in $\Omega$. We show how to construct such a $\Delta\mathrm{L}$. Indeed, we show that the equality $\sigma_{\epsilon}^{\mathbb{S}}(\mathrm{L}) \cap \Omega = \sigma_{\epsilon}(\mathrm{L}) \cap \Omega$ plays a crucial role in the construction of such a $\Delta\mathrm{L}$.