
Distance to nearest singular matrix polynomials

Given a square regular matrix polynomial $P(\lambda)$, the distance $\delta_s^{(n-1)}(P)$ to a nearest singular matrix polynomial is an important special case of the distance to nearest matrix polynomials of normal rank at most $r$ considered in the previous section. In this section we formulate the computation of $\delta_s^{(n-1)}(P)$ in terms of different optimization problems.

The characterization of singular matrix polynomials in Corollary 5.2.1 and the remark following it implies that for a given $\Delta P(\lambda) = \sum_{i=0}^{k} \lambda^i \Delta A_i$, the polynomial $(P+\Delta P)(\lambda)$ is singular if either $C_j(P+\Delta P)$ or $C_j((P+\Delta P)^T)$ is rank deficient for some $j$ with $0 \leq j \leq k(n-1)$. For $j = 0, \ldots, k(n-1)$ and $s = 2, F$, let
\[
\gamma_j^{(s)} = \inf\{|||\Delta P|||_s : C_j(P+\Delta P) \text{ is rank deficient}\}, \tag{5.4.1}
\]
\[
\eta_j^{(s)} = \inf\{|||\Delta P|||_s : C_j((P+\Delta P)^T) \text{ is rank deficient}\}. \tag{5.4.2}
\]
Observe that $\gamma_j^{(s)}$ (respectively $\eta_j^{(s)}$) computes the distance to a nearest singular matrix polynomial with a right (left) minimal index in the set $\{0, \ldots, j\}$. Also, clearly
\[
\gamma_0^{(s)} \geq \cdots \geq \gamma_{k(n-1)}^{(s)} \quad \text{and} \quad \eta_0^{(s)} \geq \cdots \geq \eta_{k(n-1)}^{(s)}.
\]

We will refer to these as the $\gamma$-sequence and the $\eta$-sequence respectively. The following theorem gives the distance to a nearest singular matrix polynomial from $P(\lambda)$ and its connection with the right and left minimal indices of a nearest singular matrix polynomial.

Theorem 5.4.1. Let $P(\lambda) = \sum_{i=0}^{k} \lambda^i A_i$ be an $n \times n$ matrix polynomial of degree $k$. Suppose that its $\gamma$-sequence and $\eta$-sequence are given by (5.4.1) and (5.4.2) respectively with $s = 2$ or $F$. Let $j_0$ be the smallest index $j$ such that $\gamma_{j_0}^{(s)} = \min_{0 \leq j \leq k(n-1)} \gamma_j^{(s)}$ and $i_0$ be the smallest index $i$ such that $\eta_{i_0}^{(s)} = \min_{0 \leq i \leq k(n-1)} \eta_i^{(s)}$. Then the following hold.

(a) If either $j_0$ or $i_0$ is equal to $0$ or $k(n-1)$, then $\delta_s^{(n-1)}(P) = \min\{\gamma_0^{(s)}, \eta_0^{(s)}\}$. In particular,
\[
\delta_F^{(n-1)}(P) = \min\left\{ \sigma_{\min}\!\left(\begin{bmatrix} A_0^T & \cdots & A_k^T \end{bmatrix}^T\right),\ \sigma_{\min}\!\left(\begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix}\right) \right\}.
\]

(b) There exists a matrix polynomial $(P+\Delta P)(\lambda)$ such that $|||\Delta P|||_s = \delta_s^{(n-1)}(P)$ with $j_0$ ($i_0$) as its least right (left) minimal index. Also the right (left) minimal indices of any other singular matrix polynomial $Q(\lambda)$ satisfying $|||Q-P|||_s = \delta_s^{(n-1)}(P)$ cannot be less than $j_0$ ($i_0$).

(c) The distance $\delta_s^{(n-1)}(P) = \gamma_{j_0}^{(s)} = \eta_{i_0}^{(s)} = \gamma_{k(n-1)-i_0}^{(s)} = \eta_{k(n-1)-j_0}^{(s)}$.

(d) The largest left (right) minimal index of any singular matrix polynomial $(P+\Delta P)(\lambda) = \sum_{i=0}^{k} \lambda^i (A_i + \Delta A_i)$ such that $|||\Delta P|||_s = \delta_s^{(n-1)}(P)$ is at most $k(n-1)-j_0$ ($k(n-1)-i_0$).

(e) The distance $\delta_s^{(n-1)}(P) = \min\{\gamma_q^{(s)}, \eta_q^{(s)}\}$, where $q := \lfloor k(n-1)/2 \rfloor$. Moreover, $\delta_s^{(n-1)}(P) = \gamma_q^{(s)} = \eta_q^{(s)}$ if and only if $1 \leq i_0, j_0 \leq q$.

Proof. If $i_0 = 0$ or $j_0 = 0$ then part (a) clearly holds. Suppose that $j_0 = k(n-1)$. Then $\gamma_{k(n-1)-1}^{(s)} > \gamma_{k(n-1)}^{(s)} = \delta_s^{(n-1)}(P)$. Let $\Delta P(\lambda) = \sum_{i=0}^{k} \lambda^i \Delta A_i$ be such that $|||\Delta P|||_s = \delta_s^{(n-1)}(P)$ and $C_{k(n-1)}(P+\Delta P)$ is rank deficient. For $x_i \in \mathbb{C}^n$, $i = 0, \ldots, k(n-1)$, let $x = \begin{bmatrix} x_0^T & \cdots & x_{k(n-1)}^T \end{bmatrix}^T \neq 0$ be such that $C_{k(n-1)}(P+\Delta P)x = 0$. Then $x_0$ and $x_{k(n-1)}$ are nonzero vectors, as otherwise in the first case $\hat{x} = \begin{bmatrix} x_1^T & \cdots & x_{k(n-1)}^T \end{bmatrix}^T$ and in the second case $\hat{x} = \begin{bmatrix} x_0^T & \cdots & x_{k(n-1)-1}^T \end{bmatrix}^T$ are nonzero vectors in the null space of $C_{k(n-1)-1}(P+\Delta P)$, implying that $j_0 = k(n-1)-1$ since $\gamma_{k(n-1)}^{(s)} = |||\Delta P|||_s \geq \gamma_{k(n-1)-1}^{(s)}$, which is impossible. Therefore $k(n-1)$ is a right minimal index of $(P+\Delta P)(\lambda)$, so that $0$ is a left minimal index of $(P+\Delta P)(\lambda)$ and $\delta_s^{(n-1)}(P) = |||\Delta P|||_s = \eta_0^{(s)}$. The proof for the case $i_0 = k(n-1)$ follows by replacing the convolution matrices in the preceding arguments by those with respect to $(P+\Delta P)(\lambda)^T$. In particular, $\gamma_0^{(F)} = \sigma_{\min}\!\left(\begin{bmatrix} A_0^T & \cdots & A_k^T \end{bmatrix}^T\right)$ and $\eta_0^{(F)} = \sigma_{\min}\!\left(\begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix}\right)$. This completes the proof of part (a).

To prove part (b), let $\mathcal{S}$ be the collection of all singular matrix polynomials $(P+\Delta P)(\lambda) = \sum_{i=0}^{k} \lambda^i (A_i + \Delta A_i)$ such that $|||\Delta P|||_s = \delta_s^{(n-1)}(P)$. Clearly there exists a perturbation $\Delta P(\lambda)$ of $P(\lambda)$ such that $|||\Delta P|||_s = \gamma_{j_0}^{(s)}$. Then $(P+\Delta P)(\lambda)$ is singular and one of the numbers $\{0, \ldots, j_0\}$ is a right minimal index. Since $\gamma_{j_0}^{(s)} = \delta_s^{(n-1)}(P)$, we have $(P+\Delta P)(\lambda) \in \mathcal{S}$. If $j_0$ is not a right minimal index of $(P+\Delta P)(\lambda)$, then $C_j(P+\Delta P)$ is singular for some $j < j_0$ and $\gamma_j^{(s)} \leq |||\Delta P|||_s = \gamma_{j_0}^{(s)}$. But this contradicts the minimality of $j_0$. Suppose that there exists a matrix polynomial $(P+\widehat{\Delta P})(\lambda)$ in $\mathcal{S}$ such that its smallest right minimal index is $\hat{j}$ and $\hat{j} < j_0$. Then $C_{\hat{j}}(P+\widehat{\Delta P})x = 0$ for $x = \begin{bmatrix} x_0^T & \cdots & x_{\hat{j}}^T \end{bmatrix}^T$ where $x_i \in \mathbb{C}^n$, $i = 0, \ldots, \hat{j}$, and $x_0$ and $x_{\hat{j}}$ are nonzero. This is because otherwise $(P+\widehat{\Delta P})(\lambda)$ has a right minimal index that is strictly less than $\hat{j}$, which is impossible. Then $\gamma_{\hat{j}}^{(s)} \leq |||\widehat{\Delta P}|||_s = \gamma_{j_0}^{(s)}$, which again contradicts the minimality of $j_0$. This completes the proof of part (b).

From the definitions of $j_0$ and $i_0$ it is clear that $\delta_s^{(n-1)}(P) = \gamma_{j_0}^{(s)} = \eta_{i_0}^{(s)}$. Therefore, to prove part (c), we establish that $\delta_s^{(n-1)}(P) = \gamma_{k(n-1)-i_0}^{(s)} = \eta_{k(n-1)-j_0}^{(s)}$. By part (b) there exists $\Delta P(\lambda) = \sum_{i=0}^{k} \lambda^i \Delta A_i$ such that $|||\Delta P|||_s = \delta_s^{(n-1)}(P)$ and $(P+\Delta P)(\lambda)$ has right minimal index $j_0$. So $(P+\Delta P)(\lambda)$ has a vector polynomial of degree at most $k(n-1)-j_0$ in $N_l(P+\Delta P)$. Therefore $|||\Delta P|||_s \geq \eta_{k(n-1)-j_0}^{(s)}$. But this inequality cannot be strict, as $|||\Delta P|||_s = \delta_s^{(n-1)}(P)$. The proof of $\delta_s^{(n-1)}(P) = \gamma_{k(n-1)-i_0}^{(s)}$ follows from similar arguments, since part (b) assures the existence of a matrix polynomial $\Delta P(\lambda) = \sum_{i=0}^{k} \lambda^i \Delta A_i$ such that $(P+\Delta P)(\lambda)$ has a left minimal index $i_0$ and $|||\Delta P|||_s = \delta_s^{(n-1)}(P)$. This completes the proof of part (c).

From parts (b) and (c) it is clear that any singular matrix polynomial $(P+\Delta P)(\lambda) = \sum_{i=0}^{k} \lambda^i (A_i + \Delta A_i)$ such that $|||\Delta P|||_s = \delta_s^{(n-1)}(P)$ has vector polynomials of degree at most $k(n-1)-j_0$ and $k(n-1)-i_0$ in $N_l(P+\Delta P)$ and $N_r(P+\Delta P)$ respectively. This proves part (d).

To prove part (e), observe that if one or both of $j_0$ and $i_0$ is either $0$ or $k(n-1)$, then the statement holds trivially due to part (a) and the fact that in such a case all the numbers in at least one of the $\gamma$-sequence and the $\eta$-sequence are equal. Therefore, without loss of generality it may be assumed that $1 \leq j_0, i_0 \leq k(n-1)-1$. Suppose $1 \leq j_0 \leq q$. As the $\gamma$-sequence is nonincreasing and $\delta_s^{(n-1)}(P) = \gamma_{j_0}^{(s)}$, it follows that $\delta_s^{(n-1)}(P) = \gamma_q^{(s)}$. If $j_0 > q$, then $k(n-1)-j_0 \leq q$ and by part (c), $\delta_s^{(n-1)}(P) = \eta_{k(n-1)-j_0}^{(s)} \geq \eta_q^{(s)}$. As this inequality cannot be strict, in this case $\delta_s^{(n-1)}(P) = \eta_q^{(s)}$. If $1 \leq i_0, j_0 \leq q$, then as the $\gamma$-sequence and $\eta$-sequence are nonincreasing, evidently $\delta_s^{(n-1)}(P) = \gamma_q^{(s)} = \eta_q^{(s)}$. Conversely, suppose $\delta_s^{(n-1)}(P) = \gamma_q^{(s)} = \eta_q^{(s)}$, and if possible suppose that at least one among $i_0$ and $j_0$ is greater than $q$. As $\delta_s^{(n-1)}(P) = \min\{\gamma_q^{(s)}, \eta_q^{(s)}\}$, only one of them, say $i_0$, is greater than $q$. Then clearly $\eta_q^{(s)} > \eta_{i_0}^{(s)} = \delta_s^{(n-1)}(P)$, which contradicts the assumption that $\delta_s^{(n-1)}(P) = \eta_q^{(s)}$. Similarly, if $j_0 > q$, then it is easy to see that $\gamma_q^{(s)} > \gamma_{j_0}^{(s)} = \delta_s^{(n-1)}(P)$, leading to the same contradiction. This completes the proof of part (e).

Remark 5.4.2. Observe that Theorem 5.4.1 holds for any choice of norm on the matrix polynomials. In [12] it was established that $\delta_F^{(n-1)}(P) = \gamma_{k(n-1)}^{(F)} = \min_{0 \leq j \leq k(n-1)} \gamma_j^{(F)}$ when $P(\lambda)$ is a matrix pencil. Part (e) of Theorem 5.4.1 is a modified form of this result that also holds for matrix polynomials. It may be utilized to formulate the computation of the distance to a nearest singular matrix polynomial as an optimization problem.
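The two quantities $\gamma_0^{(F)}$ and $\eta_0^{(F)}$ appearing in part (a) of Theorem 5.4.1 are directly computable as smallest singular values of the stacked and concatenated coefficient matrices, and their minimum is always an upper bound for $\delta_F^{(n-1)}(P)$ since the $\gamma$- and $\eta$-sequences are nonincreasing. A minimal NumPy sketch (the function name is illustrative; the text prescribes no implementation):

```python
import numpy as np

def gamma0_eta0(coeffs):
    """Compute gamma_0^(F) and eta_0^(F) of Theorem 5.4.1(a) as the smallest
    singular values of [A_0^T ... A_k^T]^T and [A_0 ... A_k] respectively,
    where coeffs = [A_0, ..., A_k].  (Function name is illustrative.)"""
    stacked = np.vstack(coeffs)   # [A_0^T ... A_k^T]^T, shape ((k+1)n, n)
    concat = np.hstack(coeffs)    # [A_0 ... A_k], shape (n, (k+1)n)
    gamma0 = np.linalg.svd(stacked, compute_uv=False)[-1]
    eta0 = np.linalg.svd(concat, compute_uv=False)[-1]
    return gamma0, eta0
```

By part (a), $\min\{\gamma_0^{(F)}, \eta_0^{(F)}\}$ equals $\delta_F^{(n-1)}(P)$ whenever $j_0$ or $i_0$ is $0$ or $k(n-1)$.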

The next result formulates the computation of $\delta_s^{(n-1)}(P)$ for $s = 2, F$ in terms of a variable projection least squares problem. For $s = F$, the formulation is based on part (e) of Theorem 5.4.1. Since a suitable variable projection least squares formulation for computing the $\eta$-sequence is not available for $s = 2$, the formulation for this case is based on Corollary 5.2.1.

Theorem 5.4.3. Let $P(\lambda) = \sum_{i=0}^{k} \lambda^i A_i$ be an $n \times n$ matrix polynomial of degree $k$. For $q \in \left\{ k(n-1), \lfloor k(n-1)/2 \rfloor \right\}$, let $S_q$ be the collection of all nonzero vectors in $\mathbb{C}^{(q+1)n}$. Suppose that the vectors $x \in S_q$ are partitioned as $x = \begin{bmatrix} x_0^T & \cdots & x_q^T \end{bmatrix}^T \in \mathbb{C}^{(q+1)n}$ where $x_i \in \mathbb{C}^n$, $i = 0, \ldots, q$. Let
\[
X_q = \begin{bmatrix}
x_0 & x_1 & \cdots & x_k & \cdots & x_q & & \\
 & x_0 & x_1 & \cdots & x_k & \cdots & x_q & \\
 & & \ddots & \ddots & & & \ddots & \ddots \\
 & & & x_0 & x_1 & \cdots & x_k & \cdots & x_q
\end{bmatrix}.
\]
Then for $q = \lfloor k(n-1)/2 \rfloor$,
\[
\delta_F^{(n-1)}(P) = \min\left\{ \inf_{x \in S_q} \left\| \begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix} X_q X_q^{\dagger} \right\|_F,\ \inf_{x \in S_q} \left\| \begin{bmatrix} A_0^T & \cdots & A_k^T \end{bmatrix} X_q X_q^{\dagger} \right\|_F \right\} \tag{5.4.3}
\]
and for $q = k(n-1)$,
\[
\delta_2^{(n-1)}(P) = \inf_{x \in S_q} \left\| \begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix} X_q X_q^{\dagger} \right\|_2. \tag{5.4.4}
\]

Proof. By Theorem 5.4.1(e), $\delta_F^{(n-1)}(P) = \min\{\gamma_q^{(F)}, \eta_q^{(F)}\}$, where $q = \lfloor k(n-1)/2 \rfloor$, with $\gamma_q^{(F)}$ and $\eta_q^{(F)}$ as in (5.4.1) and (5.4.2) respectively. Let $\Delta P(\lambda) = \sum_{i=0}^{k} \lambda^i \Delta A_i$ be such that $C_q(P+\Delta P)$ is rank deficient. Then $C_q(P+\Delta P)x = 0$ for some $x \in S_q$. By arguing as in the proof of Theorem 5.3.1, this is equivalent to
\[
\begin{bmatrix} \Delta A_0 & \cdots & \Delta A_k \end{bmatrix} X_q = -\begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix} X_q.
\]
Therefore, by Theorem 1.5.1, a choice of $\begin{bmatrix} \Delta A_0 & \cdots & \Delta A_k \end{bmatrix}$ satisfying the above equation that is also minimal with respect to the Frobenius norm is given by
\[
\begin{bmatrix} \Delta A_0 & \cdots & \Delta A_k \end{bmatrix} = -\begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix} X_q X_q^{\dagger}.
\]
This implies that
\[
\gamma_q^{(F)} = \inf_{x \in S_q} \left\| \begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix} X_q X_q^{\dagger} \right\|_F.
\]
Similarly, it follows that
\[
\eta_q^{(F)} = \inf_{x \in S_q} \left\| \begin{bmatrix} A_0^T & \cdots & A_k^T \end{bmatrix} X_q X_q^{\dagger} \right\|_F.
\]
This establishes (5.4.3).

By Corollary 5.2.1, $\delta_2^{(n-1)}(P) = \inf\{|||\Delta P|||_2 : C_{k(n-1)}(P+\Delta P) \text{ is rank deficient}\}$. Therefore, (5.4.4) may be established via identical arguments.

The details concerning the strategy for computing the distance to singularity based on the above theorem are discussed in Section 5.7. For the Frobenius norm it is shown to be numerically advantageous over the computation based on the result in [12].
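For a fixed $x \in S_q$, the objectives in (5.4.3) and (5.4.4) are straightforward to evaluate; this evaluation is the inner step of any optimization over $x$. The following sketch builds $X_q$ from the blocks $x_0, \ldots, x_q$ and evaluates $\| [A_0 \ \cdots \ A_k]\, X_q X_q^{\dagger} \|$. The helper name is hypothetical and the Moore–Penrose pseudoinverse is taken via NumPy's `pinv`; a practical solver would minimize this quantity over $x$:

```python
import numpy as np

def gamma_objective(coeffs, x_blocks, norm='fro'):
    """Evaluate ||[A_0 ... A_k] X_q X_q^+|| for the variable projection
    objective of Theorem 5.4.3, where coeffs = [A_0, ..., A_k] and
    x_blocks = [x_0, ..., x_q] partitions a nonzero x in S_q.
    The infimum of this quantity over x in S_q gives gamma_q."""
    k = len(coeffs) - 1
    n = coeffs[0].shape[0]
    q = len(x_blocks) - 1
    # X_q has k+1 block rows; block row r carries x_0, ..., x_q
    # shifted r columns to the right (q + k + 1 columns in total).
    Xq = np.zeros(((k + 1) * n, q + k + 1), dtype=complex)
    for r in range(k + 1):
        for j, xj in enumerate(x_blocks):
            Xq[r * n:(r + 1) * n, r + j] = xj
    A = np.hstack(coeffs)  # [A_0 ... A_k]
    # X_q X_q^+ is the orthogonal projection onto the range of X_q.
    return np.linalg.norm(A @ Xq @ np.linalg.pinv(Xq), norm)
```

For an already singular input such as $P(\lambda) = (1+\lambda)\,\mathrm{diag}(1,0)$ with $x_0$ the second unit vector, the objective vanishes, consistent with the distance being zero.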

Alternatively, by using Theorem 5.2.4, the distance to singularity may be formulated in terms of another optimization problem that also involves a variable projection.

Theorem 5.4.4. Let $P(\lambda) = \sum_{i=0}^{k} \lambda^i A_i$ be an $n \times n$ matrix polynomial of degree $k$. Then for
\[
X = \begin{bmatrix}
x_0 & x_1 & x_2 & \cdots & x_k & \cdots & x_{kn} \\
 & x_0 & x_1 & \cdots & x_{k-1} & \cdots & x_{kn-1} \\
 & & x_0 & \cdots & x_{k-2} & \cdots & x_{kn-2} \\
 & & & \ddots & \vdots & & \vdots \\
 & & & & x_0 & \cdots & x_{kn-k}
\end{bmatrix}
\]
where $x_i \in \mathbb{C}^n$, $i = 0, \ldots, kn$, and $x_0 \neq 0$,
\[
\delta_s^{(n-1)}(P) = \inf_{\substack{x_i \in \mathbb{C}^n \\ x_0 \neq 0}} \left\| \begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix} X X^{\dagger} \right\|_s, \quad s = 2, F. \tag{5.4.5}
\]

Proof. Let $(P+\Delta P)(\lambda)$ be singular, where $\Delta P(\lambda) = \sum_{i=0}^{k} \lambda^i \Delta A_i$. Then by Theorem 5.2.4,
\[
\begin{bmatrix}
A_0 + \Delta A_0 & & & & \\
\vdots & \ddots & & & \\
A_k + \Delta A_k & & \ddots & & \\
 & \ddots & & \ddots & \\
 & & A_k + \Delta A_k & \cdots & A_0 + \Delta A_0
\end{bmatrix}
\begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_{kn} \end{bmatrix} = 0,
\]
where $x_i \in \mathbb{C}^n$, $0 \leq i \leq kn$, and $x_0 \neq 0$. This can be written as
\[
\begin{bmatrix} A_0 + \Delta A_0 & \cdots & A_k + \Delta A_k \end{bmatrix} X = 0,
\]
where $X$ is as in the statement of the theorem with $x_0 \neq 0$. This implies
\[
\begin{bmatrix} \Delta A_0 & \cdots & \Delta A_k \end{bmatrix} X = -\begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix} X,
\]
and by Theorem 1.5.1, a minimum 2 or Frobenius norm solution of this equation is given by
\[
\begin{bmatrix} \Delta A_0 & \cdots & \Delta A_k \end{bmatrix} = -\begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix} X X^{\dagger}.
\]
Therefore
\[
\delta_s^{(n-1)}(P) = \inf_{\substack{x_i \in \mathbb{C}^n \\ x_0 \neq 0}} \left\| \begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix} X X^{\dagger} \right\|_s, \quad s = 2 \text{ or } F.
\]
This completes the proof.
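The objective of (5.4.5) can likewise be evaluated for a candidate tuple $(x_0, \ldots, x_{kn})$ by assembling the upper triangular block Toeplitz matrix $X$ of Theorem 5.4.4. A sketch (the function name is illustrative, and the pseudoinverse is again taken via NumPy's `pinv`):

```python
import numpy as np

def singularity_objective(coeffs, x_blocks):
    """Evaluate ||[A_0 ... A_k] X X^+||_F, the quantity minimized in (5.4.5)
    of Theorem 5.4.4, for coeffs = [A_0, ..., A_k] and
    x_blocks = [x_0, ..., x_m] with m = k*n and x_0 != 0."""
    k = len(coeffs) - 1
    n = coeffs[0].shape[0]
    m = len(x_blocks) - 1
    if not np.any(x_blocks[0]):
        raise ValueError("x_0 must be nonzero")
    # Upper triangular block Toeplitz X: block row r holds x_0, ..., x_{m-r}
    # starting at column r, so column j of [A_0 ... A_k] X is sum_i A_i x_{j-i}.
    X = np.zeros(((k + 1) * n, m + 1), dtype=complex)
    for r in range(k + 1):
        for j in range(m - r + 1):
            X[r * n:(r + 1) * n, r + j] = x_blocks[j]
    A = np.hstack(coeffs)
    return np.linalg.norm(A @ X @ np.linalg.pinv(X), 'fro')
```

The objective vanishes exactly when $[A_0 \ \cdots \ A_k] X = 0$, i.e. when the chosen blocks satisfy the convolution system of the proof, so already singular polynomials yield the value zero at a suitable $x$.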

Remark 5.4.5. In Theorem 4.4.1 it was shown that the distance with respect to the norms $|||\cdot|||_s$, $s = 2, F$, from an $n \times n$ matrix polynomial $P(\lambda) = \sum_{i=0}^{k} \lambda^i A_i$ of degree $k$ to a nearest matrix polynomial with a Jordan chain of length $kn$ corresponding to zero is given by
\[
\inf_{\substack{x_i \in \mathbb{C}^n \\ x_0 \neq 0}} \left\| \begin{bmatrix} A_0 & \cdots & A_k \end{bmatrix} \hat{X} \hat{X}^{\dagger} \right\|_s
\]
where
\[
\hat{X} = \begin{bmatrix}
x_0 & x_1 & x_2 & \cdots & x_k & \cdots & x_{kn-1} \\
 & x_0 & x_1 & \cdots & x_{k-1} & \cdots & x_{kn-2} \\
 & & x_0 & \cdots & x_{k-2} & \cdots & x_{kn-3} \\
 & & & \ddots & \vdots & & \vdots \\
 & & & & x_0 & \cdots & x_{kn-k-1}
\end{bmatrix}.
\]
Observe that the matrix $\hat{X}$ is obtained by removing the last column of the matrix $X$ involved in the projection $X X^{\dagger}$ in (5.4.5). This suggests an interesting relationship between the two distances.