
6.2 Condition number of Fiedler pencils

6.2.1 Comparison of Condition Numbers for QEP

Consider the quadratic eigenvalue problem (QEP)
\[
Q(\lambda)x = \bigl(\lambda^2A_2+\lambda A_1+A_0\bigr)x = 0, \tag{6.11}
\]
where $A_2, A_1, A_0 \in \mathbb{C}^{n\times n}$. The companion linearization of $Q(\lambda)$ is given by
\[
L(\lambda)v = \left(\lambda\begin{bmatrix} A_2 & 0\\ 0 & I_n\end{bmatrix}+\begin{bmatrix} A_1 & A_0\\ -I_n & 0\end{bmatrix}\right)\begin{bmatrix}\lambda x\\ x\end{bmatrix} = 0, \tag{6.12}
\]
and the left eigenvector of $L(\lambda)$ is given by $u = \begin{bmatrix} I_n\\ (\lambda A_2+A_1)^{*}\end{bmatrix}y$, where $y$ is the left eigenvector of $Q(\lambda)$. Let
\[
A_0 = CB
\]
be a full-rank factorization, where $C\in\mathbb{C}^{n\times r}$, $B\in\mathbb{C}^{r\times n}$, and $r$ is the rank of $A_0$. Then the QEP in (6.11) can be written as [37]
\[
G(\lambda)x = \bigl(\lambda A_2+A_1+C(\lambda I_r)^{-1}B\bigr)x = 0. \tag{6.13}
\]
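As a quick numerical illustration (our own sketch, not part of the thesis), one can form the companion pencil in (6.12) for a random quadratic and verify that each of its $2n$ eigenvalues makes $Q(\lambda)$ singular. All variable names below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A2, A1, A0 = (rng.standard_normal((n, n)) for _ in range(3))

# Companion pencil L(lambda) = lambda*X + Y as in (6.12):
# X = diag(A2, I_n),  Y = [[A1, A0], [-I_n, 0]].
Z = np.zeros((n, n))
X = np.block([[A2, Z], [Z, np.eye(n)]])
Y = np.block([[A1, A0], [-np.eye(n), Z]])

# lambda*X + Y is singular exactly at the eigenvalues of -X^{-1} Y
# (X is invertible here since a random A2 is generically nonsingular).
evals = np.linalg.eigvals(-np.linalg.solve(X, Y))

# Each eigenvalue should make Q(lambda) = lambda^2 A2 + lambda A1 + A0 singular.
Q = lambda lam: lam**2 * A2 + lam * A1 + A0
residuals = [np.linalg.svd(Q(lam), compute_uv=False)[-1] for lam in evals]
print(f"{2*n} pencil eigenvalues, largest singularity residual {max(residuals):.2e}")
```

The smallest singular value of $Q(\lambda)$ at each computed eigenvalue is of the order of roundoff, confirming that the pencil and the quadratic share their spectrum.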

Then we have
\[
C(\lambda)\tilde v = \left(\lambda\begin{bmatrix} A_2 & 0\\ 0 & -I_r\end{bmatrix}+\begin{bmatrix} A_1 & C\\ B & 0\end{bmatrix}\right)\begin{bmatrix} x\\ \lambda^{-1}Bx\end{bmatrix} = 0, \tag{6.14}
\]
where $\tilde v = \begin{bmatrix} x\\ \lambda^{-1}Bx\end{bmatrix}$ is a right eigenvector of $C(\lambda)$ and $\tilde u = \begin{bmatrix} y\\ (\lambda^{-1}C)^{*}y\end{bmatrix}$ is a left eigenvector of $C(\lambda)$.
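A numerical sketch of the trimmed linearization (6.14) (our own illustration; `Cf` and `Bf` are arbitrary names for the factors $C$ and $B$): for a quadratic with a rank-$r$ trailing coefficient $A_0 = CB$, the pencil has $n+r$ eigenvalues, each of which makes $Q(\lambda)$ singular, and its right eigenvectors carry the predicted $[x;\ \lambda^{-1}Bx]$ structure.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 5, 2
A2 = rng.standard_normal((n, n))
A1 = rng.standard_normal((n, n))
Cf = rng.standard_normal((n, r))   # factor C of the full-rank decomposition
Bf = rng.standard_normal((r, n))   # factor B
A0 = Cf @ Bf                       # rank-r coefficient A0 = C B

# Trimmed pencil C(lambda) = lambda*diag(A2, -I_r) + [[A1, C], [B, 0]] as in (6.14).
X = np.block([[A2, np.zeros((n, r))], [np.zeros((r, n)), -np.eye(r)]])
Y = np.block([[A1, Cf], [Bf, np.zeros((r, r))]])
evals, vecs = np.linalg.eig(-np.linalg.solve(X, Y))

# Each of the n + r pencil eigenvalues makes Q(lambda) singular
# (the remaining n - r eigenvalues of Q sit at lambda = 0, since rank(A0) = r).
Q = lambda lam: lam**2 * A2 + lam * A1 + A0
assert all(np.linalg.svd(Q(lam), compute_uv=False)[-1] < 1e-6 for lam in evals)

# Check the eigenvector structure v~ = [x; Bx/lambda] at one eigenvalue.
k = int(np.argmax(np.abs(evals)))
lam, x, s = evals[k], vecs[:n, k], vecs[n:, k]
print(np.linalg.norm(s - Bf @ x / lam))  # close to 0
```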

The following result compares the condition number of the quadratic eigenvalue problem with that of the companion form given in (6.14).

Theorem 6.2.8. Let $Q(\lambda)$ be as in (6.11) and $C(\lambda)$ be as in (6.14). Then
\[
\frac{|\lambda|}{\sqrt{1+|\lambda|^2}}\,\|\tilde u\|_2\|\tilde v\|_2 \le \frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,Q)} \le \|\tilde u\|_2\|\tilde v\|_2
\]
and
\[
\frac{|\lambda|}{\sqrt{1+|\lambda|^2}}\left[\left(1+\frac{\sigma_{\min}^2(B)}{|\lambda|^2}\right)\left(1+\frac{\sigma_{\min}^2(C)}{|\lambda|^2}\right)\right]^{1/2}
\le \frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,Q)}
\le \left[\left(1+\frac{\sigma_{\max}^2(B)}{|\lambda|^2}\right)\left(1+\frac{\sigma_{\max}^2(C)}{|\lambda|^2}\right)\right]^{1/2}.
\]

Proof. Let $x$ and $y$ be the right and left eigenvectors of $Q(\lambda)$ corresponding to the eigenvalue $\lambda$, with $\|x\|_2 = 1$ and $\|y\|_2 = 1$. Since $\tilde u^*C'(\lambda)\tilde v = y^*G'(\lambda)x$, and since $Q(\lambda) = \lambda G(\lambda)$ together with $y^*G(\lambda)x = 0$ gives $y^*Q'(\lambda)x = \lambda\,y^*G'(\lambda)x$, we obtain
\begin{align*}
\frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,Q)}
&= \frac{\|(1,\lambda)\|_2\,\|\tilde u\|_2\|\tilde v\|_2}{|y^*G'(\lambda)x|}\cdot\frac{|y^*Q'(\lambda)x|}{\|(1,\lambda,\lambda^2)\|_2\,\|x\|_2\|y\|_2}\\
&= \frac{\|(1,\lambda)\|_2\,\|\tilde u\|_2\|\tilde v\|_2}{\|(1,\lambda,\lambda^2)\|_2}\cdot\frac{|y^*Q'(\lambda)x|}{|y^*G'(\lambda)x|}\\
&= \frac{\|(1,\lambda)\|_2\,\|\tilde u\|_2\|\tilde v\|_2}{\|(1,\lambda,\lambda^2)\|_2}\,|\lambda|
= \frac{\|(\lambda,\lambda^2)\|_2}{\|(1,\lambda,\lambda^2)\|_2}\,\|\tilde u\|_2\|\tilde v\|_2.
\end{align*}
Now
\[
\frac{\|(\lambda,\lambda^2)\|_2^2}{\|(1,\lambda,\lambda^2)\|_2^2} = \frac{|\lambda|^2+|\lambda|^4}{1+|\lambda|^2+|\lambda|^4} \le 1
\]
and
\[
\frac{\|(\lambda,\lambda^2)\|_2^2}{\|(1,\lambda,\lambda^2)\|_2^2} = 1-\frac{1}{1+|\lambda|^2+|\lambda|^4} \ge 1-\frac{1}{1+|\lambda|^2} = \frac{|\lambda|^2}{1+|\lambda|^2}.
\]
So
\[
\frac{|\lambda|}{\sqrt{1+|\lambda|^2}} \le \frac{\|(\lambda,\lambda^2)\|_2}{\|(1,\lambda,\lambda^2)\|_2} \le 1.
\]
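The scalar bound $|\lambda|/\sqrt{1+|\lambda|^2} \le \|(\lambda,\lambda^2)\|_2/\|(1,\lambda,\lambda^2)\|_2 \le 1$ depends only on $|\lambda|$ and is easy to spot-check numerically (our own sketch):

```python
import numpy as np

# Check  t/sqrt(1+t^2) <= sqrt(t^2+t^4)/sqrt(1+t^2+t^4) <= 1  for t = |lambda|.
t = np.logspace(-3, 3, 601)
ratio = np.sqrt(t**2 + t**4) / np.sqrt(1 + t**2 + t**4)
lower = t / np.sqrt(1 + t**2)
assert np.all(lower <= ratio) and np.all(ratio <= 1.0)
print("scalar bounds verified on", t.size, "sample magnitudes")
```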

Hence
\[
\frac{|\lambda|}{\sqrt{1+|\lambda|^2}}\,\|\tilde u\|_2\|\tilde v\|_2 \le \frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,Q)} \le \|\tilde u\|_2\|\tilde v\|_2.
\]

Since $\tilde u$ and $\tilde v$ are the left and right eigenvectors of $C(\lambda)$ given in (6.14), we have
\[
\sqrt{1+\frac{\sigma_{\min}^2(C)}{|\lambda|^2}} \le \|\tilde u\|_2 \le \sqrt{1+\frac{\sigma_{\max}^2(C)}{|\lambda|^2}},\qquad
\sqrt{1+\frac{\sigma_{\min}^2(B)}{|\lambda|^2}} \le \|\tilde v\|_2 \le \sqrt{1+\frac{\sigma_{\max}^2(B)}{|\lambda|^2}}.
\]
Consequently,
\[
\frac{|\lambda|}{\sqrt{1+|\lambda|^2}}\left[\left(1+\frac{\sigma_{\min}^2(B)}{|\lambda|^2}\right)\left(1+\frac{\sigma_{\min}^2(C)}{|\lambda|^2}\right)\right]^{1/2}
\le \frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,Q)}
\le \left[\left(1+\frac{\sigma_{\max}^2(B)}{|\lambda|^2}\right)\left(1+\frac{\sigma_{\max}^2(C)}{|\lambda|^2}\right)\right]^{1/2}.
\]

Theorem 6.2.9 ([15]). Let $\lambda$ be a simple, finite, and nonzero eigenvalue of the quadratic matrix polynomial $Q(\lambda)$ given in (6.11), and let $L$ be the corresponding companion linearization given in (6.12). Then
\[
\|u\|_2 \le \frac{\operatorname{cond}(\lambda,L)}{\operatorname{cond}(\lambda,Q)} \le \frac{2}{\sqrt{3}}\,\|u\|_2.
\]

Theorem 6.2.10. Let $G(\lambda)$ be given in (6.13) and $C(\lambda)$ be the corresponding companion linearization given in (6.14). Let $\lambda$ be a simple, finite, and nonzero eigenvalue. Then
\[
\frac{\bigl[(|\lambda|^2+\sigma_{\min}^2(B))(|\lambda|^2+\sigma_{\min}^2(C))\bigr]^{1/2}}{\bigl[(|\lambda|^2+\sigma_{\max}^2(B))(|\lambda|^2+\sigma_{\max}^2(C))\bigr]^{1/2}}
\le \frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,G)}
\le \frac{\bigl[(|\lambda|^2+\sigma_{\max}^2(B))(|\lambda|^2+\sigma_{\max}^2(C))\bigr]^{1/2}}{\bigl[|\lambda|^4+\sigma_{\min}^2(B)\,\sigma_{\min}^2(C)\bigr]^{1/2}}.
\]

Proof. By Theorem 6.2.7, we have
\[
\hat s\,\|\tilde u\|_2\|\tilde v\|_2 \le \frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,G)} \le s_1\,\|\tilde u\|_2\|\tilde v\|_2,
\]
where
\begin{align}
\hat s &= \frac{\|(1,\lambda)\|_2}{\bigl[\|(1,\lambda)\|_2^2+\sigma_{\max}^2(\lambda^{-1}B)+\sigma_{\max}^2(\lambda^{-1}C)+\|(1,\lambda)\|_2^2\,\sigma_{\max}^2(\lambda^{-1}B)\,\sigma_{\max}^2(\lambda^{-1}C)\bigr]^{1/2}},\nonumber\\
s_1 &= \frac{\|(1,\lambda)\|_2}{\bigl[\|(1,\lambda)\|_2^2+\sigma_{\min}^2(\lambda^{-1}B)+\sigma_{\min}^2(\lambda^{-1}C)+\|(1,\lambda)\|_2^2\,\sigma_{\min}^2(\lambda^{-1}B)\,\sigma_{\min}^2(\lambda^{-1}C)\bigr]^{1/2}}. \tag{6.15}
\end{align}
Now, since $\|(1,\lambda)\|_2^2 \ge 1$,
\begin{align*}
\hat s &= \frac{1}{\Bigl[1+\frac{\sigma_{\max}^2(\lambda^{-1}B)+\sigma_{\max}^2(\lambda^{-1}C)}{\|(1,\lambda)\|_2^2}+\sigma_{\max}^2(\lambda^{-1}B)\,\sigma_{\max}^2(\lambda^{-1}C)\Bigr]^{1/2}}\\
&\ge \frac{1}{\bigl[1+\sigma_{\max}^2(\lambda^{-1}B)+\sigma_{\max}^2(\lambda^{-1}C)+\sigma_{\max}^2(\lambda^{-1}B)\,\sigma_{\max}^2(\lambda^{-1}C)\bigr]^{1/2}}
= \frac{1}{\bigl[(1+\sigma_{\max}^2(\lambda^{-1}B))(1+\sigma_{\max}^2(\lambda^{-1}C))\bigr]^{1/2}}
\end{align*}
and
\[
\|\tilde u\|_2\|\tilde v\|_2 \ge \bigl[(1+\sigma_{\min}^2(\lambda^{-1}B))(1+\sigma_{\min}^2(\lambda^{-1}C))\bigr]^{1/2}.
\]
Hence
\[
\frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,G)}
\ge \frac{\bigl[(1+\sigma_{\min}^2(\lambda^{-1}B))(1+\sigma_{\min}^2(\lambda^{-1}C))\bigr]^{1/2}}{\bigl[(1+\sigma_{\max}^2(\lambda^{-1}B))(1+\sigma_{\max}^2(\lambda^{-1}C))\bigr]^{1/2}}
= \frac{\bigl[(|\lambda|^2+\sigma_{\min}^2(B))(|\lambda|^2+\sigma_{\min}^2(C))\bigr]^{1/2}}{\bigl[(|\lambda|^2+\sigma_{\max}^2(B))(|\lambda|^2+\sigma_{\max}^2(C))\bigr]^{1/2}}.
\]
Now, from equation (6.15), we have
\[
s_1 = \frac{1}{\Bigl[1+\frac{\sigma_{\min}^2(\lambda^{-1}B)+\sigma_{\min}^2(\lambda^{-1}C)}{\|(1,\lambda)\|_2^2}+\sigma_{\min}^2(\lambda^{-1}B)\,\sigma_{\min}^2(\lambda^{-1}C)\Bigr]^{1/2}}.
\]

Note that, with $\alpha = \sigma_{\min}^2(\lambda^{-1}B)\,\sigma_{\min}^2(\lambda^{-1}C)$ and $\beta = \sigma_{\min}^2(\lambda^{-1}B)+\sigma_{\min}^2(\lambda^{-1}C)$, we have $s_1 = f(\|(1,\lambda)\|_2)$, where $f(t) := \dfrac{t}{[(1+\alpha)t^2+\beta]^{1/2}}$ is a strictly increasing function on $[0,\infty)$ and $f(t)\to \frac{1}{(1+\alpha)^{1/2}}$ as $t\to\infty$. Hence, for $t \ge 1$,
\[
\frac{1}{(1+\alpha+\beta)^{1/2}} \le f(t) \le \frac{1}{(1+\alpha)^{1/2}}.
\]
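The monotonicity and limit claims for $f(t)=t/[(1+\alpha)t^2+\beta]^{1/2}$ can be spot-checked numerically; the values of $\alpha$ and $\beta$ below are arbitrary positive samples of our own choosing.

```python
import numpy as np

alpha, beta = 0.7, 1.3                      # arbitrary positive parameters
f = lambda t: t / np.sqrt((1 + alpha) * t**2 + beta)

ts = np.linspace(1.0, 200.0, 2000)          # t = ||(1, lambda)||_2 >= 1
vals = f(ts)
assert np.all(np.diff(vals) > 0)                          # strictly increasing
assert abs(f(1.0) - 1 / np.sqrt(1 + alpha + beta)) < 1e-12  # value at t = 1
assert abs(f(ts[-1]) - 1 / np.sqrt(1 + alpha)) < 1e-4       # limit as t -> infinity
print("f increasing and bounded as claimed")
```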

This shows that
\[
\frac{1}{\bigl[1+\sigma_{\min}^2(\lambda^{-1}B)\,\sigma_{\min}^2(\lambda^{-1}C)+\sigma_{\min}^2(\lambda^{-1}B)+\sigma_{\min}^2(\lambda^{-1}C)\bigr]^{1/2}}
\le s_1 \le
\frac{1}{\bigl[1+\sigma_{\min}^2(\lambda^{-1}B)\,\sigma_{\min}^2(\lambda^{-1}C)\bigr]^{1/2}}.
\]
Now
\[
\|\tilde u\|_2\|\tilde v\|_2 \le \sqrt{(1+\sigma_{\max}^2(\lambda^{-1}B))(1+\sigma_{\max}^2(\lambda^{-1}C))}.
\]
Consequently,
\[
\frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,G)}
\le \frac{\bigl[(1+\sigma_{\max}^2(\lambda^{-1}B))(1+\sigma_{\max}^2(\lambda^{-1}C))\bigr]^{1/2}}{\bigl[1+\sigma_{\min}^2(\lambda^{-1}B)\,\sigma_{\min}^2(\lambda^{-1}C)\bigr]^{1/2}}
= \frac{\bigl[(|\lambda|^2+\sigma_{\max}^2(B))(|\lambda|^2+\sigma_{\max}^2(C))\bigr]^{1/2}}{\bigl[|\lambda|^4+\sigma_{\min}^2(B)\,\sigma_{\min}^2(C)\bigr]^{1/2}}.
\]
This completes the proof.

Theorem 6.2.11. Let $G(\lambda)$ be given in (6.13) and $C(\lambda)$ be the corresponding companion linearization given in (6.14). Let $\lambda$ be a simple, finite, and nonzero eigenvalue. Then
\[
1 \le \frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,G)} \le
\left[1+\frac{\frac{1}{|\lambda|^2}\|Bx\|_2^2+\frac{1}{|\lambda|^2}\|C^*y\|_2^2}{1+\frac{1}{|\lambda|^2}\|Bx\|_2^2\,\frac{1}{|\lambda|^2}\|C^*y\|_2^2}\right]^{1/2}.
\]

Proof. Let $x$ and $y$ be the right and left eigenvectors of $G(\lambda)$ corresponding to the eigenvalue $\lambda$, with $\|x\|_2 = 1$ and $\|y\|_2 = 1$. For brevity write $a = \frac{1}{|\lambda|^2}\|Bx\|_2^2$ and $b = \frac{1}{|\lambda|^2}\|C^*y\|_2^2$. Then
\[
\frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,G)}
= \frac{\|(1,\lambda)\|_2\,\|\tilde u\|_2\|\tilde v\|_2}{\bigl[\|(1,\lambda)\|_2^2+a+b+(1+|\lambda|^2)\,ab\bigr]^{1/2}}.
\]
Now $\|\tilde v\|_2 = \bigl[\|x\|_2^2+\|\lambda^{-1}Bx\|_2^2\bigr]^{1/2} = (1+a)^{1/2}$ and $\|\tilde u\|_2 = \bigl[\|y\|_2^2+\|(\lambda^{-1}C)^*y\|_2^2\bigr]^{1/2} = (1+b)^{1/2}$, so, using $\|(1,\lambda)\|_2^2 = 1+|\lambda|^2$,
\[
\frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,G)}
= \frac{(1+a)^{1/2}(1+b)^{1/2}}{\Bigl[1+\dfrac{a+b}{\|(1,\lambda)\|_2^2}+ab\Bigr]^{1/2}}. \tag{6.16}
\]
Since $\|(1,\lambda)\|_2^2 \ge 1$, the denominator in (6.16) is at most $[1+a+b+ab]^{1/2} = (1+a)^{1/2}(1+b)^{1/2}$, and hence
\[
\frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,G)} \ge \frac{(1+a)^{1/2}(1+b)^{1/2}}{\bigl[1+a+b+ab\bigr]^{1/2}} = 1.
\]
So $\dfrac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,G)} \ge 1$.

Again from equation (6.16), dropping the middle term of the denominator, we have
\[
\frac{\operatorname{cond}(\lambda,C)}{\operatorname{cond}(\lambda,G)}
\le \frac{(1+a)^{1/2}(1+b)^{1/2}}{(1+ab)^{1/2}}
= \left[\frac{1+a+b+ab}{1+ab}\right]^{1/2}
= \left[1+\frac{a+b}{1+ab}\right]^{1/2}
= \left[1+\frac{\frac{1}{|\lambda|^2}\|Bx\|_2^2+\frac{1}{|\lambda|^2}\|C^*y\|_2^2}{1+\frac{1}{|\lambda|^2}\|Bx\|_2^2\,\frac{1}{|\lambda|^2}\|C^*y\|_2^2}\right]^{1/2}.
\]
This completes the proof.

The following result compares the condition number of Q(λ) given in (6.11) with that of G(λ) given in (6.13).

Theorem 6.2.12. Let $Q(\lambda)$ be given in (6.11) and $G(\lambda)$ be given in (6.13). Let $\lambda$ be a simple, finite, and nonzero eigenvalue. Then
\[
\frac{\sqrt{\sigma_{\min}^2(C)+\sigma_{\min}^2(B)}}{\|(1,\lambda,\lambda^2)\|_2}
\le \frac{\operatorname{cond}(\lambda,G)}{\operatorname{cond}(\lambda,Q)}
\le \bigl[(1+\sigma_{\max}^2(\lambda^{-1}B))(1+\sigma_{\max}^2(\lambda^{-1}C))\bigr]^{1/2}.
\]

Proof. We have
\[
\frac{\operatorname{cond}(\lambda,G)}{\operatorname{cond}(\lambda,Q)}
= \frac{\bigl[\|(1,\lambda)\|_2^2+\|\lambda^{-1}Bx\|_2^2+\|(\lambda^{-1}C)^*y\|_2^2+(1+|\lambda|^2)\,\|\lambda^{-1}Bx\|_2^2\,\|(\lambda^{-1}C)^*y\|_2^2\bigr]^{1/2}}{\|(1,\lambda,\lambda^2)\|_2\,\|x\|_2\|y\|_2}\cdot\frac{|y^*Q'(\lambda)x|}{|y^*G'(\lambda)x|} \tag{6.17}
\]
\[
= \frac{\bigl[\|(1,\lambda)\|_2^2+\|\lambda^{-1}Bx\|_2^2+\|(\lambda^{-1}C)^*y\|_2^2+(1+|\lambda|^2)\,\|\lambda^{-1}Bx\|_2^2\,\|(\lambda^{-1}C)^*y\|_2^2\bigr]^{1/2}}{\|(1,\lambda,\lambda^2)\|_2}\,|\lambda|.
\]
Hence
\[
s \le \frac{\operatorname{cond}(\lambda,G)}{\operatorname{cond}(\lambda,Q)} \le s_1,
\]
where
\begin{align*}
s &= \frac{|\lambda|\bigl[\|(1,\lambda)\|_2^2+\sigma_{\min}^2(\lambda^{-1}B)+\sigma_{\min}^2(\lambda^{-1}C)+(1+|\lambda|^2)\,\sigma_{\min}^2(\lambda^{-1}B)\,\sigma_{\min}^2(\lambda^{-1}C)\bigr]^{1/2}}{\|(1,\lambda,\lambda^2)\|_2},\\
s_1 &= \frac{|\lambda|\bigl[\|(1,\lambda)\|_2^2+\sigma_{\max}^2(\lambda^{-1}B)+\sigma_{\max}^2(\lambda^{-1}C)+(1+|\lambda|^2)\,\sigma_{\max}^2(\lambda^{-1}B)\,\sigma_{\max}^2(\lambda^{-1}C)\bigr]^{1/2}}{\|(1,\lambda,\lambda^2)\|_2}\\
&= \frac{|\lambda|\,\|(1,\lambda)\|_2}{\|(1,\lambda,\lambda^2)\|_2}\Bigl[1+\frac{\sigma_{\max}^2(\lambda^{-1}B)+\sigma_{\max}^2(\lambda^{-1}C)}{\|(1,\lambda)\|_2^2}+\sigma_{\max}^2(\lambda^{-1}B)\,\sigma_{\max}^2(\lambda^{-1}C)\Bigr]^{1/2}\\
&\le \frac{|\lambda|\,\|(1,\lambda)\|_2}{\|(1,\lambda,\lambda^2)\|_2}\bigl[1+\sigma_{\max}^2(\lambda^{-1}B)+\sigma_{\max}^2(\lambda^{-1}C)+\sigma_{\max}^2(\lambda^{-1}B)\,\sigma_{\max}^2(\lambda^{-1}C)\bigr]^{1/2}\\
&= \frac{|\lambda|\,\|(1,\lambda)\|_2}{\|(1,\lambda,\lambda^2)\|_2}\bigl[(1+\sigma_{\max}^2(\lambda^{-1}B))(1+\sigma_{\max}^2(\lambda^{-1}C))\bigr]^{1/2}.
\end{align*}
Now
\[
\frac{|\lambda|\,\|(1,\lambda)\|_2}{\|(1,\lambda,\lambda^2)\|_2} = \frac{|\lambda|\sqrt{1+|\lambda|^2}}{\sqrt{1+|\lambda|^2+|\lambda|^4}} = \left[\frac{|\lambda|^2+|\lambda|^4}{1+|\lambda|^2+|\lambda|^4}\right]^{1/2} \le 1.
\]
So
\[
\frac{\operatorname{cond}(\lambda,G)}{\operatorname{cond}(\lambda,Q)} \le \bigl[(1+\sigma_{\max}^2(\lambda^{-1}B))(1+\sigma_{\max}^2(\lambda^{-1}C))\bigr]^{1/2}.
\]
Now
\[
s = \frac{\|(\lambda,\lambda^2)\|_2}{\|(1,\lambda,\lambda^2)\|_2}\Bigl[1+\frac{\sigma_{\min}^2(\lambda^{-1}B)+\sigma_{\min}^2(\lambda^{-1}C)}{\|(1,\lambda)\|_2^2}+\sigma_{\min}^2(\lambda^{-1}B)\,\sigma_{\min}^2(\lambda^{-1}C)\Bigr]^{1/2},
\]
and dropping the first and last terms under the square root gives
\[
s \ge \frac{\|(\lambda,\lambda^2)\|_2}{\|(1,\lambda,\lambda^2)\|_2}\cdot\frac{\sqrt{\sigma_{\min}^2(B)+\sigma_{\min}^2(C)}}{|\lambda|\,\|(1,\lambda)\|_2}
= \frac{\sqrt{\sigma_{\min}^2(C)+\sigma_{\min}^2(B)}}{\|(1,\lambda,\lambda^2)\|_2}.
\]
Thus
\[
\frac{\sqrt{\sigma_{\min}^2(C)+\sigma_{\min}^2(B)}}{\|(1,\lambda,\lambda^2)\|_2}
\le \frac{\operatorname{cond}(\lambda,G)}{\operatorname{cond}(\lambda,Q)}
\le \bigl[(1+\sigma_{\max}^2(\lambda^{-1}B))(1+\sigma_{\max}^2(\lambda^{-1}C))\bigr]^{1/2}.
\]

Example 6.5 (Loaded elastic string, [37]). Consider the rational eigenvalue problem
\[
\Bigl(A_1-\lambda A_2+\frac{\lambda}{\lambda-\sigma}E\Bigr)x = 0, \tag{6.18}
\]
where $A_1$ and $A_2$ are tridiagonal and symmetric positive definite, $E = e_ne_n^T$ with $e_n$ the last column of the identity matrix, and $\sigma > 0$ is a parameter. Then we have

\[
G(\lambda)x = \Bigl[A_1+e_ne_n^T-\lambda A_2+e_n\bigl(\tfrac{\lambda}{\sigma}-1\bigr)^{-1}e_n^T\Bigr]x = 0 \tag{6.19}
\]
and
\[
C_1(\lambda)v = (\lambda X-Y)v
= \left(\lambda\begin{bmatrix} A_2 & 0\\ 0 & \frac{1}{\sigma}\end{bmatrix}-\begin{bmatrix} A_1+e_ne_n^T & e_n\\ e_n^T & 1\end{bmatrix}\right)\begin{bmatrix} x\\ s\end{bmatrix} = 0,
\qquad s = \bigl(\tfrac{\lambda}{\sigma}-1\bigr)^{-1}e_n^Tx. \tag{6.20}
\]

The quadratic eigenvalue problem corresponding to (6.18) is given by
\[
Q(\lambda)x = \bigl[\lambda^2A_2-\lambda(A_1+\sigma A_2+e_ne_n^T)+\sigma A_1\bigr]x = 0, \qquad \sigma > 0. \tag{6.21}
\]
Here
\[
\sigma_{\min}\Bigl(\bigl(\tfrac{\lambda}{\sigma}-1\bigr)^{-1}e_n^T\Bigr)
= \sigma_{\max}\Bigl(\bigl(\tfrac{\lambda}{\sigma}-1\bigr)^{-1}e_n^T\Bigr)
= \frac{1}{|\lambda/\sigma-1|}
= \sigma_{\min}\Bigl(e_n\bigl(\tfrac{\lambda}{\sigma}-1\bigr)^{-1}\Bigr)
= \sigma_{\max}\Bigl(e_n\bigl(\tfrac{\lambda}{\sigma}-1\bigr)^{-1}\Bigr).
\]

By Theorem 6.2.7, since the minimum and maximum singular values above coincide, we have $s = s_1$. So
\[
\frac{\operatorname{cond}(\lambda,C_1)}{\operatorname{cond}(\lambda,G)} = s\,\frac{\|u\|_2}{\|y\|_2}\,\frac{\|v\|_2}{\|x\|_2},
\]
where
\begin{align*}
s &= \frac{\|(1,\lambda)\|_q}{\bigl[\|(1,\lambda)\|_q^q+|\lambda/\sigma-1|^{-q}+|\lambda/\sigma-1|^{-q}+(1+|\lambda|^q)\,|\lambda/\sigma-1|^{-2q}\bigr]^{1/q}}\\
&= \frac{(1+|\lambda|^q)^{1/q}}{\Bigl[1+|\lambda|^q+\frac{\sigma^q}{|\lambda-\sigma|^q}+\frac{\sigma^q}{|\lambda-\sigma|^q}+(1+|\lambda|^q)\frac{\sigma^{2q}}{|\lambda-\sigma|^{2q}}\Bigr]^{1/q}}\\
&= \frac{(1+|\lambda|^q)^{1/q}}{\Bigl[(1+|\lambda|^q)+\frac{2\sigma^q}{|\lambda-\sigma|^q}+(1+|\lambda|^q)\frac{\sigma^{2q}}{|\lambda-\sigma|^{2q}}\Bigr]^{1/q}} \le 1
\end{align*}
and
\[
1 \le \frac{\|u\|_2}{\|y\|_2}\,\frac{\|v\|_2}{\|x\|_2}
\le \sigma_{\max}\!\left(\begin{bmatrix} I_n\\ \bigl(e_n(\lambda/\sigma-1)^{-1}\bigr)^{*}\end{bmatrix}\right)
\sigma_{\max}\!\left(\begin{bmatrix} I_n\\ (\lambda/\sigma-1)^{-1}e_n^T\end{bmatrix}\right)
\le 1+\frac{1}{|\lambda/\sigma-1|^2}.
\]
This shows that
\[
\frac{\operatorname{cond}(\lambda,C_1)}{\operatorname{cond}(\lambda,G)} \le 1+\frac{1}{|\lambda/\sigma-1|^2}.
\]
Next, considering $G(\lambda)$ and $Q(\lambda)$ given in (6.19) and (6.21), respectively, by Theorem 6.2.12 we have
\[
\frac{\operatorname{cond}(\lambda,G)}{\operatorname{cond}(\lambda,Q)} \le 1+\frac{\sigma^2}{|\lambda-\sigma|^2}.
\]
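Multiplying (6.18) by $-(\lambda-\sigma)$ recovers the quadratic form (6.21), i.e. $Q(\lambda) = -(\lambda-\sigma)\bigl(A_1-\lambda A_2+\frac{\lambda}{\lambda-\sigma}E\bigr)$, so the two problems share eigenpairs. This can be spot-checked numerically; the tridiagonal SPD matrices below are our own placeholders, not the physical discretization used in [37].

```python
import numpy as np

n, sigma = 6, 0.5
# Placeholder symmetric positive definite tridiagonal matrices.
A1 = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
A2 = np.diag(4.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
E = np.zeros((n, n)); E[-1, -1] = 1.0   # E = e_n e_n^T

lam = 1.234  # arbitrary test point with lam != sigma and lam != 0
R = A1 - lam * A2 + (lam / (lam - sigma)) * E               # rational form (6.18)
Q = lam**2 * A2 - lam * (A1 + sigma * A2 + E) + sigma * A1  # quadratic form (6.21)

# Q(lam) = -(lam - sigma) * R(lam), so the residual below is at roundoff level.
print(np.linalg.norm(Q + (lam - sigma) * R))  # close to 0
```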

7 Conclusions

In this thesis, we have developed a framework for the solution of rational eigenvalue problems. We have achieved this goal by reformulating a rational eigenvalue problem $G(\lambda)u = 0$ as the problem of computing transmission zeros and zero directions of a linear time-invariant (LTI) system $\Sigma$ given by
\[
E\dot{x}(t) = Ax(t)+Bu(t), \qquad y(t) = Cx(t)+P\bigl(\tfrac{d}{dt}\bigr)u(t),
\]
for which $G(\lambda)$ is the transfer function. Since the eigenvalues of $G(\lambda)$ form a subset of the transmission zeros of the LTI system $\Sigma$, and the transmission zeros form a subset of the invariant zeros of the system $\Sigma$, we have focused on computing invariant zeros and zero directions of the LTI system, which are in fact eigenvalues and eigenvectors of the Rosenbrock system matrix $S(\lambda)$ given by
\[
S(\lambda) = \begin{bmatrix} P(\lambda) & C\\ B & A-\lambda E\end{bmatrix}.
\]

Hence we have developed a framework for computing eigenvalues and eigenvectors of the Rosenbrock system matrix $S(\lambda)$. For this purpose, we have introduced three families of linearizations of the Rosenbrock system matrix $S(\lambda)$, which we referred to as Fiedler pencils, generalized Fiedler (GF) pencils, and generalized Fiedler pencils with repetition (GFPR). Thus the invariant zeros of the LTI system $\Sigma$ can be computed by solving generalized eigenvalue problems for the Fiedler pencils.

We have described the construction of Fiedler pencils of the Rosenbrock system matrix $S(\lambda)$ and have shown that these Fiedler pencils are linearizations for $S(\lambda)$. Further, we have shown that the Fiedler pencils of $S(\lambda)$ are linearizations of $G(\lambda)$ when the LTI system $\Sigma$ is a minimal realization of the transfer function $G(\lambda)$. Furthermore, we have shown that GF pencils and GFPRs can be utilized to obtain structure-preserving (such as symmetric and Hermitian) linearizations of the Rosenbrock system matrix.

We have also described the recovery of eigenvectors of $G(\lambda)$ from those of the Fiedler pencils of the system matrix $S(\lambda)$. In fact, we have shown that the eigenvectors of $G(\lambda)$ can be recovered from those of the Fiedler pencils of $S(\lambda)$ without incurring any additional computational cost. Further, we have shown that the state-space framework so developed can be utilized to linearize a higher-order LTI state-space system so as to analyze and solve the higher-order LTI system. Furthermore, we have shown that the linearized systems so obtained are strictly system equivalent to the higher-order systems and hence preserve the system characteristics of the original systems.

We have also developed a framework for sensitivity analysis of eigenvalues of the system matrix $S(\lambda)$ and the transfer function $G(\lambda)$. We have defined condition numbers for simple eigenvalues of $S(\lambda)$ and $G(\lambda)$, and have obtained explicit computable expressions for these condition numbers. We have also analyzed the effect of linearization on the conditioning of the eigenvalues of the system matrix $S(\lambda)$.

References

[1] Sk. Safique Ahmad and Rafikul Alam. On the conditioning of polynomial eigenproblems. Preprint.

[2] Sk. Safique Ahmad and Rafikul Alam. Pseudospectra, critical points and multiple eigenvalues of matrix polynomials. Linear Algebra Appl., 430(4):1171–1195, 2009.

[3] E. N. Antoniou and S. Vologiannidis. A new family of companion forms of polynomial matrices. Electron. J. Linear Algebra, 11:78–87, 2004.

[4] Athanasios C. Antoulas. Approximation of large-scale dynamical systems. SIAM, Philadelphia, PA, 2005.

[5] Panos J. Antsaklis and Anthony N. Michel. Linear systems. Birkhäuser Boston Inc., Boston, MA, 2006.

[6] María I. Bueno, Fernando De Terán, and Froilán M. Dopico. Recovery of eigenvectors and minimal bases of matrix polynomials from generalized Fiedler linearizations. SIAM J. Matrix Anal. Appl., 32(2):463–483, 2011.

[7] C. Conca, J. Planchard, and M. Vanninathan. Existence and location of eigenvalues for fluid-solid structures. Comput. Methods Appl. Mech. Engrg., 77(3):253–291, 1989.

[8] Biswa Nath Datta. Numerical methods for linear control systems. Elsevier Academic Press, San Diego, CA, 2004.

[9] Fernando De Terán, Froilán M. Dopico, and D. Steven Mackey. Fiedler companion linearizations and the recovery of minimal indices. SIAM J. Matrix Anal. Appl., 31(4):2181–2204, 2009/10.

[10] D. S. Mackey. Structured linearizations for matrix polynomials. PhD thesis, The University of Manchester, UK, 2006.

[11] I. Gohberg, P. Lancaster, and L. Rodman. Matrix polynomials. Academic Press, Inc., New York, 1982.

[12] Israel Gohberg, Peter Lancaster, and Leiba Rodman. Invariant subspaces of matrices with applications. SIAM, Philadelphia, PA, 2006.

[13] João P. Hespanha. Linear systems theory. Princeton University Press, Princeton, NJ, 2009.

[14] Nicholas J. Higham, D. Steven Mackey, Niloufer Mackey, and Françoise Tisseur. Symmetric linearizations for matrix polynomials. SIAM J. Matrix Anal. Appl., 29(1):143–159 (electronic), 2006/07.

[15] Nicholas J. Higham, D. Steven Mackey, and Françoise Tisseur. The conditioning of linearizations of matrix polynomials. SIAM J. Matrix Anal. Appl., 28(4):1005–1028, 2006.

[16] Kenneth Hoffman and Ray Kunze. Linear algebra. Second edition. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1971.

[17] Tsung-Min Hwang, Wen-Wei Lin, Wei-Cheng Wang, and Weichung Wang. Numerical simulation of three dimensional pyramid quantum dot. J. Comput. Phys., 196(1):208–232, 2004.

[18] María I. Bueno and Fernando De Terán. Eigenvectors and minimal bases for some families of Fiedler-like linearizations. Linear and Multilinear Algebra, pages 1–24, 2013.

[19] Viatcheslav I. Sokolov. Contributions to the minimal realization problem for descriptor systems. PhD thesis, TU Chemnitz, 2006.

[20] Elias Jarlebring and Heinrich Voss. Rational Krylov for nonlinear eigenproblems, an iterative projection method. Appl. Math., 50(6):543–554, 2005.

[21] Thomas Kailath. Linear systems. Prentice-Hall Inc., Englewood Cliffs, N.J., 1980.

[22] Uwe Mackenroth. Robust control systems. Springer-Verlag, Berlin, 2004.

[23] D. Steven Mackey, Niloufer Mackey, Christian Mehl, and Volker Mehrmann. Structured polynomial eigenvalue problems: good vibrations from good linearizations. SIAM J. Matrix Anal. Appl., 28(4):1029–1051, 2006.

[24] D. Steven Mackey, Niloufer Mackey, Christian Mehl, and Volker Mehrmann. Vector spaces of linearizations for matrix polynomials. SIAM J. Matrix Anal. Appl., 28(4):971–1004, 2006.

[25] Lada Mazurenko and Heinrich Voss. On the number of eigenvalues of a rational eigenproblem. SIAM J. Numer. Anal., 2003.

[26] Lada Mazurenko and Heinrich Voss. Low rank rational perturbations of linear symmetric eigenproblems. ZAMM Z. Angew. Math. Mech., 86(8):606–616, 2006.

[27] Volker Mehrmann and Heinrich Voss. Nonlinear eigenvalue problems: a challenge for modern eigenvalue methods. GAMM Mitt. Ges. Angew. Math. Mech., 27(2):121–152 (2005), 2004.

[28] Wim Michiels and Silviu-Iulian Niculescu. Stability and stabilization of time-delay systems. SIAM, Philadelphia, PA, 2007.

[29] J. Planchard. Eigenfrequencies of a tube bundle placed in a confined fluid. Comput. Methods Appl. Mech. Engrg., 30(1):75–93, 1982.

[30] H. H. Rosenbrock. State-space and multivariable theory. John Wiley & Sons, Inc., New York, 1970.

[31] Axel Ruhe. Algorithms for the nonlinear eigenvalue problem. SIAM J. Numer. Anal., 10:674–689, 1973.

[32] R. A. Smith. The condition numbers of the matrix eigenvalue problem. Numer. Math., 10:232–240, 1967.

[33] S. I. Solov'ëv. Eigenvibrations of a plate with elastically attached load, 2003.

[34] Sergey I. Solov'ëv. Preconditioned iterative methods for a class of nonlinear eigenvalue problems. Linear Algebra Appl., 415(1):210–229, 2006.

[35] Willi-Hans Steeb. Matrix calculus and Kronecker product with applications and C++ programs. World Scientific Publishing Co., Inc., River Edge, NJ, 1997.

[36] G. W. Stewart and Ji Guang Sun. Matrix perturbation theory. Academic Press Inc., Boston, MA, 1990.

[37] Yangfeng Su and Zhaojun Bai. Solving rational eigenvalue problems via linearization. SIAM J. Matrix Anal. Appl., 32(1):201–216, 2011.

[38] Timo Betcke, Nicholas J. Higham, Volker Mehrmann, Christian Schröder, and Françoise Tisseur. NLEVP: a collection of nonlinear eigenvalue problems, 2013.
