As a consequence of Theorem 6.2.4 and Theorem 5.5.8, we have the following result for minimal bases and minimal indices of G(λ).
Theorem 6.2.5. Consider the PGF pencil $T_\omega(\lambda) := \lambda M^S_{-\omega_1} - M^S_{\omega_0}$ of $G(\lambda)$ associated with a proper permutation $\omega := (\omega_0, \omega_1)$ of $\{0,1,\ldots,m\}$. Suppose that $\mathrm{CIP}(\omega_0) = (c_0, i_0)$. Let $\omega_1$ be given by $\omega_1 := (\sigma_1, m, \sigma_2)$. Let $i_T := i(\mathrm{rev}(\sigma_1), \omega_0, \mathrm{rev}(\sigma_2))$ and $c_T := c(\mathrm{rev}(\sigma_1), \omega_0, \mathrm{rev}(\sigma_2))$ be the total numbers of inversions and consecutions of $(\mathrm{rev}(\sigma_1), \omega_0, \mathrm{rev}(\sigma_2))$, respectively. Let $w_i(\lambda) := \begin{bmatrix} u_i(\lambda) \\ v_i(\lambda) \end{bmatrix} \in \mathbb{C}[\lambda]^{mn+r}$, where $u_i(\lambda) \in \mathbb{C}[\lambda]^{mn}$ and $v_i(\lambda) \in \mathbb{C}[\lambda]^r$ for $i = 1:p$. Then we have the following.
(a) Right minimal bases. If $\{w_1(\lambda), \ldots, w_p(\lambda)\}$ is a right minimal basis of $T_\omega(\lambda)$ then $\{(e^T_{m-c_0} \otimes I_n)u_1(\lambda), \ldots, (e^T_{m-c_0} \otimes I_n)u_p(\lambda)\}$ is a right minimal basis of $G(\lambda)$. Further, if $\varepsilon_1 \le \cdots \le \varepsilon_p$ are the right minimal indices of $T_\omega(\lambda)$ then $\varepsilon_1 - i_T \le \cdots \le \varepsilon_p - i_T$ are the right minimal indices of $G(\lambda)$.

(b) Left minimal bases. If $\{w_1(\lambda), \ldots, w_p(\lambda)\}$ is a left minimal basis of $T_\omega(\lambda)$ then $\{(e^T_{m-i_0} \otimes I_n)u_1(\lambda), \ldots, (e^T_{m-i_0} \otimes I_n)u_p(\lambda)\}$
is a left minimal basis of $G(\lambda)$. Further, if $\eta_1 \le \cdots \le \eta_p$ are the left minimal indices of $T_\omega(\lambda)$ then $\eta_1 - c_T \le \cdots \le \eta_p - c_T$ are the left minimal indices of $G(\lambda)$.

Theorem 6.3.2. Let $L(\lambda) := M^S_{\tau_1} M^S_{\sigma_1}(\lambda M^S_\tau - M^S_\sigma) M^S_{\sigma_2} M^S_{\tau_2}$ and $\mathbb{L}(\lambda) := M^P_{\tau_1} M^P_{\sigma_1}(\lambda M^P_\tau - M^P_\sigma) M^P_{\sigma_2} M^P_{\tau_2}$ be FPRs of $G(\lambda)$ and $P(\lambda)$, respectively. Then
$$L(\lambda) = \begin{bmatrix} \mathbb{L}(\lambda) & e_{m-i_0(\sigma_1,\sigma)} \otimes C \\ e^T_{m-c_0(\sigma,\sigma_2)} \otimes B & A - \lambda E \end{bmatrix}.$$
Thus, the map $\mathrm{FPR}(P) \longrightarrow \mathrm{FPR}(G)$,
$$\mathbb{L}(\lambda) \longmapsto \begin{bmatrix} \mathbb{L}(\lambda) & e_{m-i_0(\sigma_1,\sigma)} \otimes C \\ e^T_{m-c_0(\sigma,\sigma_2)} \otimes B & A - \lambda E \end{bmatrix},$$
is a bijection, where $\mathrm{FPR}(P)$ and $\mathrm{FPR}(G)$ denote the sets of FPRs of $P(\lambda)$ and $G(\lambda)$, respectively.
Proof. We will prove a more general result in Theorem 7.2.4.
We now describe the recovery of minimal bases and minimal indices of $S(\lambda)$ from those of the FPRs of $S(\lambda)$. Recall from Definition 1.2.20 that $c_t(\alpha)$ (resp., $i_t(\alpha)$) denotes the number of consecutions (resp., inversions) of an index tuple $\alpha$ at an index $t$.
Theorem 6.3.3. Let $L(\lambda) := M^S_{\tau_1} M^S_{\sigma_1}(\lambda M^S_\tau - M^S_\sigma) M^S_{\sigma_2} M^S_{\tau_2}$ be an FPR of $S(\lambda)$. Let $\tau$ be given by $\tau := (\tau_l, -m, \tau_r)$. Set $\alpha := (-\mathrm{rev}(\tau_l), \sigma, -\mathrm{rev}(\tau_r))$. Let $c_L := c(\alpha)$ and $i_L := i(\alpha)$ be the total numbers of consecutions and inversions of the permutation $\alpha$ of $\{0,1,\ldots,m-1\}$. Then we have the following.
(I) Right minimal bases and right minimal indices.
(a) $\mathcal{F}^{\mathrm{FPR}}_L(S) : \mathcal{N}_r(L) \to \mathcal{N}_r(S)$, $\begin{bmatrix} u(\lambda) \\ v(\lambda) \end{bmatrix} \mapsto \begin{bmatrix} (e^T_{m-c_0(\sigma,\sigma_2)} \otimes I_n)u(\lambda) \\ v(\lambda) \end{bmatrix}$, is a linear isomorphism and maps a minimal basis of $\mathcal{N}_r(L)$ to a minimal basis of $\mathcal{N}_r(S)$, where $u(\lambda) \in \mathbb{C}(\lambda)^{mn}$ and $v(\lambda) \in \mathbb{C}(\lambda)^r$. Thus, if $\{w_1(\lambda), \ldots, w_p(\lambda)\}$ is a right minimal basis of $L(\lambda)$ then $\{\mathcal{F}^{\mathrm{FPR}}_L(S)w_1(\lambda), \ldots, \mathcal{F}^{\mathrm{FPR}}_L(S)w_p(\lambda)\}$ is a right minimal basis of $S(\lambda)$.

(b) If $\varepsilon_1 \le \cdots \le \varepsilon_p$ are the right minimal indices of $L(\lambda)$ then $\varepsilon_1 - i_L \le \cdots \le \varepsilon_p - i_L$ are the right minimal indices of $S(\lambda)$.
(II) Left minimal bases and left minimal indices.
(c) $\mathcal{K}^{\mathrm{FPR}}_L(S) : \mathcal{N}_l(L) \to \mathcal{N}_l(S)$, $\begin{bmatrix} u(\lambda) \\ v(\lambda) \end{bmatrix} \mapsto \begin{bmatrix} (e^T_{m-i_0(\sigma_1,\sigma)} \otimes I_n)u(\lambda) \\ v(\lambda) \end{bmatrix}$, is a linear isomorphism and maps a minimal basis of $\mathcal{N}_l(L)$ to a minimal basis of $\mathcal{N}_l(S)$, where $u(\lambda) \in \mathbb{C}(\lambda)^{mn}$ and $v(\lambda) \in \mathbb{C}(\lambda)^r$. Thus, if $\{w_1(\lambda), \ldots, w_p(\lambda)\}$ is a left minimal basis of $L(\lambda)$ then $\{\mathcal{K}^{\mathrm{FPR}}_L(S)w_1(\lambda), \ldots, \mathcal{K}^{\mathrm{FPR}}_L(S)w_p(\lambda)\}$ is a left minimal basis of $S(\lambda)$.

(d) If $\eta_1 \le \cdots \le \eta_p$ are the left minimal indices of $L(\lambda)$ then $\eta_1 - c_L \le \cdots \le \eta_p - c_L$ are the left minimal indices of $S(\lambda)$.
Proof. We have $L(\lambda) = M^S_{\tau_1} M^S_{\sigma_1} T_\omega(\lambda) M^S_{\sigma_2} M^S_{\tau_2}$, where $T_\omega(\lambda) := \lambda M^S_\tau - M^S_\sigma$ is a PGF pencil of $G(\lambda)$ associated with the permutation $\omega := (\sigma, -\tau)$ of $\{0,1,\ldots,m\}$. Since $M^S_{\sigma_2} M^S_{\tau_2}$ is nonsingular, it is easily seen that the map $M^S_{\sigma_2} M^S_{\tau_2} : \mathcal{N}_r(L) \to \mathcal{N}_r(T_\omega)$, $x(\lambda) \mapsto M^S_{\sigma_2} M^S_{\tau_2}\, x(\lambda)$, is an isomorphism and maps a minimal basis of $\mathcal{N}_r(L)$ to a minimal basis of $\mathcal{N}_r(T_\omega)$. On the other hand, by Theorem 6.2.4, $\mathcal{F}^{\mathrm{PGF}}_\omega(S) : \mathcal{N}_r(T_\omega) \to \mathcal{N}_r(S)$, $\begin{bmatrix} u(\lambda) \\ v(\lambda) \end{bmatrix} \mapsto \begin{bmatrix} (e^T_{m-c_0(\sigma)} \otimes I_n)u(\lambda) \\ v(\lambda) \end{bmatrix}$, is an isomorphism and maps a minimal basis of $\mathcal{N}_r(T_\omega)$ to a minimal basis of $\mathcal{N}_r(S)$, where $u(\lambda) \in \mathbb{C}(\lambda)^{mn}$ and $v(\lambda) \in \mathbb{C}(\lambda)^r$. Hence $\mathcal{F}^{\mathrm{PGF}}_\omega(S) M^S_{\sigma_2} M^S_{\tau_2} : \mathcal{N}_r(L) \to \mathcal{N}_r(S)$, $y(\lambda) \mapsto \big(\mathcal{F}^{\mathrm{PGF}}_\omega(S) M^S_{\sigma_2} M^S_{\tau_2}\big) y(\lambda)$, is an isomorphism and maps a minimal basis of $\mathcal{N}_r(L)$ to a minimal basis of $\mathcal{N}_r(S)$. Now, by Lemma 2.1.11, we have
$$\mathcal{F}^{\mathrm{PGF}}_\omega(S) M^S_{\sigma_2} M^S_{\tau_2} = \begin{bmatrix} (e^T_{m-c_0(\sigma)} \otimes I_n) M^P_{\sigma_2} M^P_{\tau_2} & 0 \\ 0 & I_r \end{bmatrix} = \begin{bmatrix} e^T_{m-c_0(\sigma,\sigma_2)} \otimes I_n & 0 \\ 0 & I_r \end{bmatrix} = \mathcal{F}^{\mathrm{FPR}}_L(S),$$
which yields the desired result.
Now, let $\varepsilon_1 \le \cdots \le \varepsilon_p$ be the right minimal indices of $L(\lambda)$. Since the PGF pencil $T_\omega(\lambda)$ is strictly equivalent to $L(\lambda)$, $\varepsilon_1 \le \cdots \le \varepsilon_p$ are also the right minimal indices of $T_\omega(\lambda)$. Hence by Theorem 6.2.4, $\varepsilon_1 - i_L \le \cdots \le \varepsilon_p - i_L$ are the right minimal indices of $S(\lambda)$.
For the recovery of left minimal bases, observe that $(M^S_{\tau_1} M^S_{\sigma_1})^T : \mathcal{N}_l(L) \to \mathcal{N}_l(T_\omega)$, $x(\lambda) \mapsto (M^S_{\tau_1} M^S_{\sigma_1})^T x(\lambda)$, is an isomorphism and maps a minimal basis of $\mathcal{N}_l(L)$ to a minimal basis of $\mathcal{N}_l(T_\omega)$. Again by Theorem 6.2.4, $\mathcal{K}^{\mathrm{PGF}}_\omega(S) : \mathcal{N}_l(T_\omega) \to \mathcal{N}_l(S)$, $\begin{bmatrix} u(\lambda) \\ v(\lambda) \end{bmatrix} \longmapsto \begin{bmatrix} (e^T_{m-i_0(\sigma)} \otimes I_n)u(\lambda) \\ v(\lambda) \end{bmatrix}$, is an isomorphism and maps a minimal basis of $\mathcal{N}_l(T_\omega)$ to a minimal basis of $\mathcal{N}_l(S)$, where $u(\lambda) \in \mathbb{C}(\lambda)^{mn}$ and $v(\lambda) \in \mathbb{C}(\lambda)^r$. Hence $\mathcal{K}^{\mathrm{PGF}}_\omega(S)(M^S_{\tau_1} M^S_{\sigma_1})^T : \mathcal{N}_l(L) \to \mathcal{N}_l(S)$, $y(\lambda) \mapsto \big(\mathcal{K}^{\mathrm{PGF}}_\omega(S)(M^S_{\tau_1} M^S_{\sigma_1})^T\big) y(\lambda)$, is an isomorphism and maps a minimal basis of $\mathcal{N}_l(L)$ to a minimal basis of $\mathcal{N}_l(S)$. Now,
$$\mathcal{K}^{\mathrm{PGF}}_\omega(S)(M^S_{\tau_1} M^S_{\sigma_1})^T = \begin{bmatrix} (e^T_{m-i_0(\sigma)} \otimes I_n)(M^P_{\tau_1} M^P_{\sigma_1})^T & 0 \\ 0 & I_r \end{bmatrix} = \begin{bmatrix} \big(M^P_{\tau_1} M^P_{\sigma_1}(e_{m-i_0(\sigma)} \otimes I_n)\big)^T & 0 \\ 0 & I_r \end{bmatrix},$$
and by Lemma 2.1.11, we have $M^P_{\tau_1} M^P_{\sigma_1}(e_{m-i_0(\sigma)} \otimes I_n) = e_{m-i_0(\sigma_1,\sigma)} \otimes I_n$, which yields the desired result.
Finally, let $\eta_1 \le \cdots \le \eta_p$ be the left minimal indices of $L(\lambda)$. Since the PGF pencil $T_\omega(\lambda)$ is strictly equivalent to $L(\lambda)$, $\eta_1 \le \cdots \le \eta_p$ are also the left minimal indices of $T_\omega(\lambda)$. Hence by Theorem 6.2.4, $\eta_1 - c_L \le \cdots \le \eta_p - c_L$ are the left minimal indices of $S(\lambda)$. This completes the proof.
We have the following recovery rules for eigenvectors of $S(\lambda)$ when $S(\lambda)$ is regular.
Theorem 6.3.4. Suppose that $S(\lambda)$ is regular and $\mu \in \mathbb{C}$ is an eigenvalue of $S(\lambda)$. Let $L(\lambda) := M^S_{\tau_1} M^S_{\sigma_1}(\lambda M^S_\tau - M^S_\sigma) M^S_{\sigma_2} M^S_{\tau_2}$ be an FPR of $S(\lambda)$. Then:
(a) Right eigenvectors. If $\begin{bmatrix} u \\ v \end{bmatrix} \in \mathcal{N}_r(L(\mu))$ then $\begin{bmatrix} (e^T_{m-c_0(\sigma,\sigma_2)} \otimes I_n)u \\ v \end{bmatrix} \in \mathcal{N}_r(S(\mu))$, where $u \in \mathbb{C}^{mn}$ and $v \in \mathbb{C}^r$. In fact, the mapping $\begin{bmatrix} u \\ v \end{bmatrix} \mapsto \begin{bmatrix} (e^T_{m-c_0(\sigma,\sigma_2)} \otimes I_n)u \\ v \end{bmatrix}$ is a linear isomorphism between $\mathcal{N}_r(L(\mu))$ and $\mathcal{N}_r(S(\mu))$. Thus, if $(w_1, \ldots, w_p)$ is a basis of $\mathcal{N}_r(L(\mu))$ then $(\mathcal{F}^{\mathrm{FPR}}_L(S)w_1, \ldots, \mathcal{F}^{\mathrm{FPR}}_L(S)w_p)$ is a basis of $\mathcal{N}_r(S(\mu))$, where $\mathcal{F}^{\mathrm{FPR}}_L$ is as given in Theorem 6.3.3.

(b) Left eigenvectors. If $\begin{bmatrix} u \\ v \end{bmatrix} \in \mathcal{N}_l(L(\mu))$ then $\begin{bmatrix} (e^T_{m-i_0(\sigma_1,\sigma)} \otimes I_n)u \\ v \end{bmatrix} \in \mathcal{N}_l(S(\mu))$, where $u \in \mathbb{C}^{mn}$ and $v \in \mathbb{C}^r$. In fact, the mapping $\begin{bmatrix} u \\ v \end{bmatrix} \mapsto \begin{bmatrix} (e^T_{m-i_0(\sigma_1,\sigma)} \otimes I_n)u \\ v \end{bmatrix}$ is a linear isomorphism between $\mathcal{N}_l(L(\mu))$ and $\mathcal{N}_l(S(\mu))$. Thus, if $(w_1, \ldots, w_p)$ is a basis of $\mathcal{N}_l(L(\mu))$ then $(\mathcal{K}^{\mathrm{FPR}}_L(S)w_1, \ldots, \mathcal{K}^{\mathrm{FPR}}_L(S)w_p)$ is a basis of $\mathcal{N}_l(S(\mu))$, where $\mathcal{K}^{\mathrm{FPR}}_L$ is as given in Theorem 6.3.3.

Proof. The proof of Theorem 6.3.3 carries over verbatim and yields the desired results.
As a consequence of Theorem 6.3.3 and Theorem 5.5.8, we have the following recovery rules for minimal bases and minimal indices of $G(\lambda)$ from those of the FPRs of $G(\lambda)$.
Theorem 6.3.5. Let $L(\lambda) := M^S_{\tau_1} M^S_{\sigma_1}(\lambda M^S_\tau - M^S_\sigma) M^S_{\sigma_2} M^S_{\tau_2}$ be an FPR of $G(\lambda)$. Let $\tau$ be given by $\tau := (\tau_l, -m, \tau_r)$. Set $\alpha := (-\mathrm{rev}(\tau_l), \sigma, -\mathrm{rev}(\tau_r))$. Let $c_L := c(\alpha)$ and $i_L := i(\alpha)$ be the total numbers of consecutions and inversions of the permutation $\alpha$ of $\{0,1,\ldots,m-1\}$. Let $w_i(\lambda) := \begin{bmatrix} u_i(\lambda) \\ v_i(\lambda) \end{bmatrix} \in \mathbb{C}[\lambda]^{mn+r}$, where $u_i(\lambda) \in \mathbb{C}[\lambda]^{mn}$ and $v_i(\lambda) \in \mathbb{C}[\lambda]^r$ for $i = 1:p$. Then we have the following.
(a) Right minimal bases. If $\{w_1(\lambda), \ldots, w_p(\lambda)\}$ is a right minimal basis of $L(\lambda)$ then $\{(e^T_{m-c_0(\sigma,\sigma_2)} \otimes I_n)u_1(\lambda), \ldots, (e^T_{m-c_0(\sigma,\sigma_2)} \otimes I_n)u_p(\lambda)\}$ is a right minimal basis of $G(\lambda)$. Further, if $\varepsilon_1 \le \cdots \le \varepsilon_p$ are the right minimal indices of $L(\lambda)$ then $\varepsilon_1 - i_L \le \cdots \le \varepsilon_p - i_L$ are the right minimal indices of $G(\lambda)$.
(b) Left minimal bases. If $\{w_1(\lambda), \ldots, w_p(\lambda)\}$ is a left minimal basis of $L(\lambda)$ then $\{(e^T_{m-i_0(\sigma_1,\sigma)} \otimes I_n)u_1(\lambda), \ldots, (e^T_{m-i_0(\sigma_1,\sigma)} \otimes I_n)u_p(\lambda)\}$ is a left minimal basis of $G(\lambda)$. Further, if $\eta_1 \le \cdots \le \eta_p$ are the left minimal indices of $L(\lambda)$ then $\eta_1 - c_L \le \cdots \le \eta_p - c_L$ are the left minimal indices of $G(\lambda)$.
Similarly, we have the following recovery rules for eigenvectors of G(λ) from those of the FPRs of G(λ).
Theorem 6.3.6. Suppose that $G(\lambda)$ is regular and $\mu \in \mathbb{C}$ is an eigenvalue of $G(\lambda)$. Let $L(\lambda) := M^S_{\tau_1} M^S_{\sigma_1}(\lambda M^S_\tau - M^S_\sigma) M^S_{\sigma_2} M^S_{\tau_2}$ be an FPR of $G(\lambda)$. Let $w_i := \begin{bmatrix} u_i \\ v_i \end{bmatrix} \in \mathbb{C}^{mn+r}$, where $u_i \in \mathbb{C}^{mn}$ and $v_i \in \mathbb{C}^r$ for $i = 1:p$. Then we have the following.
(a) Right eigenvectors. If $(w_1, \ldots, w_p)$ is a basis of $\mathcal{N}_r(L(\mu))$ then $\big((e^T_{m-c_0(\sigma,\sigma_2)} \otimes I_n)u_1, \ldots, (e^T_{m-c_0(\sigma,\sigma_2)} \otimes I_n)u_p\big)$ is a basis of $\mathcal{N}_r(G(\mu))$.

(b) Left eigenvectors. If $(w_1, \ldots, w_p)$ is a basis of $\mathcal{N}_l(L(\mu))$ then $\big((e^T_{m-i_0(\sigma_1,\sigma)} \otimes I_n)u_1, \ldots, (e^T_{m-i_0(\sigma_1,\sigma)} \otimes I_n)u_p\big)$ is a basis of $\mathcal{N}_l(G(\mu))$.

Proof. Since $L(\lambda)$ is a linearization of $S(\lambda)$, we have $\mathrm{eig}(S) = \mathrm{eig}(L)$. Again, since $S(\lambda)$ is irreducible, we have $\mathrm{eig}(G) \subset \mathrm{eig}(S)$. Hence the desired results follow as a consequence of Theorem 6.3.4 and Theorem 5.5.8.
We now illustrate the eigenvector recovery rule for $G(\lambda)$ by considering an example.

Example 6.3.7. Consider $G(\lambda) := \sum_{i=0}^{4} \lambda^i A_i + C(\lambda E - A)^{-1}B$. Now choose $\sigma := (1,2,3,0)$, $\sigma_1 := \emptyset$, $\sigma_2 := (2,1)$, $\tau := (-4)$, $\tau_1 := \emptyset$ and $\tau_2 := \emptyset$. The FPR of $G(\lambda)$ associated with $(\sigma_1, \sigma, \sigma_2)$ and $(\tau_1, \tau, \tau_2)$ is given by
$$L(\lambda) := \big(\lambda M^S_{-4} - M^S_{(1,2,3,0)}\big) M^S_{(2,1)} = \begin{bmatrix} \lambda A_4 + A_3 & A_2 & A_1 & -I_n & 0 \\ A_2 & -\lambda A_2 - I_n & -\lambda A_1 & \lambda I_n & 0 \\ A_1 & \lambda I_n & A_0 & 0 & C \\ -I_n & 0 & \lambda I_n & 0 & 0 \\ 0 & 0 & B & 0 & A - \lambda E \end{bmatrix}.$$
Let $\begin{bmatrix} u \\ v \end{bmatrix} \in \mathcal{N}_r(L(\lambda))$ and $\begin{bmatrix} x \\ y \end{bmatrix} \in \mathcal{N}_l(L(\lambda))$ be such that $\{u, x\} \subset \mathbb{C}^{4n}$ and $\{v, y\} \subset \mathbb{C}^r$. We have $c_0(\sigma, \sigma_2) = 1$ and $i_0(\sigma_1, \sigma) = 1$. Hence by Theorem 6.3.4, $\begin{bmatrix} (e^T_{4-1} \otimes I_n)u \\ v \end{bmatrix} = \begin{bmatrix} (e^T_3 \otimes I_n)u \\ v \end{bmatrix} \in \mathcal{N}_r(S(\lambda))$ and $\begin{bmatrix} (e^T_3 \otimes I_n)x \\ y \end{bmatrix} \in \mathcal{N}_l(S(\lambda))$. To verify the recovery rule, define $u_i := (e^T_i \otimes I_n)u$ and $x_i := (e^T_i \otimes I_n)x$ for $i = 1:4$, and consider $L(\lambda)\begin{bmatrix} u \\ v \end{bmatrix} = 0$.
This gives
$$(\lambda A_4 + A_3)u_1 + A_2 u_2 + A_1 u_3 - u_4 = 0, \tag{6.8}$$
$$A_2 u_1 + (-\lambda A_2 - I_n)u_2 - \lambda A_1 u_3 + \lambda u_4 = 0, \tag{6.9}$$
$$A_1 u_1 + \lambda u_2 + A_0 u_3 + Cv = 0, \tag{6.10}$$
$$-u_1 + \lambda u_3 = 0, \tag{6.11}$$
$$B u_3 + (A - \lambda E)v = 0. \tag{6.12}$$
From (6.11) we have $u_1 = \lambda u_3$. Adding $\lambda$ times (6.8) to (6.9) gives $(\lambda^2 A_4 + \lambda A_3 + A_2)u_1 - u_2 = 0$, so $u_2 = (\lambda^2 A_4 + \lambda A_3 + A_2)u_1 = (\lambda^3 A_4 + \lambda^2 A_3 + \lambda A_2)u_3$. Substituting the values of $u_1$ and $u_2$ into (6.10), we obtain $(\lambda^4 A_4 + \lambda^3 A_3 + \lambda^2 A_2 + \lambda A_1 + A_0)u_3 + Cv = 0$, that is, $P(\lambda)u_3 + Cv = 0$. Hence, together with (6.12), we have $S(\lambda)\begin{bmatrix} u_3 \\ v \end{bmatrix} = 0$, i.e., $\begin{bmatrix} u_3 \\ v \end{bmatrix} \in \mathcal{N}_r(S(\lambda))$. Similarly, we can verify that $\begin{bmatrix} x_3 \\ y \end{bmatrix} \in \mathcal{N}_l(S(\lambda))$. It is easily seen that $G(\lambda)u_3 = 0$ and $x_3^T G(\lambda) = 0$.
7 Structured Strong Linearizations of Structured Rational Matrices
Structured rational matrices such as symmetric, skew-symmetric, Hamiltonian, skew-Hamiltonian, Hermitian, skew-Hermitian, para-Hermitian and para-skew-Hermitian rational matrices arise in many applications. For structured rational matrices, it is desirable to construct structure-preserving linearizations so as to preserve the symmetry in the eigenvalues and poles of the rational matrices. The primary aim of this chapter is to construct structure-preserving Rosenbrock strong linearizations of structured (symmetric, skew-symmetric, Hamiltonian, skew-Hamiltonian, Hermitian, skew-Hermitian, para-Hermitian and para-skew-Hermitian) rational matrices. For this purpose, we propose an infinite family of Fiedler-like pencils (which we refer to as generalized Fiedler pencils with repetition (GFPRs)) and show that the family of GFPRs is a rich source of structure-preserving strong linearizations of structured rational matrices. We construct symmetric, skew-symmetric, Hamiltonian, skew-Hamiltonian, Hermitian, skew-Hermitian, para-Hermitian and para-skew-Hermitian strong linearizations of a rational matrix $G(\lambda)$ when $G(\lambda)$ has the same structure. Further, when $G(\lambda)$ is real and symmetric, we show that the real symmetric linearizations of $G(\lambda)$ preserve the Cauchy-Maslov index of $G(\lambda)$. We describe the recovery of eigenvectors, minimal bases and minimal indices of $G(\lambda)$ from those of the linearizations of $G(\lambda)$ and show that the recovery is operation-free. We also show that FPs, GFPs and FPRs of $G(\lambda)$ constructed in Chapter 6 are Rosenbrock strong linearizations of $G(\lambda)$.
7.1 Introduction
Structured rational matrices such as symmetric, skew-symmetric, Hamiltonian, skew-Hamiltonian, Hermitian, skew-Hermitian, para-Hermitian and para-skew-Hermitian rational matrices arise in many applications, see [38, 41, 36, 32, 49, 55, 64, 35] and the references therein. For example, the Hermitian rational eigenvalue problem
$$G(\lambda)u := \left(\lambda^2 M + K - \sum_{i=1}^{k} \frac{1}{1 + \lambda b_i}\,\Delta K_i\right)u = 0$$
arises in the study of damped vibration of a structure, where $M$ and $K$ are positive definite, $b_i$ is a relaxation parameter and $\Delta K_i$ is an assemblage of element stiffness matrices [49, 55]. Also, various structured rational matrices arise as transfer functions of linear time-invariant (LTI) systems, see [38, 41, 36, 32, 50, 64]. For structured rational matrices, it is desirable to construct structure-preserving linearizations so as to preserve the symmetry in the eigenvalues and poles of the rational matrices.
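To make the rational character of this eigenvalue problem concrete, the following scalar toy version (all numerical values hypothetical, $k = 2$ damping terms; not from the source) evaluates $g(\lambda) = \lambda^2 m + k - \sum_i \Delta k_i/(1+\lambda b_i)$ and checks that clearing the denominators turns it into a polynomial expression, which is what linearization ultimately exploits:

```python
# Scalar toy model of the damped-vibration REP (hypothetical data):
#   g(lam) = lam^2 * m + k - sum_i dk_i / (1 + lam * b_i)
m_, k_ = 1.0, 4.0        # "mass" and "stiffness"
b = [0.5, 0.2]           # relaxation parameters b_i
dk = [0.3, 0.1]          # element stiffness contributions dk_i

def g(lam):
    """Evaluate the scalar rational function g at lam (away from poles -1/b_i)."""
    return lam**2 * m_ + k_ - sum(dki / (1 + lam * bi) for bi, dki in zip(b, dk))

def p(lam):
    """g multiplied by (1 + lam*b_1)(1 + lam*b_2): a degree-4 polynomial in lam."""
    return g(lam) * (1 + lam * b[0]) * (1 + lam * b[1])

# Check: p(lam) equals the expanded polynomial form
#   (lam^2 m + k) * prod(1 + lam b_i) - dk_1 (1 + lam b_2) - dk_2 (1 + lam b_1).
lam = 1.7
expected = ((lam**2 * m_ + k_) * (1 + lam * b[0]) * (1 + lam * b[1])
            - dk[0] * (1 + lam * b[1]) - dk[1] * (1 + lam * b[0]))
assert abs(p(lam) - expected) < 1e-12
```

The matrix version replaces the scalars by $M$, $K$, $\Delta K_i$ and leads to exactly the kind of rational matrix $G(\lambda)$ whose structure-preserving linearization this chapter studies.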
Our main aim in this chapter is to construct structure-preserving strong linearizations of structured rational matrices and to recover eigenvectors, minimal bases and minimal indices of rational matrices from those of the linearizations. We consider the following structures:
symmetric: $G(\lambda)^T = G(\lambda)$                  Hermitian: $G(\lambda)^* = G(\bar{\lambda})$
skew-symmetric: $G(\lambda)^T = -G(\lambda)$        skew-Hermitian: $G(\lambda)^* = -G(\bar{\lambda})$
Hamiltonian: $G(\lambda)^T = G(-\lambda)$               para-Hermitian: $G(\lambda)^* = G(-\bar{\lambda})$
skew-Hamiltonian: $G(\lambda)^T = -G(-\lambda)$    para-skew-Hermitian: $G(\lambda)^* = -G(-\bar{\lambda})$,
(7.1)
where $X^T$ (resp., $X^*$) denotes the transpose (resp., conjugate transpose) of a matrix $X$ and $\bar{\lambda}$ denotes the conjugate of $\lambda$. For more on these structured rational matrices, we refer to [38, 41, 36, 49, 50, 64, 32, 55] and the references therein.
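A rational matrix can satisfy more than one of the identities in (7.1) at once. As a small check (a hypothetical $2\times 2$ example, not from the source), take $G(\lambda) = \begin{bmatrix} 0 & g(\lambda) \\ -g(\lambda) & 0 \end{bmatrix}$ with $g$ an odd scalar rational function; then $G$ is both skew-symmetric and Hamiltonian, which the following sketch verifies numerically at sample points:

```python
# Hypothetical 2x2 rational matrix illustrating two structures from (7.1):
#   G(lam) = [[0, g(lam)], [-g(lam), 0]]  with g odd (g(-lam) = -g(lam)),
# so G(lam)^T = -G(lam) (skew-symmetric) and G(lam)^T = G(-lam) (Hamiltonian).
def g(lam):
    return lam + 1 / lam  # odd scalar rational function (pole at 0)

def G(lam):
    return [[0.0, g(lam)], [-g(lam), 0.0]]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def negate(M):
    return [[-x for x in row] for row in M]

def close(M, N, tol=1e-12):
    return all(abs(M[i][j] - N[i][j]) < tol for i in range(2) for j in range(2))

for lam in (0.7 + 0.3j, -1.2 + 0.5j, 2.0):
    GT = transpose(G(lam))
    assert close(GT, negate(G(lam)))  # skew-symmetric: G(lam)^T = -G(lam)
    assert close(GT, G(-lam))         # Hamiltonian:    G(lam)^T =  G(-lam)
```

Structure-preserving linearizations are designed so that such identities carry over from $G(\lambda)$ to the pencil.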
The Cauchy-Maslov index [23] (also known as the matrix Cauchy index [10]) of a real symmetric rational matrix plays an important role in applications such as networks of linear systems, see [22, 39] and the references therein. If $G(\lambda)$ is real symmetric then the Cauchy-Maslov index of $G(\lambda)$ is defined as $\mathrm{Ind}_{CM}(G) :=$ (# eigenvalues of $G(\lambda)$ which jump from $-\infty$ to $+\infty$) $-$ (# eigenvalues of $G(\lambda)$ which jump from $+\infty$ to $-\infty$) as the real parameter $\lambda$ traverses from $-\infty$ to $+\infty$, see [10]. It is therefore desirable to construct real symmetric linearizations of $G(\lambda)$ that preserve the Cauchy-Maslov index of $G(\lambda)$.
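As an elementary scalar illustration of this definition (not taken from the source): for a scalar real rational function the single "eigenvalue" is the function itself, and only the direction of the jump at each real pole matters:

```latex
% For a in R, G(\lambda) = 1/(\lambda - a) jumps from -\infty to +\infty
% as \lambda increases through a, while -1/(\lambda - a) jumps the other
% way, and 1/(\lambda - a)^2 tends to +\infty from both sides (no jump):
\mathrm{Ind}_{CM}\!\left(\tfrac{1}{\lambda-a}\right) = 1,
\qquad
\mathrm{Ind}_{CM}\!\left(\tfrac{-1}{\lambda-a}\right) = -1,
\qquad
\mathrm{Ind}_{CM}\!\left(\tfrac{1}{(\lambda-a)^2}\right) = 0.
```

These match the classical scalar Cauchy index, of which the matrix Cauchy index is the natural generalization.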
We mention that there is a slight difference in the naming conventions for some of the structured rational matrices and structured matrix polynomials. The Hamiltonian (resp., skew-Hamiltonian) structure for rational matrices is known as $T$-even (resp., $T$-odd) structure for matrix polynomials [45]. On the other hand, the para-Hermitian (resp., para-skew-Hermitian) structure for rational matrices is known as $*$-even (resp., $*$-odd) structure for matrix polynomials [45]. We follow both naming conventions in the rest of the chapter without any bias.