2.1.2 Eigenvalue at infinity and recovery of eigenvectors
that c_0(σ, σ_2) = 2. Thus, by Corollary 2.1.13, (e^T_{4−2} ⊗ I_n)u = u_2 ∈ N_r(P(λ)). It is easily verified that u_2 ≠ 0 and P(λ)u_2 = 0, showing that u_2 is indeed an eigenvector of P(λ).
We now present an example to illustrate a case in which our recovery rule coincides with that in Theorem 2.1.18.
Example 2.1.20. Let P(λ) := ∑_{i=0}^{7} λ^i A_i. Consider σ = (4 : 6, 2 : 3, 0 : 1), σ_1 = (2, 1, 3), σ_2 = (2, 4, 5, 3, 4), τ = (−7) and τ_1 = τ_2 = ∅. Clearly, rev(σ_1) and σ_2 are right index tuples of type-1 relative to rev(σ) and σ, respectively. Consider the FPR L(λ) := M^P_{(2,1,3)} (λM^P_{−7} − M^P_{(4:6,2:3,0:1)}) M^P_{(4,5,2,3,4)}. Let u ∈ N_r(L(λ)) and v ∈ N_l(L(λ)). Define u_i := (e^T_i ⊗ I_n)u and v_i := (e^T_i ⊗ I_n)v for i = 1 : 7. We have c_0(σ, σ_2) = 4 and i_0(σ_1, σ) = 2. Hence, by Corollary 2.1.13, (e^T_{7−4} ⊗ I_n)u = u_3 ∈ N_r(P(λ)) and (e^T_{7−2} ⊗ I_n)v = v_5 ∈ N_l(P(λ)).
On the other hand, σ_2 is a right index tuple of type-1 relative to σ and the simple tuple associated with (σ, σ_2) is given by zr(σ, σ_2) = (6, 5, 0 : 4). Thus c_0(zr(σ, σ_2)) = 4. Hence, by Theorem 2.1.18, (e^T_{7−4} ⊗ I_n)u = u_3 ∈ N_r(P(λ)). Similarly, rev(σ_1) is a right index tuple of type-1 relative to rev(σ) and the simple tuple associated with (rev(σ), rev(σ_1)) is zr(rev(σ), rev(σ_1)) = (6, 5, 4, 3, 0 : 2). Thus c_0(zr(rev(σ), rev(σ_1))) = 2. Hence, by Theorem 2.1.18, (e^T_{7−2} ⊗ I_n)v = v_5 ∈ N_l(P(λ)).
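Note that (e^T_i ⊗ I_n)u, used throughout, simply extracts the i-th n-block of u; so the recovery above amounts to keeping block 7 − c_0(σ, σ_2) = 3 of u and block 7 − i_0(σ_1, σ) = 5 of v. A minimal numpy illustration (the sizes and the vector are placeholders of ours):

```python
import numpy as np

n, m = 3, 7                       # block size and number of block rows (placeholders)
u = np.random.rand(m * n)         # a vector partitioned into m blocks of length n

def e(i, m):
    """i-th column of I_m (1-based), as used in e_i^T (x) I_n."""
    v = np.zeros(m)
    v[i - 1] = 1.0
    return v

i = 3                                      # 7 - c_0(sigma, sigma_2) = 7 - 4 = 3
u_i = np.kron(e(i, m), np.eye(n)) @ u      # (e_i^T (x) I_n) u
assert np.allclose(u_i, u[(i - 1) * n : i * n])   # ... is just the i-th block of u
```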
We remark that our recovery rules for right eigenvectors and right minimal bases of P(λ) from those of GFPRs (and hence of FPRs) can be read off from c_0(σ, σ_2), whereas the rules for left eigenvectors and left minimal bases can be read off from i_0(σ_1, σ). Most importantly, c_0(σ, σ_2) and i_0(σ_1, σ) can be easily read off by looking at the index tuples σ, σ_2 and σ_1. By contrast, the recovery formulae for right (resp., left) eigenvectors and right (resp., left) minimal bases from those of the type-1 FPRs in Theorem 2.1.18 require the number of consecutions of the simple index tuple zr(σ, σ_2) (resp., zr(rev(σ), rev(σ_1))) at 0. The simple index tuples zr(σ, σ_2) and zr(rev(σ), rev(σ_1)) are defined recursively and hence cannot be read off readily from the index tuples σ, σ_2 and σ_1. Therefore, even though for certain type-1 FPRs our recovery rules coincide with those in Theorem 2.1.18, our recovery rules are automatic and can be easily read off from the index tuples σ, σ_1 and σ_2.
Lemma 2.1.21. Let α be an index tuple containing indices from {−m : −1} such that α satisfies the SIP. Then we have the following.
(a) If −s ∈ α and c_{−s}(α) = p, then α ∼ (α_L, −s, −(s−1), . . . , −(s−p), α_R) for some index tuples α_L and α_R such that −s ∉ α_L and −(s−p), −(s−p−1) ∉ α_R.
(b) If −t ∈ α and i_{−t}(α) = q, then α ∼ (α_L, −(t−q), . . . , −(t−1), −t, α_R) for some index tuples α_L and α_R such that −t ∉ α_R and −(t−q), −(t−q−1) ∉ α_L.
Proof. Since α satisfies the SIP, we can write α in column standard form. Let the column standard form of α be given by
csf(α) = ((−a_1 : −1), . . . , (−a_{k−1} : −(k−1)), (−a_k : −k), . . . , (−a_m : −m)),
where −k is the largest integer such that −s ∈ (−a_k : −k). Then (−a_k : −k) = (−a_k, −a_k+1, . . . , −s−1, −s, . . . , −k). Now, set
α_L := ((−a_1 : −1), . . . , (−a_{k−1} : −(k−1)), −a_k, −a_k+1, . . . , −s−1)  and  α_R := ((−a_{k+1} : −(k+1)), . . . , (−a_m : −m)).
Then csf(α) = (α_L, −s, −(s−1), . . . , −k, α_R), where −s ∉ α_L and −k, −(k−1) ∉ α_R. Thus (−s, −(s−1), . . . , −k) is a subtuple of α and (−s, −(s−1), . . . , −k, −(k−1)) is not a subtuple of α. So s − k = c_{−s}(α), i.e., s − k = p. Thus k = s − p and −(s−p), −(s−p−1) ∉ α_R. This proves (a), as α ∼ csf(α).
(b) Let the row standard form of α be given by
rsf(α) = (rev(−b_m : −m), . . . , rev(−b_k : −k), rev(−b_{k−1} : −(k−1)), . . . , rev(−b_1 : −1)),
where −k is the largest integer such that −t ∈ rev(−b_k : −k). Then rev(−b_k : −k) = (−k, . . . , −t, −t−1, . . . , −b_k+1, −b_k). Now, set
α_L := (rev(−b_m : −m), . . . , rev(−b_{k+1} : −(k+1)))  and  α_R := (−t−1, . . . , −b_k+1, −b_k, rev(−b_{k−1} : −(k−1)), . . . , rev(−b_1 : −1)).
Then rsf(α) = (α_L, −k, . . . , −(t−1), −t, α_R), where −t ∉ α_R and −k, −(k−1) ∉ α_L. Thus (−k, . . . , −(t−1), −t) is a subtuple of α and (−(k−1), −k, . . . , −(t−1), −t) is not a subtuple of α. So t − k = i_{−t}(α), i.e., t − k = q. Thus k = t − q and −(t−q), −(t−q−1) ∉ α_L. This proves (b), as α ∼ rsf(α).
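The proof gives a concrete recipe for reading off c_{−s}(α) and i_{−t}(α) from the standard forms: locate the block containing the index and measure its distance to that block's right (resp. left) end. A small Python sketch of this recipe, under the assumption that α is already handed over as its list of csf (resp. rsf) blocks; the function names and the example tuple are ours:

```python
def consecutions_at(csf_blocks, s):
    """c_{-s}(alpha), with csf(alpha) given as a list of blocks (-a_j, ..., -j),
    each block an increasing list of consecutive negative integers.
    Per the proof of Lemma 2.1.21(a): if -s lies in the block (-a_k : -k)
    with -k largest, then c_{-s}(alpha) = s - k."""
    k = min(-blk[-1] for blk in csf_blocks if -s in blk)
    return s - k

def inversions_at(rsf_blocks, t):
    """i_{-t}(alpha), with rsf(alpha) given as a list of blocks rev(-b_j : -j),
    each block a decreasing list starting at -j.
    Per the proof of Lemma 2.1.21(b): if -t lies in rev(-b_k : -k)
    with -k largest, then i_{-t}(alpha) = t - k."""
    k = min(-blk[0] for blk in rsf_blocks if -t in blk)
    return t - k

# Example: csf(alpha) = ((-3 : -1), (-6 : -4)).  The block (-6 : -4) contains -5,
# so c_{-5}(alpha) = 5 - 4 = 1, i.e. alpha ~ (alpha_L, -5, -4, alpha_R).
print(consecutions_at([[-3, -2, -1], [-6, -5, -4]], 5))   # -> 1
```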
We need the following result to derive the recovery of eigenvectors of P(λ) corresponding to an eigenvalue at ∞ from those of the GFPRs of P(λ).
Lemma 2.1.22. Let L(λ) := M_{τ_1}(Y_1) M_{σ_1}(X_1) (λM^P_τ − M^P_σ) M_{σ_2}(X_2) M_{τ_2}(Y_2) be a GFPR of P(λ). Then we have the following.
(a) (e^T_{c_{−m}(τ)+1} ⊗ I_n) M_{σ_2}(X_2) M_{τ_2}(Y_2) = e^T_{c_{−m}(τ,τ_2)+1} ⊗ I_n.
(b) M_{τ_1}(Y_1) M_{σ_1}(X_1) (e_{i_{−m}(τ)+1} ⊗ I_n) = e_{i_{−m}(τ_1,τ)+1} ⊗ I_n.
Proof. Let Z be an arbitrary n×n matrix. For j = 1 : m−1, we have
(e^T_{m−j} ⊗ I_n) M_{−j}(Z) = (e^T_{m−j} ⊗ I_n) diag(I_{(m−j−1)n}, [0 I_n; I_n Z], I_{(j−1)n}) = e^T_{m−(j−1)} ⊗ I_n
and
M_{−j}(Z) (e_{m−j} ⊗ I_n) = diag(I_{(m−j−1)n}, [0 I_n; I_n Z], I_{(j−1)n}) (e_{m−j} ⊗ I_n) = e_{m−(j−1)} ⊗ I_n.
This shows that
(e^T_{m−j} ⊗ I_n) M_{−k}(Z) = { e^T_{m−(j−1)} ⊗ I_n, for k = j and j = 1 : m−1;  e^T_{m−j} ⊗ I_n, for k ∉ {j, j+1}, j = 0 : m−1 }    (2.12)
and
M_{−k}(Z) (e_{m−j} ⊗ I_n) = { e_{m−(j−1)} ⊗ I_n, for k = j and j = 1 : m−1;  e_{m−j} ⊗ I_n, for k ∉ {j, j+1}, j = 0 : m−1 }.    (2.13)
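Identities (2.12) and (2.13) can also be checked numerically; a minimal numpy sketch, with M_{−j}(Z) assembled exactly as the block-diagonal matrix displayed above (the sizes are arbitrary choices of ours):

```python
import numpy as np

def M_minus(j, Z, m, n):
    """M_{-j}(Z) = diag(I_{(m-j-1)n}, [0 I_n; I_n Z], I_{(j-1)n}) for 1 <= j <= m-1."""
    M = np.eye(m * n)
    r = (m - j - 1) * n                      # top-left corner of the 2n x 2n middle block
    M[r:r+2*n, r:r+2*n] = np.block([[np.zeros((n, n)), np.eye(n)],
                                    [np.eye(n),        Z        ]])
    return M

def eT_block(i, m, n):
    """e_i^T (x) I_n as an n x mn matrix (1-based i)."""
    e = np.zeros((1, m)); e[0, i - 1] = 1.0
    return np.kron(e, np.eye(n))

m, n = 5, 2
Z = np.random.rand(n, n)
for j in range(1, m):   # the case k = j of (2.12) and (2.13)
    assert np.allclose(eT_block(m - j, m, n) @ M_minus(j, Z, m, n), eT_block(m - j + 1, m, n))
    assert np.allclose(M_minus(j, Z, m, n) @ eT_block(m - j, m, n).T, eT_block(m - j + 1, m, n).T)
```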
(a) Now we show that
(e^T_{c_{−m}(τ)+1} ⊗ I_n) M_{σ_2}(X_2) M_{τ_2}(Y_2) = e^T_{c_{−m}(τ,τ_2)+1} ⊗ I_n.    (2.14)
Since τ is a permutation of {−m : −(h+1)}, we have c_{−m}(τ) ≤ m − h − 1 and c_{−m}(τ) + 1 ≤ m − h. Further, since σ_2 is an index tuple containing indices from {0 : h−1}, we have M_{σ_2}(X_2) = diag(I_{(m−h)n}, ∗). Consequently, we have (e^T_{c_{−m}(τ)+1} ⊗ I_n) M_{σ_2}(X_2) = e^T_{c_{−m}(τ)+1} ⊗ I_n. Hence to prove (2.14) we only need to show that (e^T_{c_{−m}(τ)+1} ⊗ I_n) M_{τ_2}(Y_2) = e^T_{c_{−m}(τ,τ_2)+1} ⊗ I_n. Observe that e^T_{c_{−m}(τ)+1} ⊗ I_n = e^T_{m−(m−c_{−m}(τ)−1)} ⊗ I_n. Set s := c_{−m}(τ). We show that
(e^T_{m−(m−s−1)} ⊗ I_n) M_{τ_2}(Y_2) = e^T_{c_{−m}(τ,τ_2)+1} ⊗ I_n,    (2.15)
which gives (2.14). Since τ has s consecutions at −m, we have τ ∼ (τ_L, −m, −(m−1), . . . , −(m−s)). Note that
(τ, τ_2) ∼ (τ_L, −m, −(m−1), . . . , −(m−s), τ_2) satisfies the SIP.    (2.16)
Case-I: Suppose that −(m−s−1) ∉ τ_2. Then, since (τ, τ_2) satisfies the SIP, it is clear from (2.16) that −(m−s) ∉ τ_2. Now, as −(m−s), −(m−s−1) ∉ τ_2, by (2.12) we have (e^T_{m−(m−s−1)} ⊗ I_n) M_{τ_2}(Y_2) = e^T_{m−(m−s−1)} ⊗ I_n = e^T_{s+1} ⊗ I_n. Since −(m−s−1) ∉ τ_2, it is clear from (2.16) that c_{−m}(τ, τ_2) = s. This proves (2.15).
Case-II: Suppose that −(m−s−1) ∈ τ_2. Set p := c_{−(m−s−1)}(τ_2), i.e., τ_2 has p consecutions at −(m−s−1). Then by Lemma 2.1.21,
τ_2 ∼ (τ_{2L}, −(m−s−1), −(m−s−2), . . . , −(m−s−p−1), τ_{2R}),
where −(m−s−1) ∉ τ_{2L} and −(m−s−p−1), −(m−s−p−2) ∉ τ_{2R}. By setting t := m−s−p−1, we have −t, −(t−1) ∉ τ_{2R}. Note that
(τ, τ_2) ∼ (τ_L, −m : −(m−s), τ_{2L}, −(m−s−1) : −t, τ_{2R}) satisfies the SIP,    (2.17)
where −(m−s−1) ∉ τ_{2L} and −t, −(t−1) ∉ τ_{2R}. As −(m−s−1) ∉ τ_{2L}, we have −(m−s) ∉ τ_{2L}, since otherwise (τ, τ_2) would not satisfy the SIP, which is clear from (2.17). We denote by (∗) any arbitrary matrix assignment. Then
(e^T_{m−(m−s−1)} ⊗ I_n) M_{τ_{2L}}(∗) M_{(−(m−s−1):−t)}(∗) M_{τ_{2R}}(∗)
= (e^T_{m−(m−s−1)} ⊗ I_n) M_{(−(m−s−1):−t)}(∗) M_{τ_{2R}}(∗)   [by (2.12), since −(m−s), −(m−s−1) ∉ τ_{2L}]
= (e^T_{m−(t−1)} ⊗ I_n) M_{τ_{2R}}(∗)   [by applying (2.12) repeatedly]
= e^T_{m−(t−1)} ⊗ I_n   [by (2.12), as −t, −(t−1) ∉ τ_{2R}].
Hence (e^T_{m−(m−s−1)} ⊗ I_n) M_{τ_2}(Y_2) = e^T_{m−(t−1)} ⊗ I_n, as τ_2 ∼ (τ_{2L}, −(m−s−1) : −t, τ_{2R}). It is clear from (2.17) that c_{−m}(τ, τ_2) = m − t, that is, m − (t−1) = c_{−m}(τ, τ_2) + 1. This proves (2.15) and hence (2.14) holds. This completes the proof of (a).
(b) Next, we show that
M_{(τ_1,σ_1)}(Y_1, X_1) (e_{i_{−m}(τ)+1} ⊗ I_n) = e_{i_{−m}(τ_1,τ)+1} ⊗ I_n.    (2.18)
Since τ is a permutation of {−m : −(h+1)}, we have i_{−m}(τ) ≤ m − h − 1 and i_{−m}(τ) + 1 ≤ m − h. Further, since σ_1 is an index tuple containing indices from {0 : h−1}, we have M_{σ_1}(X_1) = diag(I_{(m−h)n}, ∗). Consequently, we have M_{σ_1}(X_1) (e_{i_{−m}(τ)+1} ⊗ I_n) = e_{i_{−m}(τ)+1} ⊗ I_n. Thus, to prove (2.18) we only need to show that M_{τ_1}(Y_1) (e_{i_{−m}(τ)+1} ⊗ I_n) = e_{i_{−m}(τ_1,τ)+1} ⊗ I_n. Observe that e_{i_{−m}(τ)+1} ⊗ I_n = e_{m−(m−i_{−m}(τ)−1)} ⊗ I_n. Set s := i_{−m}(τ). Next, we show that
M_{τ_1}(Y_1) (e_{m−(m−s−1)} ⊗ I_n) = e_{i_{−m}(τ_1,τ)+1} ⊗ I_n,    (2.19)
which gives (2.18). Since τ has s inversions at −m, we have τ ∼ (−(m−s), . . . , −(m−1), −m, τ_R). Note that
(τ_1, τ) ∼ (τ_1, −(m−s), . . . , −(m−1), −m, τ_R) satisfies the SIP.    (2.20)
Case-I: Suppose that −(m−s−1) ∉ τ_1. Then, since (τ_1, τ) satisfies the SIP, it is clear from (2.20) that −(m−s) ∉ τ_1. Now, as −(m−s), −(m−s−1) ∉ τ_1, by (2.13) we have M_{τ_1}(Y_1) (e_{m−(m−s−1)} ⊗ I_n) = e_{m−(m−s−1)} ⊗ I_n = e_{s+1} ⊗ I_n. Since −(m−s−1) ∉ τ_1, it is clear from (2.20) that i_{−m}(τ_1, τ) = s. This proves (2.19).
Case-II: Suppose that −(m−s−1) ∈ τ_1. Set p := i_{−(m−s−1)}(τ_1), i.e., τ_1 has p inversions at −(m−s−1). Then by Lemma 2.1.21,
τ_1 ∼ (τ_{1L}, −(m−s−p−1), . . . , −(m−s−2), −(m−s−1), τ_{1R}),
where −(m−s−1) ∉ τ_{1R} and −(m−s−p−1), −(m−s−p−2) ∉ τ_{1L}. Setting t := m−s−p−1, we have −t, −(t−1) ∉ τ_{1L}. Note that
(τ_1, τ) ∼ (τ_{1L}, −t, . . . , −(m−s−1), τ_{1R}, −(m−s), . . . , −m, τ_R)    (2.21)
satisfies the SIP, where −(m−s−1) ∉ τ_{1R} and −t, −(t−1) ∉ τ_{1L}. As −(m−s−1) ∉ τ_{1R}, we have −(m−s) ∉ τ_{1R}, since otherwise (τ_1, τ) would not satisfy the SIP, which is clear from (2.21). Now we have
M_{τ_{1L}}(∗) M_{(−t,...,−(m−s−2),−(m−s−1))}(∗) M_{τ_{1R}}(∗) (e_{m−(m−s−1)} ⊗ I_n)
= M_{τ_{1L}}(∗) M_{(−t,...,−(m−s−2),−(m−s−1))}(∗) (e_{m−(m−s−1)} ⊗ I_n)   [by (2.13), as −(m−s), −(m−s−1) ∉ τ_{1R}]
= M_{τ_{1L}}(∗) (e_{m−(t−1)} ⊗ I_n)   [by applying (2.13) repeatedly]
= e_{m−(t−1)} ⊗ I_n   [by (2.13), as −t, −(t−1) ∉ τ_{1L}].
Hence M_{τ_1}(Y_1) (e_{m−(m−s−1)} ⊗ I_n) = e_{m−(t−1)} ⊗ I_n, as τ_1 ∼ (τ_{1L}, −t, . . . , −(m−s−1), τ_{1R}). It is clear from (2.21) that i_{−m}(τ_1, τ) = m − t, that is, m − (t−1) = i_{−m}(τ_1, τ) + 1. This proves (2.19) and hence (2.18) holds. This completes the proof of (b).
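As a concrete instance of part (a), take the index tuples that appear in Example 2.1.25 below: m = 5, τ = (−4, −5, −3, −2), σ_2 = (0) and τ_2 = (−4, −3), so that c_{−5}(τ, τ_2) = 2; moreover c_{−5}(τ) = 0, since −4 precedes −5 in τ and hence −5 cannot be followed by −4 in any equivalent reordering. Lemma 2.1.22(a) then asserts that (e^T_1 ⊗ I_n) M_0(X) M_{−4}(Y) M_{−3}(Z) = e^T_3 ⊗ I_n, which the following numpy sketch checks numerically (the helper names and sizes are ours):

```python
import numpy as np

m, n = 5, 2
I = np.eye(n)

def M_neg(j, W):
    """M_{-j}(W) = diag(I_{(m-j-1)n}, [0 I_n; I_n W], I_{(j-1)n}), 1 <= j <= m-1."""
    A = np.eye(m * n)
    r = (m - j - 1) * n
    A[r:r+2*n, r:r+2*n] = np.block([[0*I, I], [I, W]])
    return A

def M_0(W):
    """M_0(W) = diag(I_{(m-1)n}, W)."""
    return np.block([[np.eye((m-1)*n), np.zeros(((m-1)*n, n))],
                     [np.zeros((n, (m-1)*n)), W]])

def eT(i):
    """e_i^T (x) I_n (1-based i)."""
    row = np.zeros((1, m)); row[0, i-1] = 1.0
    return np.kron(row, I)

X, Y, Z = (np.random.rand(n, n) for _ in range(3))
# tau = (-4,-5,-3,-2), tau_2 = (-4,-3): c_{-5}(tau) = 0 and c_{-5}(tau,tau_2) = 2,
# so Lemma 2.1.22(a) predicts (e_1^T (x) I_n) M_0(X) M_{-4}(Y) M_{-3}(Z) = e_3^T (x) I_n.
lhs = eT(1) @ M_0(X) @ M_neg(4, Y) @ M_neg(3, Z)
assert np.allclose(lhs, eT(3))
```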
We are now ready to describe the recovery of eigenvectors of P(λ) corresponding to an eigenvalue at ∞ from those of the GFPRs of P(λ).
Theorem 2.1.23 (Eigenvector recovery at ∞). Let P(λ) be regular and let L(λ) := M_{τ_1}(Y_1) M_{σ_1}(X_1) (λM^P_τ − M^P_σ) M_{σ_2}(X_2) M_{τ_2}(Y_2) be a GFPR of P(λ). Suppose that ∞ is an eigenvalue of P(λ). Then we have the following.
Right eigenvectors. If x_1, . . . , x_k is a basis of the right eigenspace of L(λ) at ∞, then (e^T_{c_{−m}(τ,τ_2)+1} ⊗ I_n)x_1, . . . , (e^T_{c_{−m}(τ,τ_2)+1} ⊗ I_n)x_k is a basis of the right eigenspace of P(λ) at ∞.
Left eigenvectors. If y_1, . . . , y_k is a basis of the left eigenspace of L(λ) at ∞, then (e^T_{i_{−m}(τ_1,τ)+1} ⊗ I_n)y_1, . . . , (e^T_{i_{−m}(τ_1,τ)+1} ⊗ I_n)y_k is a basis of the left eigenspace of P(λ) at ∞.
Proof. We have L(λ) = λL_1 − L_0, where L_1 := M_{(τ_1,σ_1)}(Y_1, X_1) M^P_τ M_{(σ_2,τ_2)}(X_2, Y_2) and L_0 := M_{(τ_1,σ_1)}(Y_1, X_1) M^P_σ M_{(σ_2,τ_2)}(X_2, Y_2). Note that ∞ is an eigenvalue of L(λ) ⇔ 0 is an eigenvalue of rev L(λ) ⇔ 0 is an eigenvalue of L_1. Since M_{(τ_1,σ_1)}(Y_1, X_1) is invertible, we have N_r(L_1) = N_r(M^P_τ M_{(σ_2,τ_2)}(X_2, Y_2)). Further, since M_{(σ_2,τ_2)}(X_2, Y_2) is nonsingular, it is easily seen that the map
N_r(M^P_τ M_{(σ_2,τ_2)}(X_2, Y_2)) → N_r(M^P_τ),  z ↦ M_{(σ_2,τ_2)}(X_2, Y_2) z,
is an isomorphism. Define T(λ) := λM^P_τ − M^P_σ. Then N_r(M^P_τ) = N_r(rev T(0)). Now, by Theorem 1.2.28, the map
N_r(rev T(0)) → N_r(rev P(0)),  u ↦ (e^T_{c_{−m}(τ)+1} ⊗ I_n)u,
is an isomorphism. Hence the map
N_r(rev L(0)) → N_r(rev P(0)),  x ↦ (e^T_{c_{−m}(τ)+1} ⊗ I_n) M_{(σ_2,τ_2)}(X_2, Y_2) x,    (2.22)
is an isomorphism. Now, by Lemma 2.1.22, we have (e^T_{c_{−m}(τ)+1} ⊗ I_n) M_{σ_2}(X_2) M_{τ_2}(Y_2) = e^T_{c_{−m}(τ,τ_2)+1} ⊗ I_n. Hence the desired result for the right eigenspace of P(λ) at ∞ follows from (2.22).
Next, we prove the result for the left eigenspace of P(λ) at ∞. Since M_{(σ_2,τ_2)}(X_2, Y_2) is invertible, we have N_l(rev L(0)) = N_l(L_1) = N_l(M_{(τ_1,σ_1)}(Y_1, X_1) M^P_τ). Further, since M_{(τ_1,σ_1)}(Y_1, X_1) is nonsingular, the map
N_l(M_{(τ_1,σ_1)}(Y_1, X_1) M^P_τ) → N_l(M^P_τ),  z ↦ (M_{(τ_1,σ_1)}(Y_1, X_1))^T z,
is an isomorphism. Recall that T(λ) := λM^P_τ − M^P_σ and hence N_l(M^P_τ) = N_l(rev T(0)). Now, by Theorem 1.2.28, the map
N_l(rev T(0)) → N_l(rev P(0)),  v ↦ (e^T_{i_{−m}(τ)+1} ⊗ I_n)v,
is an isomorphism. Hence the map
N_l(rev L(0)) → N_l(rev P(0)),  y ↦ ((e^T_{i_{−m}(τ)+1} ⊗ I_n) (M_{(τ_1,σ_1)}(Y_1, X_1))^T) y,    (2.23)
is an isomorphism. Now, by Lemma 2.1.22, we have
(e^T_{i_{−m}(τ)+1} ⊗ I_n) (M_{(τ_1,σ_1)}(Y_1, X_1))^T = (M_{(τ_1,σ_1)}(Y_1, X_1) (e_{i_{−m}(τ)+1} ⊗ I_n))^T = e^T_{i_{−m}(τ_1,τ)+1} ⊗ I_n.
Hence the result for the left eigenspace of P(λ) at ∞ follows from (2.23).
Remark 2.1.24. Notice that the proofs of Lemma 2.1.22 and Theorem 2.1.23 do not use the fact that the index tuple (σ_1, σ, σ_2) satisfies the SIP, and hence the results in Theorem 2.1.23 remain valid for GFPR-like pencils of the form
L(λ) := M_{τ_1}(Y_1) M_{σ_1}(X_1) (λM^P_τ − M^P_σ) M_{σ_2}(X_2) M_{τ_2}(Y_2)
in which only the index tuple (τ_1, τ, τ_2) satisfies the SIP but not the tuple (σ_1, σ, σ_2). This shows that the SIP of (τ_1, τ, τ_2) is enough for operation-free recovery of eigenvectors of P(λ) corresponding to the eigenvalue ∞ from those of L(λ). Compare this with Remark 2.1.14.
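In computational terms, the recovery maps in Theorem 2.1.23 are plain block extractions once c_{−m}(τ, τ_2) and i_{−m}(τ_1, τ) are known; a minimal sketch (the function names and the usage data are ours):

```python
import numpy as np

def recover_right_at_inf(X_basis, c, n):
    """Keep block row c+1 = c_{-m}(tau,tau_2) + 1 of each basis vector (1-based blocks)."""
    return X_basis[c * n:(c + 1) * n, :]

def recover_left_at_inf(Y_basis, i, n):
    """Keep block row i+1 = i_{-m}(tau_1,tau) + 1 of each basis vector."""
    return Y_basis[i * n:(i + 1) * n, :]

# e.g. for Example 2.1.25 below: m = 5, c_{-5}(tau,tau_2) = 2 and i_{-5}(tau_1,tau) = 1,
# so the rules keep the 3rd block of x and the 2nd block of y.
x = np.random.rand(5 * 4, 1)          # one (hypothetical) basis vector with n = 4
x3 = recover_right_at_inf(x, 2, 4)    # its 3rd block
```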
We illustrate the recovery of eigenvectors of P(λ) corresponding to an eigenvalue at ∞ from those of the GFPRs of P(λ) by considering an example.
Example 2.1.25. Let P(λ) := ∑_{i=0}^{5} λ^i A_i. Suppose that P(λ) is regular and ∞ is an eigenvalue of P(λ). Let σ := (0, 1), σ_1 := ∅, σ_2 := (0), τ := (−4, −5, −3, −2), τ_2 := (−4, −3) and τ_1 := ∅. Let X and (Y, Z) be any nonsingular matrix assignments for σ_2 and τ_2, respectively. Then the GFPR L(λ) = (λM^P_{(−4,−5,−3,−2)} − M^P_{(0,1)}) M_0(X) M_{(−4,−3)}(Y, Z) =: λL_1 − L_0 of P(λ) is given by

L(λ) = λ [ 0    0    0    I_n  0 ;
           0    0    A_5  A_4  0 ;
           I_n  0    Y    A_3  0 ;
           0    I_n  Z    A_2  0 ;
           0    0    0    0    X ]
       −  [ 0    0    I_n  0     0 ;
            I_n  0    Y    0     0 ;
            0    I_n  Z    0     0 ;
            0    0    0    −A_1  X ;
            0    0    0    −A_0  0 ].
Let x and y, respectively, be right and left eigenvectors of L(λ) corresponding to the eigenvalue ∞. Define x_i := (e^T_i ⊗ I_n)x and y_i := (e^T_i ⊗ I_n)y, i = 1 : 5. We have c_{−5}(τ, τ_2) = c_{−5}(−4, −5, −3, −2, −4, −3) = 2. Hence, by Theorem 2.1.23, (e^T_{c_{−5}(τ,τ_2)+1} ⊗ I_n)x = (e^T_3 ⊗ I_n)x = x_3 is a right eigenvector of P(λ) corresponding to the eigenvalue ∞. Similarly, i_{−5}(τ_1, τ) = i_{−5}(−4, −5, −3, −2) = 1. Hence, by Theorem 2.1.23, (e^T_{i_{−5}(τ_1,τ)+1} ⊗ I_n)y = (e^T_2 ⊗ I_n)y = y_2 is a left eigenvector of P(λ) corresponding to the eigenvalue ∞.
To verify the recovery rule, consider L_1 x = 0. This gives x_4 = 0 = x_5 and A_5 x_3 = 0. Further, if x_3 = 0 then x_1 = 0 = x_2. Thus x_3 = 0 ⇒ x = 0. Hence x_3 ≠ 0 and x_3 is a right eigenvector of P(λ) corresponding to the eigenvalue ∞. Similarly, y^T L_1 = 0 implies that y^T_i = 0 for i = 3, 4, 5, and y^T_2 A_5 = 0. Further, if y^T_2 = 0 then y^T_1 = 0. Thus y_2 = 0 ⇒ y = 0. Hence y^T_2 ≠ 0 and y_2 is a left eigenvector of P(λ) corresponding to the eigenvalue ∞.
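The verification above can also be carried out numerically: assemble L_1 exactly as displayed, choose random coefficients with A_5 singular (so that ∞ is an eigenvalue of P(λ)), and check that the third block of a right null vector of L_1 and the second block of a left null vector of L_1 are right and left null vectors of A_5. A minimal numpy sketch (the seed, sizes and tolerances are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = {i: rng.standard_normal((n, n)) for i in range(6)}        # A_0, ..., A_5
A[5][:, 0] = 0                                                # make A_5 singular: infinity is an eigenvalue
X, Y, Z = (rng.standard_normal((n, n)) for _ in range(3))     # nonsingular with probability 1
I, O = np.eye(n), np.zeros((n, n))

# L_1, the leading coefficient of L(lambda) = lambda*L_1 - L_0 displayed above
L1 = np.block([[O, O, O,    I,    O],
               [O, O, A[5], A[4], O],
               [I, O, Y,    A[3], O],
               [O, I, Z,    A[2], O],
               [O, O, O,    O,    X]])

U, S, Vh = np.linalg.svd(L1)      # S[-1] is (numerically) zero, so L_1 is singular
x = Vh[-1, :]                     # right null vector of L_1: right eigenvector of L at infinity
y = U[:, -1]                      # left  null vector of L_1: left  eigenvector of L at infinity

x3 = x[2 * n:3 * n]               # (e_3^T (x) I_n) x
y2 = y[n:2 * n]                   # (e_2^T (x) I_n) y
assert np.allclose(A[5] @ x3, 0, atol=1e-10) and np.linalg.norm(x3) > 1e-8
assert np.allclose(y2 @ A[5], 0, atol=1e-10) and np.linalg.norm(y2) > 1e-8
```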