
Eigenvalues and Eigenvectors


Remark 9.2.1 Let $\phi : V \to V$ be a simple endomorphism, with $M_\phi^{\mathcal{C},\mathcal{C}}$ a diagonal matrix associated to $\phi$. It is then
$$
M_\phi^{\mathcal{C},\mathcal{C}} \,=\,
\begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{pmatrix},
$$
for scalars $\lambda_j \in \mathbb{R}$, with $j = 1, \ldots, n$. By setting $\mathcal{C} = (v_1, \ldots, v_n)$, we then write $\phi(v_j) = \lambda_j v_j$.

The vectors of the basis $\mathcal{C}$ and the scalars $\lambda_j$ play a prominent role in the analysis of endomorphisms. This motivates the following definition.

Definition 9.2.2 Let $\phi \in \mathrm{End}(V)$, with $V$ a real vector space. If there exists a non-zero vector $v \in V$ and a scalar $\lambda \in \mathbb{R}$ such that
$$
\phi(v) = \lambda v,
$$
then $\lambda$ is called an eigenvalue of $\phi$ and $v$ is called an eigenvector of $\phi$ associated to $\lambda$. The spectrum of an endomorphism is the collection of its eigenvalues.

Remark 9.2.3 Let $\phi \in \mathrm{End}(V)$ and $\mathcal{C}$ be a basis of $V$. With the definition above, the content of Remark 9.2.1 can be rephrased as follows:

(a) $M_\phi^{\mathcal{C},\mathcal{C}}$ is diagonal if and only if $\mathcal{C}$ is a basis of eigenvectors for $\phi$;

(b) $\phi$ is simple if and only if $V$ has a basis of eigenvectors for $\phi$ (from Definition 9.1.7).

Notice that each eigenvector $v$ for an endomorphism $\phi$ is uniquely associated to an eigenvalue $\lambda$ of $\phi$. On the other hand, more than one eigenvector can be associated to a given eigenvalue $\lambda$. It is indeed easy to see that, if $v$ is associated to $\lambda$, also $\alpha v$, with $\alpha \in \mathbb{R}$, $\alpha \neq 0$, is associated to the same $\lambda$, since $\phi(\alpha v) = \alpha\,\phi(v) = \alpha(\lambda v) = \lambda(\alpha v)$.
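As a quick numerical aside (a sketch, not part of the original text; the matrix below is a hypothetical example), one can test whether a vector $v$ is an eigenvector of a matrix by checking whether $Av$ is a scalar multiple of $v$; the same test confirms that a non-zero multiple $\alpha v$ is again an eigenvector for the same eigenvalue.

```python
import numpy as np

def is_eigenvector(A, v, tol=1e-12):
    """Return (True, lam) if A v = lam v for some scalar lam, else (False, None)."""
    v = np.asarray(v, dtype=float)
    if np.allclose(v, 0):
        return False, None              # eigenvectors are non-zero by definition
    w = A @ v
    i = np.argmax(np.abs(v))            # candidate eigenvalue from a non-zero component of v
    lam = w[i] / v[i]
    return (np.allclose(w, lam * v, atol=tol), lam)

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(is_eigenvector(A, [1, 0]))        # (True, 2.0): e1 is an eigenvector for lambda = 2
print(is_eigenvector(A, [1, 1]))        # (True, 3.0): (1, 1) is an eigenvector for lambda = 3
print(is_eigenvector(A, [5, 5]))        # (True, 3.0): a non-zero multiple is again an eigenvector
print(is_eigenvector(A, [0, 1]))        # (False, None is not returned, the test simply fails)
```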

Proposition 9.2.4 If $V$ is a real vector space and $\phi \in \mathrm{End}(V)$, the set
$$
V_\lambda \,=\, \{v \in V \,:\, \phi(v) = \lambda v\}
$$
is a vector subspace of $V$.

Proof We explicitly check that $V_\lambda$ is closed under linear combinations. With $v_1, v_2 \in V_\lambda$ and $a_1, a_2 \in \mathbb{R}$, we can write
$$
\phi(a_1 v_1 + a_2 v_2) = a_1\,\phi(v_1) + a_2\,\phi(v_2) = a_1 \lambda v_1 + a_2 \lambda v_2 = \lambda(a_1 v_1 + a_2 v_2),
$$
showing that $V_\lambda$ is a vector subspace of $V$.

Definition 9.2.5 If $\lambda \in \mathbb{R}$ is an eigenvalue of $\phi \in \mathrm{End}(V)$, the space $V_\lambda$ is called the eigenspace corresponding to $\lambda$.

Remark 9.2.6 It is easy to see that if $\lambda \in \mathbb{R}$ is not an eigenvalue for the endomorphism $\phi$, then the set $V_\lambda = \{v \in V \,:\, \phi(v) = \lambda v\}$ contains only the zero vector. It is indeed clear that, if $V_\lambda$ contains only the zero vector, then $\lambda$ is not an eigenvalue for $\phi$. We have that $\lambda \in \mathbb{R}$ is an eigenvalue for $\phi$ if and only if $\dim(V_\lambda) \geq 1$.

Exercise 9.2.7 Let $\phi \in \mathrm{End}(\mathbb{R}^2)$ be defined by $\phi((x,y)) = (y,x)$. Is $\lambda = 2$ an eigenvalue for $\phi$? The corresponding set $V_2$ would then be
$$
V_2 \,=\, \{v \in \mathbb{R}^2 \,:\, \phi(v) = 2v\} \,=\, \{(x,y) \in \mathbb{R}^2 \,:\, (y,x) = 2(x,y)\},
$$
that is, $V_2$ would be given by the solutions of the system
$$
\begin{cases} y = 2x \\ x = 2y \end{cases}
\;\Leftrightarrow\;
\begin{cases} y = 2x \\ x = 4x \end{cases}
\;\Leftrightarrow\;
\begin{cases} x = 0 \\ y = 0. \end{cases}
$$
Since $V_2 = \{(0,0)\}$, we conclude that $\lambda = 2$ is not an eigenvalue for $\phi$.
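The same conclusion can be reached numerically (a sketch, not from the original text, assuming numpy is available): the homogeneous system $\bigl(M_\phi^{\mathcal{E},\mathcal{E}} - 2 I_2\bigr)X = 0$ has only the trivial solution, so $\dim(V_2) = 0$.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # matrix of phi((x, y)) = (y, x) in the canonical basis
M = A - 2 * np.eye(2)               # matrix of the system (A - 2 I) X = 0

# dim V_2 = 2 - rank(A - 2I); a positive dimension would mean lambda = 2 is an eigenvalue
print(np.linalg.matrix_rank(M))     # 2, hence dim V_2 = 0 and lambda = 2 is not an eigenvalue
```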

Exercise 9.2.8 The endomorphism $\phi \in \mathrm{End}(\mathbb{R}^2)$ given by $\phi((x,y)) = (2x, 3y)$ is simple, since the corresponding matrix with respect to the canonical basis $\mathcal{E} = (e_1, e_2)$ is diagonal,
$$
M_\phi^{\mathcal{E},\mathcal{E}} \,=\, \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}.
$$
Its eigenvalues are $\lambda_1 = 2$ (with eigenvector $e_1$) and $\lambda_2 = 3$ (with eigenvector $e_2$). The corresponding eigenspaces are then $V_2 = L(e_1)$ and $V_3 = L(e_2)$.
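For a diagonal matrix like this one, a library routine recovers the same data; a minimal numpy check (an illustration, not part of the original text):

```python
import numpy as np

A = np.diag([2.0, 3.0])             # matrix of phi((x, y)) = (2x, 3y) in the canonical basis
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                  # [2. 3.]
print(eigenvectors)                 # columns are e1 and e2, spanning V_2 and V_3
```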

Exercise 9.2.9 We consider again the endomorphism $\phi((x,y)) = (y,x)$ in $\mathbb{R}^2$ given in Exercise 9.2.7. We wonder whether it is simple. We start by noticing that its corresponding matrix with respect to the canonical basis is the following,
$$
M_\phi^{\mathcal{E},\mathcal{E}} \,=\, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
$$
which is not diagonal. We then look for a basis (if it exists) with respect to which the matrix corresponding to $\phi$ is diagonal. By recalling Remark 9.2.3, we look for a basis of $\mathbb{R}^2$ made up of eigenvectors for $\phi$. In order for $v = (a,b)$ to be an eigenvector for $\phi$, there must exist a real scalar $\lambda$ such that $\phi((a,b)) = \lambda(a,b)$, that is,
$$
\begin{cases} b = \lambda a \\ a = \lambda b. \end{cases}
$$

It follows that the eigenvalues, if they exist, must fulfill the condition $\lambda^2 = 1$. For $\lambda = 1$ the corresponding eigenspace is
$$
V_1 \,=\, \{(x,y) \in \mathbb{R}^2 \,:\, \phi((x,y)) = (x,y)\} \,=\, \{(x,x) \in \mathbb{R}^2\} \,=\, L((1,1)),
$$
and for $\lambda = -1$ the corresponding eigenspace is
$$
V_{-1} \,=\, \{(x,y) \in \mathbb{R}^2 \,:\, \phi((x,y)) = -(x,y)\} \,=\, \{(x,-x) \in \mathbb{R}^2\} \,=\, L((1,-1)).
$$

Since the vectors $(1,1)$, $(1,-1)$ form a basis $\mathcal{B}$ for $\mathbb{R}^2$ with respect to which the matrix of $\phi$ is
$$
M_\phi^{\mathcal{B},\mathcal{B}} \,=\, \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
$$

we conclude that $\phi$ is simple. We expect $M_\phi^{\mathcal{B},\mathcal{B}} \sim M_\phi^{\mathcal{E},\mathcal{E}}$, that is, the two matrices to be similar, since they are associated to the same endomorphism; the algebraic proof of this claim is easy. By defining
$$
P \,=\, M^{\mathcal{E},\mathcal{B}} \,=\, \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
$$
the matrix of the change of basis, we compute explicitly
$$
\frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
\,=\,
\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
$$
that is $P^{-1} M_\phi^{\mathcal{E},\mathcal{E}} P = M_\phi^{\mathcal{B},\mathcal{B}}$ (see Proposition 9.1.3).
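The change-of-basis computation can also be verified numerically; the following sketch (assuming numpy, and using the matrices of this exercise) checks that $P^{-1} M_\phi^{\mathcal{E},\mathcal{E}} P = M_\phi^{\mathcal{B},\mathcal{B}}$.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # M_phi^{E,E}
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])         # columns are the eigenvectors (1, 1) and (1, -1)

D = np.linalg.inv(P) @ A @ P        # P^{-1} A P
print(np.round(D))                  # [[ 1.  0.] [ 0. -1.]] = M_phi^{B,B}
```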

Not every endomorphism is simple, as the following exercise shows.

Exercise 9.2.10 The endomorphism in $\mathbb{R}^2$ defined as $\phi((x,y)) = (-y, x)$ is not simple. For $v = (a,b)$ to be an eigenvector, the condition $\phi((a,b)) = \lambda(a,b)$ would be equivalent to $(-b, a) = \lambda(a,b)$, leading to $\lambda^2 = -1$. The only solution in $\mathbb{R}$ is then $a = b = 0$, showing that $\phi$ is not simple.
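Numerically, the eigenvalues of the corresponding matrix are purely imaginary, confirming that no real eigenvalue exists (a sketch, not from the original text, using the map $\phi((x,y)) = (-y,x)$ as written above):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])         # matrix of phi((x, y)) = (-y, x): a rotation by 90 degrees
print(np.linalg.eigvals(A))         # [0.+1.j 0.-1.j]: no real eigenvalues, so phi is not simple over R
```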

Proposition 9.2.11 Let $V$ be a real vector space with $\phi \in \mathrm{End}(V)$. If $\lambda_1 \neq \lambda_2$ are distinct eigenvalues, any two corresponding eigenvectors, $0 \neq v_1 \in V_{\lambda_1}$ and $0 \neq v_2 \in V_{\lambda_2}$, are linearly independent. Also, the sum $V_{\lambda_1} + V_{\lambda_2}$ is direct.

Proof Let us assume that $v_2 = \alpha v_1$, with $\mathbb{R} \ni \alpha \neq 0$. By applying the linear map $\phi$ to both members, we have $\phi(v_2) = \alpha\,\phi(v_1)$. Since $v_1$ and $v_2$ are eigenvectors with eigenvalues $\lambda_1$ and $\lambda_2$,
$$
\phi(v_1) = \lambda_1 v_1, \qquad \phi(v_2) = \lambda_2 v_2,
$$
and the relation $\phi(v_2) = \alpha\,\phi(v_1)$, using $v_2 = \alpha v_1$, becomes $\lambda_2 v_2 = \alpha(\lambda_1 v_1) = \lambda_1(\alpha v_1) = \lambda_1 v_2$, that is
$$
(\lambda_2 - \lambda_1)\,v_2 \,=\, 0_V.
$$
Since $\lambda_2 \neq \lambda_1$, this would lead to the contradiction $v_2 = 0_V$. We therefore conclude that $v_1$ and $v_2$ are linearly independent.

For the last claim we use Proposition 2.2.13 and show that $V_{\lambda_1} \cap V_{\lambda_2} = \{0_V\}$. If $v \in V_{\lambda_1} \cap V_{\lambda_2}$, we could write both $\phi(v) = \lambda_1 v$ (since $v \in V_{\lambda_1}$) and $\phi(v) = \lambda_2 v$ (since $v \in V_{\lambda_2}$): it would then be $\lambda_1 v = \lambda_2 v$, that is $(\lambda_1 - \lambda_2)\,v = 0_V$. From the hypothesis $\lambda_1 \neq \lambda_2$, we would get $v = 0_V$.

The following proposition is proven along the same lines.

Proposition 9.2.12 Let $V$ be a real vector space, with $\phi \in \mathrm{End}(V)$. Let $\lambda_1, \ldots, \lambda_s \in \mathbb{R}$ be distinct eigenvalues of $\phi$, with $0_V \neq v_j \in V_{\lambda_j}$, $j = 1, \ldots, s$, corresponding eigenvectors. The set $\{v_1, \ldots, v_s\}$ is free, and the sum $V_{\lambda_1} + \cdots + V_{\lambda_s}$ is direct.
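The statement can be illustrated numerically: for a matrix with distinct eigenvalues, collecting one eigenvector per eigenvalue yields a full-rank matrix (a sketch, not from the original text, with a hypothetical 3x3 triangular matrix):

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0]) + np.triu(np.ones((3, 3)), k=1)   # upper triangular, eigenvalues 1, 2, 3
eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.round(eigenvalues, 6))                 # three distinct eigenvalues
print(np.linalg.matrix_rank(eigenvectors))      # 3: the eigenvectors form a free (independent) set
```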

Corollary 9.2.13 If $\phi$ is an endomorphism of the real vector space $V$, with $\dim(V) = n$, then $\phi$ has at most $n$ distinct eigenvalues.

Proof If $\phi$ had $s > n$ distinct eigenvalues, there would exist a set $v_1, \ldots, v_s$ of non-zero corresponding eigenvectors. From the proposition above, such a system would be free, thus contradicting the fact that the dimension of $V$ is $n$.

Remark 9.2.14 Let $\phi$ and $\psi$ be two commuting endomorphisms, that is, such that $\phi(\psi(w)) = \psi(\phi(w))$ for any $w \in V$. If $v \in V_\lambda$ is an eigenvector for $\phi$ corresponding to $\lambda$, it follows that
$$
\phi(\psi(v)) \,=\, \psi(\phi(v)) \,=\, \psi(\lambda v) \,=\, \lambda\,\psi(v).
$$
Thus the endomorphism $\psi$ maps any eigenspace $V_\lambda$ of $\phi$ into itself, and analogously $\phi$ preserves any eigenspace $V_\lambda$ of $\psi$.
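As a numerical illustration of this remark (a sketch with hypothetical matrices, not part of the original text): if $A$ and $B$ commute and $v$ is an eigenvector of $A$ for $\lambda$, then $Bv$ again lies in $V_\lambda$.

```python
import numpy as np

A = np.diag([2.0, 2.0, 5.0])                  # eigenspace V_2 = span(e1, e2)
B = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

print(np.allclose(A @ B, B @ A))              # True: the two endomorphisms commute

v = np.array([1.0, 0.0, 0.0])                 # eigenvector of A with eigenvalue 2
w = B @ v                                     # psi(v)
print(w, A @ w)                               # [1. 1. 0.] [2. 2. 0.]: A w = 2 w, so w stays in V_2
```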

Finding the eigenspaces of an endomorphism amounts to computing suitable kernels. Let $f : V \to W$ be a linear map between real vector spaces with bases $\mathcal{B}$ and $\mathcal{C}$. We recall (see Proposition 7.5.1) that if $A = M_f^{\mathcal{C},\mathcal{B}}$ and $S : AX = 0$ is the linear system associated to $A$, the map $S \to \ker(f)$ given by
$$
(x_1, \ldots, x_n) \,\mapsto\, (x_1, \ldots, x_n)_{\mathcal{B}}
$$
is an isomorphism of vector spaces.

Lemma 9.2.15 If $V$ is a real vector space with basis $\mathcal{B}$, let $\phi \in \mathrm{End}(V)$ and $\lambda \in \mathbb{R}$. Then
$$
V_\lambda \,=\, \ker(\phi - \lambda\,\mathrm{id}_V) \,\cong\, S_\lambda,
$$
where $S_\lambda$ is the space of the solutions of the linear homogeneous system $S_\lambda : \bigl(M_\phi^{\mathcal{B},\mathcal{B}} - \lambda I_n\bigr) X = 0$.

Proof From the definition of $V_\lambda$ in Proposition 9.2.4 we write
$$
V_\lambda \,=\, \{v \in V \,:\, \phi(v) = \lambda v\} \,=\, \{v \in V \,:\, \phi(v) - \lambda v = 0_V\} \,=\, \ker(\phi - \lambda\,\mathrm{id}_V).
$$
Such a kernel is isomorphic (as recalled above) to the space of solutions of the linear system given by the matrix $M_{\phi - \lambda\,\mathrm{id}_V}^{\mathcal{B},\mathcal{B}}$, where $\mathcal{B}$ is an arbitrary basis of $V$. We conclude by noticing that $M_{\phi - \lambda\,\mathrm{id}_V}^{\mathcal{B},\mathcal{B}} = M_\phi^{\mathcal{B},\mathcal{B}} - \lambda I_n$.
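In coordinates, computing $V_\lambda$ is therefore exactly a null-space computation; a sketch using sympy for the swap map of Exercise 9.2.9 and $\lambda = 1$ (an illustration, assuming sympy is available):

```python
import sympy as sp

M = sp.Matrix([[0, 1],
               [1, 0]])                       # M_phi^{B,B} for phi((x, y)) = (y, x)
lam = 1
basis = (M - lam * sp.eye(2)).nullspace()     # solutions of (M - lambda I) X = 0
print(basis)                                  # [Matrix([[1], [1]])]: V_1 = L((1, 1))
```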

Proposition 9.2.16 Let $\phi \in \mathrm{End}(V)$ be an endomorphism of the real vector space $V$, with $\dim(V) = n$, and let $\lambda \in \mathbb{R}$. The following are equivalent:

(i) $\lambda$ is an eigenvalue for $\phi$;
(ii) $\dim(V_\lambda) \geq 1$;
(iii) $\det\bigl(M_\phi^{\mathcal{B},\mathcal{B}} - \lambda I_n\bigr) = 0$ for any basis $\mathcal{B}$ of $V$.

Proof (i) $\Leftrightarrow$ (ii) is the content of Remark 9.2.6;

(ii) $\Leftrightarrow$ (iii). Let $\mathcal{B}$ be an arbitrary basis of $V$, and consider the linear system $S_\lambda : \bigl(M_\phi^{\mathcal{B},\mathcal{B}} - \lambda I_n\bigr) X = 0$. We have
$$
\dim(V_\lambda) \,=\, \dim(S_\lambda) \,=\, n - \mathrm{rk}\bigl(M_\phi^{\mathcal{B},\mathcal{B}} - \lambda I_n\bigr);
$$
the first and the second equality follow from Definition 6.2.1 and Theorem 6.4.3, respectively. From Proposition 5.3.1 we finally write
$$
\dim(V_\lambda) \geq 1 \;\Leftrightarrow\; \mathrm{rk}\bigl(M_\phi^{\mathcal{B},\mathcal{B}} - \lambda I_n\bigr) < n \;\Leftrightarrow\; \det\bigl(M_\phi^{\mathcal{B},\mathcal{B}} - \lambda I_n\bigr) = 0,
$$
which concludes the proof.

This proposition shows that the computation of an eigenspace reduces to finding the kernel of a linear map, a computation which has been described in Proposition 7.5.1.
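Condition (iii) is the usual characteristic-equation test; a short sympy sketch for the matrix of Exercise 9.2.9 (an illustration, not part of the original text):

```python
import sympy as sp

lam = sp.symbols('lambda')
M = sp.Matrix([[0, 1],
               [1, 0]])
char_poly = (M - lam * sp.eye(2)).det()       # det(M - lambda I)
print(sp.expand(char_poly))                   # lambda**2 - 1
print(sp.solve(sp.Eq(char_poly, 0), lam))     # [-1, 1]: the eigenvalues of phi
```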
