
Bases and Dimension of Vector Spaces

In the document by Jörg Liesen and Volker Mehrmann (pages 122–128)

A sum of the form λ1v1 + · · · + λnvn is called a linear combination of v1, . . . , vn with the coefficients λ1, . . . , λn ∈ K. The (linear) span of v1, . . . , vn is the set

span{v1, . . . , vn} := { ∑_{i=1}^n λivi | λ1, . . . , λn ∈ K }.

Let M be a set and suppose that for every m ∈ M we have a vector vm ∈ V. Let the set of all these vectors, called the system of these vectors, be denoted by {vm}m∈M. Then the (linear) span of the system {vm}m∈M, denoted by span{vm}m∈M, is defined as the set of all vectors v ∈ V that are linear combinations of finitely many vectors of the system.

This definition can be consistently extended to the case n = 0. In this case v1, . . . , vn is a list of length zero, or an empty list. If we define the empty sum of vectors as 0 ∈ V, then we obtain span{v1, . . . , vn} = span ∅ = {0}.
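For K = R, membership in a span can be tested numerically: v ∈ span{v1, . . . , vn} exactly when the least-squares residual of the corresponding linear system vanishes. A minimal NumPy sketch (the helper name in_span is mine, not from the text):

```python
import numpy as np

def in_span(v, vectors, tol=1e-10):
    """Return True iff v is a linear combination of the given vectors.

    Solves min ||A x - v|| where the columns of A are the spanning
    vectors; v lies in span{v1, ..., vn} iff the residual is zero.
    """
    if not vectors:  # empty list: the span is {0}
        return bool(np.allclose(v, 0, atol=tol))
    A = np.column_stack(vectors)
    x, *_ = np.linalg.lstsq(A, v, rcond=None)
    return bool(np.linalg.norm(A @ x - v) < tol)

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
print(in_span(np.array([2.0, -3.0, 0.0]), [v1, v2]))  # True
print(in_span(np.array([0.0, 0.0, 1.0]), [v1, v2]))   # False
print(in_span(np.zeros(3), []))  # True: span of the empty list is {0}
```

Note that the empty-list branch implements exactly the convention span ∅ = {0} introduced above.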

If in the following we consider a list of vectors v1, . . . , vn or a set of vectors {v1, . . . , vn}, we usually mean that n ≥ 1. The case of the empty list and the associated zero vector space V = {0} will sometimes be discussed separately.

Example 9.8 The vector space K1,3 = {[α1, α2, α3] | α1, α2, α3 ∈ K} is spanned by the vectors [1, 0, 0], [0, 1, 0], [0, 0, 1]. The set {[α1, α2, 0] | α1, α2 ∈ K} forms a subspace of K1,3 that is spanned by the vectors [1, 0, 0], [0, 1, 0].

Lemma 9.9 If V is a vector space and v1, . . . , vn ∈ V, then span{v1, . . . , vn} is a subspace of V.

Proof It is clear that ∅ ≠ span{v1, . . . , vn} ⊆ V. Furthermore, span{v1, . . . , vn} is by definition closed with respect to addition and scalar multiplication, so that (1) and (2) in Lemma 9.5 are satisfied. ⊓⊔

Definition 9.10 Let V be a vector space.

(1) The vectors v1, . . . , vn ∈ V are called linearly independent if ∑_{i=1}^n λivi = 0 is possible only with λ1 = · · · = λn = 0. Otherwise the vectors are called linearly dependent.

(2) The empty list is linearly independent.

(3) If M is a set and for every m ∈ M we have a vector vm ∈ V, the corresponding system {vm}m∈M is called linearly independent when finitely many vectors of the system are always linearly independent in the sense of (1). Otherwise the system is called linearly dependent.

The vectors v1, . . . , vn are linearly independent if and only if the zero vector can be linearly combined only in the trivial way 0 = 0 · v1 + · · · + 0 · vn. Consequently, if one of these vectors is the zero vector, then v1, . . . , vn are linearly dependent. A single vector v is linearly independent if and only if v ≠ 0.
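Over K = R this criterion is easy to test numerically: v1, . . . , vk are linearly independent exactly when the matrix having them as columns has rank k. A small NumPy sketch (the helper name is mine, not from the text):

```python
import numpy as np

def linearly_independent(vectors):
    """v1, ..., vk are linearly independent iff the only solution of
    sum λi vi = 0 is the trivial one, i.e. iff rank([v1 ... vk]) = k."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
zero = np.zeros(3)

print(linearly_independent([v1, v2]))        # True
print(linearly_independent([v1, v2, zero]))  # False: contains the zero vector
print(linearly_independent([zero]))          # False: a single vector is
                                             # independent iff it is nonzero
```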

The following result gives a useful characterization of the linear independence of finitely many (but at least two) given vectors.

Lemma 9.11 The vectors v1, . . . , vn, n ≥ 2, are linearly independent if and only if no vector vi, i = 1, . . . , n, can be written as a linear combination of the others.

Proof We prove the assertion by contraposition. The vectors v1, . . . , vn are linearly dependent if and only if

∑_{i=1}^n λivi = 0

with at least one scalar λj ≠ 0. Equivalently,

vj = − ∑_{i=1, i≠j}^n (λj^{−1} λi) vi,

so that vj is a linear combination of the other vectors. ⊓⊔

Using the concept of linear independence we can now define the concept of the basis of a vector space.

Definition 9.12 Let V be a vector space.

(1) A set {v1, . . . , vn} ⊆ V is called a basis of V, when v1, . . . , vn are linearly independent and span{v1, . . . , vn} = V.

(2) The set ∅ is the basis of the zero vector space V = {0}.

(3) Let M be a set and suppose that for every m ∈ M we have a vector vm ∈ V. The set {vm | m ∈ M} is called a basis of V if the corresponding system {vm}m∈M is linearly independent and span{vm}m∈M = V.

In short, a basis is a linearly independent spanning set of a vector space.

Example 9.13

(1) Let Eij ∈ Kn,m be the matrix with entry 1 in position (i, j) and all other entries 0 (cp. Sect. 5.1). Then the set

{Eij | 1 ≤ i ≤ n and 1 ≤ j ≤ m}   (9.1)

is a basis of the vector space Kn,m (cp. (1) in Example 9.2): The matrices Eij ∈ Kn,m, 1 ≤ i ≤ n and 1 ≤ j ≤ m, are linearly independent, since

0 = ∑_{i=1}^n ∑_{j=1}^m λij Eij = [λij]

implies that λij = 0 for i = 1, . . . , n and j = 1, . . . , m. For any A = [aij] ∈ Kn,m we have

A = ∑_{i=1}^n ∑_{j=1}^m aij Eij,

and hence

span{Eij | 1 ≤ i ≤ n and 1 ≤ j ≤ m} = Kn,m.
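The expansion A = ∑ aij Eij can be verified directly in code; a short NumPy sketch (indices are 0-based here, while the text uses 1-based; the helper names are mine):

```python
import numpy as np

def E(i, j, n, m):
    """Canonical basis matrix E_ij in K^{n,m}: entry 1 at (i, j), zeros elsewhere."""
    M = np.zeros((n, m))
    M[i, j] = 1.0
    return M

n, m = 2, 3
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Reassemble A from its coordinates a_ij in the canonical basis
B = sum(A[i, j] * E(i, j, n, m) for i in range(n) for j in range(m))
print(np.array_equal(A, B))  # True
```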

The basis (9.1) is called the canonical or standard basis of the vector space Kn,m. For m = 1 we denote the canonical basis vectors of Kn,1 by

e1 := [1, 0, 0, . . . , 0]^T, e2 := [0, 1, 0, . . . , 0]^T, . . . , en := [0, 0, . . . , 0, 1]^T.

These vectors are also called unit vectors; they are the columns of the identity matrix In.

(2) A basis of the vector space K[t] (cp. (2) in Example 9.2) is given by the set {t^m | m ∈ N0}, since the corresponding system {t^m}m∈N0 is linearly independent, and every polynomial p ∈ K[t] is a linear combination of finitely many vectors of the system.

The next result is called the basis extension theorem.

Theorem 9.14 Let V be a vector space and let v1, . . . , vr, w1, . . . , wℓ ∈ V, where r, ℓ ∈ N0. If v1, . . . , vr are linearly independent and span{v1, . . . , vr, w1, . . . , wℓ} = V, then the set {v1, . . . , vr} can be extended to a basis of V using vectors from the set {w1, . . . , wℓ}.

Proof Note that for r = 0 the list v1, . . . , vr is empty and hence linearly independent due to (2) in Definition 9.10.

We prove the assertion by induction on ℓ. If ℓ = 0, then span{v1, . . . , vr} = V, and the linear independence of {v1, . . . , vr} shows that this set is a basis of V.

Let the assertion hold for some ℓ ≥ 0. Suppose that v1, . . . , vr, w1, . . . , wℓ+1 ∈ V are given, where v1, . . . , vr are linearly independent and span{v1, . . . , vr, w1, . . . , wℓ+1} = V. If {v1, . . . , vr} already is a basis of V, then we are done. Suppose, therefore, that span{v1, . . . , vr} ⊂ V. Then there exists at least one j, 1 ≤ j ≤ ℓ + 1, such that wj ∉ span{v1, . . . , vr}. In particular, we have wj ≠ 0. Then

λwj + ∑_{i=1}^r λivi = 0

implies that λ = 0 (otherwise we would have wj ∈ span{v1, . . . , vr}) and, therefore, λ1 = · · · = λr = 0 due to the linear independence of v1, . . . , vr. Thus, v1, . . . , vr, wj are linearly independent. By the induction hypothesis we can extend the set {v1, . . . , vr, wj} to a basis of V using vectors from the set {w1, . . . , wℓ+1} \ {wj}, which contains ℓ elements. ⊓⊔

Example 9.15 Consider the vector space V = K[t]≤3 (cp. (3) in Example 9.6) and the vectors v1 = t, v2 = t^2, v3 = t^3. These vectors are linearly independent, but {v1, v2, v3} is not a basis of V, since span{v1, v2, v3} ≠ V. For example, the vectors w1 = t^2 + 1 and w2 = t^3 − t^2 − 1 are elements of V, but w1, w2 ∉ span{v1, v2, v3}. We have span{v1, v2, v3, w1, w2} = V. If we extend {v1, v2, v3} by w1, then we get the linearly independent vectors v1, v2, v3, w1 which indeed span V. Thus, {v1, v2, v3, w1} is a basis of V.
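The proof of Theorem 9.14 is constructive, and for K = R it can be sketched as a greedy procedure: append any wj whose addition increases the rank, i.e. that lies outside the span built so far. Applied to Example 9.15 with K[t]≤3 identified with R^4 via coefficient vectors (the helper name extend_to_basis is mine, not from the text):

```python
import numpy as np

def extend_to_basis(independent, candidates):
    """Greedily extend linearly independent vectors to a basis of
    span(independent + candidates): keep each candidate that raises
    the rank, i.e. lies outside the span built so far."""
    basis = list(independent)
    for w in candidates:
        if np.linalg.matrix_rank(np.column_stack(basis + [w])) == len(basis) + 1:
            basis.append(w)
    return basis

# Example 9.15: identify p = a0 + a1 t + a2 t^2 + a3 t^3 with (a0, a1, a2, a3)
t, t2, t3 = np.eye(4)[1], np.eye(4)[2], np.eye(4)[3]
w1 = np.array([1.0, 0.0, 1.0, 0.0])    # t^2 + 1
w2 = np.array([-1.0, 0.0, -1.0, 1.0])  # t^3 - t^2 - 1

basis = extend_to_basis([t, t2, t3], [w1, w2])
print(len(basis))  # 4: adding w1 already yields a basis, so w2 is skipped
```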

By the basis extension theorem every vector space that is spanned by finitely many vectors has a basis consisting of finitely many elements. A central result of the theory of vector spaces is that every such basis has the same number of elements. In order to show this result we first prove the following exchange lemma.

Lemma 9.16 Let V be a vector space, let v1, . . . , vm ∈ V and let w = ∑_{i=1}^m λivi ∈ V with λ1 ≠ 0. Then span{w, v2, . . . , vm} = span{v1, v2, . . . , vm}.

Proof By assumption we have

v1 = λ1^{−1} w − ∑_{i=2}^m (λ1^{−1} λi) vi.

If y ∈ span{v1, . . . , vm}, say y = ∑_{i=1}^m γivi, then

y = γ1 (λ1^{−1} w − ∑_{i=2}^m (λ1^{−1} λi) vi) + ∑_{i=2}^m γivi = γ1 λ1^{−1} w + ∑_{i=2}^m (γi − γ1 λ1^{−1} λi) vi ∈ span{w, v2, . . . , vm}.

If, on the other hand, y = α1 w + ∑_{i=2}^m αivi ∈ span{w, v2, . . . , vm}, then

y = α1 (∑_{i=1}^m λivi) + ∑_{i=2}^m αivi = α1 λ1 v1 + ∑_{i=2}^m (α1 λi + αi) vi ∈ span{v1, . . . , vm},

and thus span{w, v2, . . . , vm} = span{v1, v2, . . . , vm}. ⊓⊔

Using this lemma we now prove the exchange theorem.²

Theorem 9.17 Let W = {w1, . . . , wn} and U = {u1, . . . , um} be finite subsets of a vector space, and let w1, . . . , wn be linearly independent. If W ⊆ span{u1, . . . , um}, then n ≤ m, and n elements of U, if numbered appropriately the elements u1, . . . , un, can be exchanged against n elements of W in such a way that

span{w1, . . . , wn, un+1, . . . , um} = span{u1, . . . , un, un+1, . . . , um}.

Proof By assumption we have w1 = ∑_{i=1}^m λiui for some scalars λ1, . . . , λm that are not all zero (otherwise w1 = 0, which contradicts the linear independence of w1, . . . , wn). After an appropriate renumbering we have λ1 ≠ 0, and Lemma 9.16 yields

span{w1, u2, . . . , um} = span{u1, u2, . . . , um}.

Suppose that for some r, 1 ≤ r ≤ n − 1, we have exchanged the vectors u1, . . . , ur against w1, . . . , wr so that

span{w1, . . . , wr, ur+1, . . . , um} = span{u1, . . . , ur, ur+1, . . . , um}.

It is then clear that r ≤ m.

By assumption we have wr+1 ∈ span{u1, . . . , um}, and thus

wr+1 = ∑_{i=1}^r λiwi + ∑_{i=r+1}^m λiui

for some scalars λ1, . . . , λm. One of the scalars λr+1, . . . , λm must be nonzero (otherwise wr+1 ∈ span{w1, . . . , wr}, which contradicts the linear independence of w1, . . . , wn). After an appropriate renumbering we have λr+1 ≠ 0, and Lemma 9.16 yields

span{w1, . . . , wr+1, ur+2, . . . , um} = span{w1, . . . , wr, ur+1, . . . , um}.

If we continue this construction until r = n − 1, then we obtain

² In the literature, this theorem is sometimes called the Steinitz exchange theorem after Ernst Steinitz (1871–1928). The result was first proved in 1862 by Hermann Günther Graßmann (1809–1877).

span{w1, . . . , wn, un+1, . . . , um} = span{u1, . . . , un, un+1, . . . , um},

where in particular n ≤ m. ⊓⊔
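For K = R the conclusion of the exchange theorem can be checked numerically: after exchanging u1, u2 for the linearly independent w1, w2, the span is unchanged. A minimal sketch; the rank-based helper same_span is my own device, not from the text:

```python
import numpy as np

def same_span(S, T):
    """Two finite lists of vectors span the same subspace iff
    rank(S) == rank(T) == rank(S concatenated with T)."""
    r = lambda L: np.linalg.matrix_rank(np.column_stack(L))
    return r(S) == r(T) == r(S + T)

u1, u2, u3 = np.eye(3)          # the u's span R^3
w1 = np.array([1.0, 1.0, 0.0])  # two linearly independent vectors
w2 = np.array([0.0, 1.0, 1.0])  # inside span{u1, u2, u3}

# Exchange u1, u2 for w1, w2 (n = 2 <= m = 3): the span is preserved
print(same_span([w1, w2, u3], [u1, u2, u3]))  # True
```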

Using this fundamental theorem, the following result about the unique number of basis elements is a simple corollary.

Corollary 9.18 If a vector space V is spanned by finitely many vectors, then V has a basis consisting of finitely many elements, and any two bases of V have the same number of elements.

Proof The assertion is clear for V = {0} (cp. (2) in Definition 9.12). Let V = span{v1, . . . , vm} with v1 ≠ 0. By Theorem 9.14, we can extend {v1} using elements of {v2, . . . , vm} to a basis of V. Thus, V has a basis with finitely many elements. Let U := {u1, . . . , uℓ} and W := {w1, . . . , wk} be two such bases. Then

W ⊆ V = span{u1, . . . , uℓ} and Theorem 9.17 imply k ≤ ℓ,
U ⊆ V = span{w1, . . . , wk} and Theorem 9.17 imply ℓ ≤ k,

and thus ℓ = k. ⊓⊔

We can now define the dimension of a vector space.

Definition 9.19 If there exists a basis of a K-vector space V that consists of finitely many elements, then V is called finite dimensional, and the unique number of basis elements is called the dimension of V. We denote the dimension by dimK(V) or dim(V), if it is clear which field is meant.

If V is not spanned by finitely many vectors, then V is called infinite dimensional, and we write dimK(V) = ∞.

Note that the zero vector space V = {0} has the basis ∅ and thus it has dimension zero (cp. (2) in Definition 9.12).

If V is a finite dimensional vector space and if v1, . . . , vm ∈ V with m > dim(V), then the vectors v1, . . . , vm must be linearly dependent. (If these vectors were linearly independent, then we could extend them via Theorem 9.14 to a basis of V that would contain more than dim(V) elements.)
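This remark is easy to illustrate numerically: any four vectors in R^3 have rank at most 3 = dim(R^3), so they must be linearly dependent. A quick NumPy check with random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.standard_normal((3, 4))  # columns: 4 vectors in R^3

# rank <= 3 < 4 columns, so the columns cannot be linearly independent
rank = np.linalg.matrix_rank(vectors)
print(rank < vectors.shape[1])  # True: the 4 vectors are dependent
```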

Example 9.20 The set in (9.1) forms a basis of the vector space Kn,m. This basis has n · m elements, and hence dim(Kn,m) = n · m. On the other hand, the vector space K[t] is not spanned by finitely many vectors (cp. (2) in Example 9.13) and hence it is infinite dimensional.

Example 9.21 Let V be the vector space of continuous and real valued functions on the real interval [0, 1] (cp. (3) in Example 9.2). Define for n = 1, 2, . . . the function fn ∈ V by

fn(x) =
  0,                     x < 1/(n+1),
  0,                     1/n < x,
  2n(n+1)x − 2n,         1/(n+1) ≤ x ≤ (1/2)(1/n + 1/(n+1)),
  −2n(n+1)x + 2n + 2,    (1/2)(1/n + 1/(n+1)) < x ≤ 1/n.

Every linear combination ∑_{j=1}^k λjfj is a continuous function that has the value λj at (1/2)(1/j + 1/(j+1)). Thus, the equation ∑_{j=1}^k λjfj = 0 ∈ V implies that all λj must be zero, so that f1, . . . , fk ∈ V are linearly independent for all k ∈ N. Consequently, dim(V) = ∞.
