Theorem 2.4.7 The vectors $\{|e_i\rangle, J|e_i\rangle\}_{i=1}^m$, with $N = 2m$, form an orthonormal basis for the real vector space $V$ with inner product $\langle\,\cdot\,|\,\cdot\,\rangle_{\mathbb{R}} = \langle\,\cdot\,|\,\cdot\,\rangle$. In particular, $V$ must be even-dimensional for it to have a complex structure $J$.
Definition 2.4.8 If $V$ is a real vector space, then $\mathbb{C}\otimes V$, together with the complex multiplication rule
$$\alpha\bigl(\beta\otimes|a\rangle\bigr) = (\alpha\beta)\otimes|a\rangle, \qquad \alpha,\beta\in\mathbb{C},$$
is a complex vector space called the complexification of $V$ and denoted by $V^{\mathbb{C}}$. In particular, $(\mathbb{R}^n)^{\mathbb{C}} \equiv \mathbb{C}\otimes\mathbb{R}^n \cong \mathbb{C}^n$.
Note that $\dim_{\mathbb{C}} V^{\mathbb{C}} = \dim_{\mathbb{R}} V$ and $\dim_{\mathbb{R}} V^{\mathbb{C}} = 2\dim_{\mathbb{R}} V$. In fact, if $\{|a_k\rangle\}_{k=1}^N$ is a basis of $V$, then it is also a basis of $V^{\mathbb{C}}$ as a complex vector space, while $\{|a_k\rangle, i|a_k\rangle\}_{k=1}^N$ is a basis of $V^{\mathbb{C}}$ as a real vector space.
After complexifying a real vector space $V$ with inner product $\langle\,\cdot\,|\,\cdot\,\rangle_{\mathbb{R}} = \langle\,\cdot\,|\,\cdot\,\rangle$, we can define an inner product on $V^{\mathbb{C}}$ which is sesquilinear (or hermitian) as follows:
$$\langle \alpha\otimes a\,|\,\beta\otimes b\rangle \equiv \bar{\alpha}\beta\,\langle a|b\rangle.$$
It is left to the reader to show that this inner product satisfies all the properties given in Definition 2.2.1.
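As a quick worked check of this rule (an example of ours, not from the text), take $V=\mathbb{R}^2$ with the standard inner product and the standard basis vectors $|a\rangle=(1,0)$, $|b\rangle=(0,1)$. Then
$$\bigl\langle (2+i)\otimes a \,\big|\, (1-i)\otimes a \bigr\rangle = \overline{(2+i)}\,(1-i)\,\langle a|a\rangle = (2-i)(1-i) = 1-3i,$$
while $\bigl\langle (2+i)\otimes a \,\big|\, (1-i)\otimes b \bigr\rangle = (2-i)(1-i)\langle a|b\rangle = 0$, since $|a\rangle$ and $|b\rangle$ are orthogonal in $V$.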
To complexify a real vector space $V$, we have to “multiply” it by the set of complex numbers: $V^{\mathbb{C}} = \mathbb{C}\otimes V$. As a result, we get a real vector space of twice the original dimension. Is there a reverse process, a “division” of a (necessarily even-dimensional) real vector space? That is, is there a way of getting, from an even-dimensional real vector space, a complex vector space whose complex dimension is half the real dimension we started with?
Let $V$ be a $2m$-dimensional real vector space. Let $J$ be a complex structure on $V$, and $\{|e_i\rangle, J|e_i\rangle\}_{i=1}^m$ a basis of $V$. On the subspace $V_1 \equiv \operatorname{Span}\{|e_i\rangle\}_{i=1}^m$, define the multiplication by a complex number by
$$(\alpha+i\beta)\otimes|v_1\rangle \equiv (\alpha\mathbf{1}+\beta J)|v_1\rangle, \qquad \alpha,\beta\in\mathbb{R},\ |v_1\rangle\in V_1. \tag{2.22}$$
It is straightforward to show that this process turns the $2m$-dimensional real vector space $V$ into the $m$-dimensional complex vector space $V_1^{\mathbb{C}}$.
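To make the complex structure concrete, here is a small NumPy sketch (an illustration of ours, not part of the text; the particular matrix $J$ is one convenient choice) that checks $J^2=-\mathbf{1}$ and verifies that the multiplication rule (2.22) behaves like multiplication by complex scalars:

```python
import numpy as np

m = 2                                   # so V = R^{2m} = R^4
Z, I_m = np.zeros((m, m)), np.eye(m)

# One possible complex structure J on R^{2m}: it satisfies J^2 = -1
J = np.block([[Z, -I_m],
              [I_m, Z]])
assert np.allclose(J @ J, -np.eye(2 * m))

def cmul(z, v):
    """Multiplication by a complex scalar as in Eq. (2.22): (a + ib)|v> = (a 1 + b J)|v>."""
    return z.real * v + z.imag * (J @ v)

v = np.array([1.0, 2.0, 3.0, 4.0])
# Multiplying by i twice is multiplication by -1 ...
assert np.allclose(cmul(1j, cmul(1j, v)), -v)
# ... and successive multiplications compose like products of complex numbers
assert np.allclose(cmul(2 + 3j, cmul(1 - 1j, v)), cmul((2 + 3j) * (1 - 1j), v))
```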
2.5 Linear Functionals

Example 2.5.1 The following are examples of linear functionals.

(a) Let $|a\rangle = (\alpha_1, \alpha_2, \ldots, \alpha_n)$ be in $\mathbb{C}^n$. Define $\phi: \mathbb{C}^n \to \mathbb{C}$ by
$$\phi\bigl(|a\rangle\bigr) = \sum_{k=1}^n \alpha_k.$$
Then it is easy to show that $\phi$ is a linear functional.
(b) Let $\mu_{ij}$ denote the elements of an $m\times n$ matrix $M$. Define $\omega: \mathcal{M}^{m\times n}\to\mathbb{C}$ by
$$\omega(M) = \sum_{i=1}^m \sum_{j=1}^n \mu_{ij}.$$
Then it is easy to show that $\omega$ is a linear functional.
(c) Let $\mu_{ij}$ denote the elements of an $n\times n$ matrix $M$. Define $\theta: \mathcal{M}^{n\times n}\to\mathbb{C}$ by
$$\theta(M) = \sum_{j=1}^n \mu_{jj},$$
the sum of the diagonal elements of $M$. Then it is routine to show that $\theta$ is a linear functional.
(d) Define the operator $\operatorname{int}: \mathcal{C}^0(a,b)\to\mathbb{R}$ by
$$\operatorname{int}(f) = \int_a^b f(t)\,dt.$$
Then $\operatorname{int}$ is a linear functional on the vector space $\mathcal{C}^0(a,b)$; that is, integration is a linear functional on the space of continuous functions.
(e) Let $V$ be a complex inner product space. Fix $|a\rangle\in V$, and let $\gamma_a: V\to\mathbb{C}$ be defined by
$$\gamma_a\bigl(|b\rangle\bigr) = \langle a|b\rangle.$$
Then one can show that $\gamma_a$ is a linear functional.
(f) Let $\{|a_1\rangle, |a_2\rangle, \ldots, |a_m\rangle\}$ be an arbitrary finite set of vectors in $V$, and $\{\phi_1, \phi_2, \ldots, \phi_m\}$ an arbitrary set of linear functionals on $V$. Let
$$A \equiv \sum_{k=1}^m |a_k\rangle\,\phi_k \in \operatorname{End}(V)$$
be defined by
$$A|x\rangle = \left(\sum_{k=1}^m |a_k\rangle\,\phi_k\right)|x\rangle = \sum_{k=1}^m \phi_k\bigl(|x\rangle\bigr)\,|a_k\rangle.$$
Then $A$ is a linear operator on $V$.
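As a concrete illustration of several of these examples (a NumPy sketch of ours, not part of the text; the names `phi`, `theta`, `gamma`, and `make_A` are chosen here for convenience), the following snippet implements items (a), (c), (e), and (f) and spot-checks their linearity numerically:

```python
import numpy as np

# (a) phi : C^n -> C, the sum of the components of |a>
def phi(a):
    return np.sum(a)

# (c) theta : M^{n x n} -> C, the trace (sum of the diagonal elements)
def theta(M):
    return np.trace(M)

# (e) gamma_a : V -> C with gamma_a(|b>) = <a|b>; np.vdot conjugates its first argument
def gamma(a):
    return lambda b: np.vdot(a, b)

# (f) A = sum_k |a_k> phi_k, acting as A|x> = sum_k phi_k(|x>) |a_k>
def make_A(kets, functionals):
    def A(x):
        return sum(f(x) * ket for ket, f in zip(kets, functionals))
    return A

rng = np.random.default_rng(0)
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
alpha, beta = 2 - 1j, 0.5 + 3j

# Linearity of phi: phi(alpha u + beta v) = alpha phi(u) + beta phi(v)
assert np.isclose(phi(alpha * u + beta * v), alpha * phi(u) + beta * phi(v))

# Linearity of the trace functional theta on matrices
M1, M2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
assert np.isclose(theta(2 * M1 + 5 * M2), 2 * theta(M1) + 5 * theta(M2))

# The operator A built from kets and functionals is linear as well
A = make_A([u, v], [phi, gamma(v)])
assert np.allclose(A(alpha * u + beta * v), alpha * A(u) + beta * A(v))
```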
An example of linear isomorphism is that between a vector space and its dual space, which we discuss now. Consider an $N$-dimensional vector space with a basis $B = \{|a_1\rangle, |a_2\rangle, \ldots, |a_N\rangle\}$. For any given set of $N$ scalars $\{\alpha_1, \alpha_2, \ldots, \alpha_N\}$, define the linear functional $\phi_\alpha$ by $\phi_\alpha|a_i\rangle = \alpha_i$. When $\phi_\alpha$ acts on an arbitrary vector $|b\rangle = \sum_{i=1}^N \beta_i|a_i\rangle$ in $V$, the result is
$$\phi_\alpha|b\rangle = \phi_\alpha\!\left(\sum_{i=1}^N \beta_i|a_i\rangle\right) = \sum_{i=1}^N \beta_i\,\phi_\alpha|a_i\rangle = \sum_{i=1}^N \beta_i\alpha_i. \tag{2.23}$$
This expression suggests that $|b\rangle$ can be represented as a column vector with entries $\beta_1, \beta_2, \ldots, \beta_N$ and $\phi_\alpha$ as a row vector with entries $\alpha_1, \alpha_2, \ldots, \alpha_N$. Then $\phi_\alpha|b\rangle$ is merely the matrix product¹⁴ of the row vector (on the left) and the column vector (on the right).
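For instance (a small NumPy sketch of ours, not from the text), with $N=3$, $\alpha=(1,2,3)$, and $\beta=(4,5,6)$ the matrix product of the row and column vectors reproduces $\sum_i \beta_i\alpha_i = 32$:

```python
import numpy as np

alpha = np.array([1, 2, 3])      # scalars defining the functional phi_alpha
beta = np.array([4, 5, 6])       # components of |b> in the basis B

row = alpha.reshape(1, -1)       # phi_alpha as a 1 x N row vector
col = beta.reshape(-1, 1)        # |b> as an N x 1 column vector

assert (row @ col).item() == 32  # the matrix product ...
assert np.dot(alpha, beta) == 32 # ... equals sum_i beta_i alpha_i
```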
$\phi_\alpha$ is uniquely determined by the set $\{\alpha_1, \alpha_2, \ldots, \alpha_N\}$. In other words, corresponding to every set of $N$ scalars there exists a unique linear functional. This leads us to a particular set of functionals $\phi_1, \phi_2, \ldots, \phi_N$ corresponding, respectively, to the sets of scalars $\{1,0,0,\ldots,0\}$, $\{0,1,0,\ldots,0\}$, \ldots, $\{0,0,0,\ldots,1\}$. This means that
$$\phi_1|a_1\rangle = 1 \quad\text{and}\quad \phi_1|a_j\rangle = 0 \ \text{ for } j\neq 1,$$
$$\phi_2|a_2\rangle = 1 \quad\text{and}\quad \phi_2|a_j\rangle = 0 \ \text{ for } j\neq 2,$$
$$\vdots$$
$$\phi_N|a_N\rangle = 1 \quad\text{and}\quad \phi_N|a_j\rangle = 0 \ \text{ for } j\neq N,$$
or that
$$\phi_i|a_j\rangle = \delta_{ij}, \tag{2.24}$$
where $\delta_{ij}$ is the Kronecker delta.
The functionals of Eq. (2.24) form a basis of the dual space $V^*$. To show this, consider an arbitrary $\gamma\in V^*$, which is uniquely determined by its action on the vectors of a basis $B = \{|a_1\rangle, |a_2\rangle, \ldots, |a_N\rangle\}$. Let $\gamma|a_i\rangle = \gamma_i \in\mathbb{C}$. Then we claim that $\gamma = \sum_{i=1}^N \gamma_i\phi_i$. In fact, consider an arbitrary vector $|a\rangle$ in $V$ with components $(\alpha_1, \alpha_2, \ldots, \alpha_N)$ with respect to $B$. Then, on the one hand,
$$\gamma|a\rangle = \gamma\!\left(\sum_{i=1}^N \alpha_i|a_i\rangle\right) = \sum_{i=1}^N \alpha_i\,\gamma|a_i\rangle = \sum_{i=1}^N \alpha_i\gamma_i.$$
On the other hand,
$$\left(\sum_{i=1}^N \gamma_i\phi_i\right)\!|a\rangle = \sum_{i=1}^N \gamma_i\,\phi_i\!\left(\sum_{j=1}^N \alpha_j|a_j\rangle\right) = \sum_{i=1}^N \gamma_i\sum_{j=1}^N \alpha_j\,\phi_i|a_j\rangle = \sum_{i=1}^N \gamma_i\sum_{j=1}^N \alpha_j\delta_{ij} = \sum_{i=1}^N \gamma_i\alpha_i.$$
¹⁴Matrices will be taken up in Chap. 5. Here, we assume only a nodding familiarity with elementary matrix operations.
Since the actions of $\gamma$ and $\sum_{i=1}^N \gamma_i\phi_i$ yield equal results for arbitrary $|a\rangle$, we conclude that $\gamma = \sum_{i=1}^N \gamma_i\phi_i$, i.e., $\{\phi_i\}_{i=1}^N$ span $V^*$. Thus, we have the following result.
Theorem 2.5.2 For every basis $B = \{|a_j\rangle\}_{j=1}^N$ in $V$, there corresponds a unique basis $B^* = \{\phi_i\}_{i=1}^N$ in $V^*$ with the property that $\phi_i|a_j\rangle = \delta_{ij}$.
By this theorem the dual space of an $N$-dimensional vector space is also $N$-dimensional, and thus isomorphic to it. The basis $B^*$ is called the dual basis of $B$. A corollary to Theorem 2.5.2 is that to every vector in $V$ there corresponds a unique linear functional in $V^*$. This can be seen by noting that every vector $|a\rangle$ is uniquely determined by its components $(\alpha_1, \alpha_2, \ldots, \alpha_N)$ in a basis $B$. The unique linear functional $\phi_a$ corresponding to $|a\rangle$, also called the dual of $|a\rangle$, is simply $\sum_{i=1}^N \alpha_i\phi_i$, with $\phi_i\in B^*$.
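Computationally, if the basis vectors $|a_j\rangle$ of $\mathbb{C}^N$ are arranged as the columns of a matrix, the dual-basis functionals are the rows of its inverse, since the defining relation $\phi_i|a_j\rangle=\delta_{ij}$ is exactly the statement that the product of the two matrices is the identity. A small NumPy sketch of ours (the particular basis is chosen arbitrarily for illustration):

```python
import numpy as np

# Basis vectors |a_1>, |a_2>, |a_3> of C^3 as the columns of B
B = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=complex)

# Dual-basis functionals phi_i = rows of B^{-1}:  phi_i(|a_j>) = (B^{-1} B)_{ij} = delta_ij
Phi = np.linalg.inv(B)
assert np.allclose(Phi @ B, np.eye(3))

# The dual of |a> with components alpha_i in the basis B is phi_a = sum_i alpha_i phi_i,
# and phi_a(|b>) = sum_i alpha_i beta_i for |b> with components beta_i (cf. Eq. 2.23)
alpha = np.array([2, -1, 3], dtype=complex)
beta = np.array([1, 4, 0], dtype=complex)
phi_a = alpha @ Phi              # phi_a as a row vector in standard coordinates
b_std = B @ beta                 # |b> in standard coordinates
assert np.isclose(phi_a @ b_std, alpha @ beta)
```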
Definition 2.5.3 An annihilator of $|a\rangle\in V$ is a linear functional $\phi\in V^*$ such that $\phi|a\rangle = 0$. Let $W$ be a subspace of $V$. The set of linear functionals in $V^*$ that annihilate all vectors in $W$ is denoted by $W^0$.
The reader may check that $W^0$ is a subspace of $V^*$. Moreover, if we extend a basis $\{|a_i\rangle\}_{i=1}^k$ of $W$ to a basis $B = \{|a_i\rangle\}_{i=1}^N$ of $V$, then we can show that the functionals $\{\phi_j\}_{j=k+1}^N$, chosen from the basis $B^* = \{\phi_j\}_{j=1}^N$ dual to $B$, span $W^0$. It then follows that
$$\dim V = \dim W + \dim W^0. \tag{2.25}$$
We shall have occasions to use annihilators later on when we discuss symplectic geometry.
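Numerically, a functional (row vector) annihilates $W$ exactly when it lies in the left null space of a matrix whose columns span $W$; the following NumPy sketch of ours (with an arbitrarily chosen $W\subset\mathbb{R}^4$) extracts $W^0$ from an SVD and checks the dimension count (2.25):

```python
import numpy as np

# W = span of the two columns of W_mat inside V = R^4
W_mat = np.array([[1., 0.],
                  [0., 1.],
                  [1., 1.],
                  [2., 3.]])

# A row vector phi annihilates W  iff  phi @ W_mat = 0, i.e. phi^T lies in the
# null space of W_mat^T; the SVD gives that null space as the trailing rows of Vt.
U, s, Vt = np.linalg.svd(W_mat.T)
rank = int(np.sum(s > 1e-12))             # dim W
W0 = Vt[rank:]                            # rows form a basis of the annihilator W^0
assert np.allclose(W0 @ W_mat, 0)

# dim V = dim W + dim W^0  (Eq. 2.25): here 4 = 2 + 2
assert W_mat.shape[0] == rank + W0.shape[0]
```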
We have “dualed” a vector, a basis, and a complete vector space. The only object remaining is a linear transformation.
Definition 2.5.4 Let $T: V\to U$ be a linear map. Define $T^*: U^*\to V^*$ by¹⁵
$$\bigl(T^*(\gamma)\bigr)|a\rangle = \gamma\bigl(T|a\rangle\bigr) \qquad \forall\,|a\rangle\in V,\ \gamma\in U^*.$$
$T^*$ is called the dual, or pullback, of $T$.
One can readily verify that $T^*\in\mathcal{L}(U^*, V^*)$, i.e., that $T^*$ is a linear map from $U^*$ to $V^*$. Some of the mapping properties of $T^*$ are tied to those of $T$. To see this we first consider the kernel of $T^*$. Clearly, $\gamma$ is in the kernel of $T^*$ if and only if $\gamma$ annihilates all vectors of the form $T|a\rangle$, i.e., all vectors in $T(V)$. It follows that $\gamma$ is in $T(V)^0$. In particular, if $T$ is surjective, then $T(V) = U$, and $\gamma$ annihilates all vectors in $U$, i.e., it is the zero linear functional. We conclude that $\ker T^* = 0$, and therefore $T^*$ is injective. Similarly, one can show that if $T$ is injective, then $T^*$ is surjective. We summarize the discussion above:
¹⁵Do not confuse this “*” with complex conjugation.
Proposition 2.5.5 Let $T$ be a linear transformation and $T^*$ its pullback. Then $\ker T^* = T(V)^0$. If $T$ is surjective (injective), then $T^*$ is injective (surjective). In particular, $T^*$ is an isomorphism if $T$ is.
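In matrix language, if $T$ is represented by a matrix acting on column vectors and functionals by row vectors, the pullback is simply $\gamma\mapsto\gamma T$ (right-multiplication by the matrix of $T$). A short NumPy sketch of ours with an arbitrary surjective $T:\mathbb{R}^3\to\mathbb{R}^2$:

```python
import numpy as np

# T : R^3 -> R^2 as a 2 x 3 matrix acting on column vectors
T = np.array([[1., 0., 2.],
              [0., 1., 1.]])

gamma = np.array([3., -1.])      # a functional on U = R^2, written as a row vector
a = np.array([1., 2., 3.])       # a vector in V = R^3

# Defining property of the pullback: (T* gamma)(|a>) = gamma(T|a>)
assert np.isclose((gamma @ T) @ a, gamma @ (T @ a))

# T is surjective (rank 2), so the pullback gamma -> gamma @ T is injective:
# its matrix is T^T, whose rank equals dim U* = 2.
assert np.linalg.matrix_rank(T) == 2
```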
It is useful to make a connection between the inner product and linear functionals. To do this, fix a vector $|a\rangle$, consider a basis $\{|a_1\rangle, |a_2\rangle, \ldots, |a_N\rangle\}$, and let $\alpha_i = \langle a|a_i\rangle$. As noted earlier, the set of scalars $\{\alpha_i\}_{i=1}^N$ defines a unique linear functional $\gamma_a$ (see Example 2.5.1) such that $\gamma_a|a_i\rangle = \alpha_i$. Since $\langle a|a_i\rangle$ is also equal to $\alpha_i$, it is natural to identify $\gamma_a$ with the symbol $\langle a|$, and write $\gamma_a\to\langle a|$.
It is also convenient to introduce the notation¹⁶
$$|a\rangle^\dagger \equiv \langle a|, \tag{2.26}$$
where the symbol $\dagger$ means “dual, or dagger, of”. Now we ask: How does this dagger operation act on a linear combination of vectors? Let $|c\rangle = \alpha|a\rangle + \beta|b\rangle$ and take the inner product of $|c\rangle$ with an arbitrary vector $|x\rangle$ using linearity in the second factor: $\langle x|c\rangle = \alpha\langle x|a\rangle + \beta\langle x|b\rangle$. Now complex conjugate both sides and use the (sesqui)symmetry of the inner product:
$$(\text{LHS})^* = \langle x|c\rangle^* = \langle c|x\rangle,$$
$$(\text{RHS})^* = \alpha^*\langle x|a\rangle^* + \beta^*\langle x|b\rangle^* = \alpha^*\langle a|x\rangle + \beta^*\langle b|x\rangle = \bigl(\alpha^*\langle a| + \beta^*\langle b|\bigr)|x\rangle.$$
Since this is true for all $|x\rangle$, we must have $(|c\rangle)^\dagger \equiv \langle c| = \alpha^*\langle a| + \beta^*\langle b|$. Therefore, in a duality “operation” the complex scalars must be conjugated. So, we have
$$\bigl(\alpha|a\rangle + \beta|b\rangle\bigr)^\dagger = \alpha^*\langle a| + \beta^*\langle b|. \tag{2.27}$$
Thus, unlike the association $|a\rangle\to\gamma_a$, which is linear, the association $\gamma_a\to\langle a|$ is not linear, but sesquilinear:
$$\gamma_{\alpha a+\beta b}\to \alpha^*\langle a| + \beta^*\langle b|.$$
It is convenient to represent $|a\rangle\in\mathbb{C}^n$ as a column vector
$$|a\rangle = \begin{pmatrix}\alpha_1\\ \alpha_2\\ \vdots\\ \alpha_n\end{pmatrix}.$$
Then the definition of the complex inner product suggests that the dual of $|a\rangle$ must be represented as a row vector with complex conjugate entries:
$$\langle a| = \bigl(\alpha_1^*\ \ \alpha_2^*\ \ \ldots\ \ \alpha_n^*\bigr), \tag{2.28}$$
¹⁶The significance of this notation will become clear in Sect. 4.3.
and the inner product can be written as the (matrix) product
$$\langle a|b\rangle = \bigl(\alpha_1^*\ \ \alpha_2^*\ \ \cdots\ \ \alpha_n^*\bigr)\begin{pmatrix}\beta_1\\ \beta_2\\ \vdots\\ \beta_n\end{pmatrix} = \sum_{i=1}^n \alpha_i^*\beta_i.$$
Compare (2.28) with the comments after (2.23). The complex conjugation in (2.28) is the result of the sesquilinearity of the association $|a\rangle\leftrightarrow\langle a|$.
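To close, a small NumPy check of ours (the vectors are arbitrary) of the bra–ket matrix product and of the conjugation rule (2.27):

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 0 + 1j])

# <a| is the conjugated row vector of |a>, so <a|b> is a row-times-column matrix product
bra_a = a.conj().reshape(1, -1)
ket_b = b.reshape(-1, 1)
assert np.isclose((bra_a @ ket_b).item(), np.vdot(a, b))   # sum_i alpha_i^* beta_i

# Sesquilinearity of the dagger, Eq. (2.27): (alpha|a> + beta|b>)^dagger = alpha^* <a| + beta^* <b|
alpha, beta = 2 + 1j, -1j
lhs = (alpha * a + beta * b).conj()
rhs = np.conj(alpha) * a.conj() + np.conj(beta) * b.conj()
assert np.allclose(lhs, rhs)
```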